March 30, 2020

Computing and digital crafting should be accessible to all! This imperative inspires the mission that Ubuntu has been pursuing for nearly two decades now. The Raspberry Pi Foundation is pursuing a similar mission with the single-board, low-cost and high-performance Raspberry Pi computers. With our commitment to official Ubuntu support for the Raspberry Pi, we want to accelerate the commodification of digital innovation.

Besides bringing the benefits of modern GNU/Linux, Ubuntu delivers the latest and greatest free and open source software to the Raspberry Pi. Ubuntu also brings versatile options for software packaging, delivery and updates. Users will benefit from frequently and reliably published software and long-term support. Ubuntu provides innovators – in their garage, in schools, in labs or in the enterprise – with a robust software infrastructure to create exciting solutions on their Raspberry Pi.

Progress to date

Since our last roadmap update, we’ve extended our support to cover a large proportion of the 30+ million Raspberry Pi family devices shipped. The Raspberry Pi 2, 3 and 4 models and the Compute Modules are supported by the latest release of Ubuntu (19.10), with 32-bit images for all models and 64-bit images for the Pi 3 and Pi 4. Ubuntu Core and the most recent long-term-support (LTS) release of Ubuntu (18.04) are similarly available, with enterprise and commercial use cases in mind.

The way forward

Going forward, new releases of Ubuntu will automatically support the latest Raspberry Pi device models. We will also strive to make Ubuntu available from day one for any new Raspberry Pi model.

Our future support efforts will be centred around advancing computing education, fostering the digital maker culture, improving developers’ productivity, and accelerating enterprise innovation.

Advancing computing education

Through our support, we will strengthen the role of the Raspberry Pi as a vehicle for introducing digital technologies. Our focus will be on university students and technology professionals.

Ubuntu is a leading platform for innovation in the cloud and at the edge. As such, it is a channel of choice for introducing emerging digital technologies. We will make the following key technologies more accessible to educational audiences through the Raspberry Pi:

  • Cloud native technologies like Kubernetes, containers, and microservices
  • Edge computing frameworks that interface with public and private clouds
  • IoT platforms for creating and managing sensing applications

We will leverage the convenience of snaps to make these technologies easy to discover, deploy and experiment with. Furthermore, we will increase the educational content pertaining to these technologies, and we will ramp up collaboration with educators and training platforms around open digital training resources.
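As a small illustration of how low that barrier already is, the commands below install MicroK8s from the Snap Store on a Raspberry Pi running Ubuntu (a minimal sketch; the choice of snap is illustrative):

# install MicroK8s, a lightweight Kubernetes, from the Snap Store
sudo snap install microk8s --classic
# wait until the cluster services report ready
sudo microk8s.status --wait-ready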

Fostering digital maker culture

Canonical will be launching two new services intended for makers and hobbyists by the second half of this year. Digital makers and do-it-yourself (DIY) hobbyists value solving problems themselves and sharing knowledge in communities. Ubuntu and the Raspberry Pi are already popular in these communities, and with our upcoming services we want to help catalyse the digital maker/DIY culture.

A catalog of open source appliances

We are collaborating with developers of popular open source applications to create an online catalog of appliance images for the Raspberry Pi. These appliance images will be optimised to harness the full capabilities of the Raspberry Pi.

The catalog will offer open source appliance images in popular categories such as smart home, smart speakers, network security, storage, desktop, gaming, media servers, robotics, 3D printing and more.

This catalog will enable makers and DIY enthusiasts to build Raspberry Pi based tech appliances on their own, for use at home, at work or to offer to friends and family. Being built on Ubuntu Core, these appliances will benefit from strong security and automatic over-the-air software updates.

An online image composition service

We will also deploy an online service for composing and remotely building custom images of Ubuntu Core for the Raspberry Pi. The service will allow makers to select any application from the snap store to build a custom Ubuntu Core image in the cloud for personal use.

This service will make it trivial for any DIY enthusiast to compose and build custom operating system images for the Raspberry Pi. This will be particularly useful to those with modest or no coding experience.
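For reference, a comparable image can already be built locally with the ubuntu-image tool; the sketch below assumes you have a signed Ubuntu Core model assertion (my-pi.model) and pre-seeds one extra application snap, both names being illustrative:

# ubuntu-image itself is distributed as a snap
sudo snap install ubuntu-image --classic
# build an Ubuntu Core image from a signed model assertion,
# pre-seeding one additional application snap
ubuntu-image snap my-pi.model --snap nextcloud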

Improving developers’ productivity

We want to help developers get things done on the Raspberry Pi, so we are working to improve developers’ productivity on the platform. To this end, the following tools will be added to upcoming releases of Ubuntu.

A new configuration tool

A tool that will improve the experience of setting up the Raspberry Pi for use with Ubuntu is in development. This tool will cover all the configurable parameters of the Raspberry Pi.

Software utilities

Software utilities for managing Raspberry Pi peripherals such as displays, cameras, Bluetooth and HAT modules will be included in upcoming releases of Ubuntu. Hardware debugging utilities will also be provided.

Accelerating enterprise innovation

We aim to boost the adoption of the Raspberry Pi in the enterprise. The Raspberry Pi is a suitable platform for introducing the internet of things to the enterprise in an agile manner.

Requirements for innovation in the enterprise differ in that security and reliability are indispensable even in the early stages. We will be supporting enterprise innovation on Raspberry Pi in two main ways.

Ubuntu support for industrial-grade Raspberry Pis

The Compute Modules are already supported by Ubuntu, and this support will continue. Beyond the Compute Module, we anticipate that upcoming Raspberry Pi models (Model 4 onwards) will be increasingly suitable for industrial use cases, thanks to continuous capability upgrades. We will strive for day-one support and LTS availability on these devices.

We will also strive to partner with hardware vendors of custom industrial-grade Raspberry Pi boards to make Ubuntu Core and Ubuntu Server LTS available for distribution on their boards.

Supporting agile innovation in the enterprise

Within the next few months, we will be introducing a commercial package that will allow enterprises to launch and scale IoT projects in an agile way, based on industrial-grade Raspberry Pis. The offering will include engineering support for the creation of enterprise-grade images of custom applications, over-the-air software management services, dedicated hosting services, technical support, as well as training and consulting.

on March 30, 2020 02:43 PM

Ep 83 – 1 2 3 experiência som

Podcast Ubuntu Portugal

Episode 83: 1 2 3 experiência som. A little more about what we have been doing at home over the last few weeks, plus news from the UBports project. You know the drill: listen, comment and share!

  • https://ansol.org/COVID-19
  • https://covid-19.ansol.org/
  • https://ubuntu.com/wsl
  • https://soundcloud.com/ubports/app-dev-audiocast-ubucon-talks
  • https://t.me/ubuntufrannonce/55
  • https://ubuntu-paris.org/
  • https://www.youtube.com/watch?v=v6U_YNvGcWQ

Support

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

You can support the podcast by using the Humble Bundle affiliate links: when you use them to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay whatever you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 30, 2020 02:25 PM

Due to the rapidly developing Coronavirus (COVID-19) situation, the entire web team has transitioned to 100% remote work for the foreseeable future. Canonical is well set up to remain productive, but remote working brings design challenges, such as group sketching, for which we are testing and evaluating solutions.

Here are some of the highlights of our completed work over the previous iteration.

Meet the team

Zihe:

Hi, I’m Zihe (子和)!

I love museums, shows and, yes, all kinds of food. Having spent a lot of time travelling, I see the world with boundless imagination and a unique perception.

I graduated from University College London with a Master’s Degree in Human-Computer Interaction. Previously I was a UX designer on a wellbeing management app and then a smart home IoT project.

I joined Canonical in February 2020, based in our lovely London office (oh I miss you BlueFin). Currently, I’m working on Vanilla React components and JaaS dashboard 🙂

Web squad

Our Web Squad develops and maintains most of Canonical’s promotional sites.

Whitepapers, case studies and webinars

As usual, we created pages for a set of new resources, including:

Rigado cuts customers’ time-to-market with Ubuntu Core and AWS

Virtual event: MicroK8s & Kubernetes

CVEs

We are working on a new section of ubuntu.com to list CVEs, to replace the existing list in people.canonical.com.

This iteration we designed the information schema for the new section, and we will continue next iteration.

Brand

Lots of Marketing support and template creation in this iteration alongside our Brand Hierarchy project. Below are a few other things we worked on:

Illustrations for OpenStack distro page:

Cube logo and branding, initial designs and exploration:

Focal Fossa social assets:

MAAS

The MAAS squad develops the UI for the MAAS project.

Searching and filtering, grouping persistence

We have been cracking on with finishing up the machine listing work in React. Huw has done an amazing job adding searching and filtering, as well as introducing in-browser persistence for the grouping functionality. We also introduced the ability to share filtered (and searched-for) results, as those are now reflected in the URL of the page.

User testing report 

On the UX end of things, we completed analysing the results from the user testing sessions in Frankfurt, created a report from them, and now have a list of goals – both quick wins and larger areas to explore – ready to begin work on.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI  projects.

JAAS.ai

The team started working on the strategy for the new content for jaas.ai. After the release of juju.is, the website dedicated to the Juju project, and the release of the new JAAS dashboard, the content and marketing pages of jaas.ai need to be updated.

The team is working with product managers and the engineering team to organise and create the new content.

JAAS dashboard

The team is continuing to work on the implementation of new features in the JAAS dashboard. For this iteration we completed: 

Topology Integration

The first iteration of the model topology integration is complete; the responsive bug that was affecting the topology on smaller screens is now fixed and will be released live soon.

User testing for grouping and filtering

As a follow-up from previous user testing, some additional quick guerrilla tests were run to see if users can perform grouping and filtering tasks using the existing components on the top bar. New findings and suggestions will be discussed to further refine the design of the JAAS dashboard as well as the Vanilla React components.

Vanilla

The Vanilla squad designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

Side navigation

For the last couple of weeks, we’ve been working on designing and building a new side navigation component that can be used across our products.

The side navigation component is going to be part of the Vanilla 2.9 release. It includes a simple plain-text version, with optional icons and a dark theme.

Grid for IE11

We worked on improving support for the Vanilla grid in IE11. Since we implemented the grid using CSS grid features, we have been struggling with IE11 bugs in that area. To avoid dealing with IE11’s partial support for CSS grid, we implemented a fallback using flexbox.

Snapcraft

The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store site.

User testing

Last month, during our Frankfurt Engineering sprint, we did some user testing on the Release interface of the publisher side of Snapcraft. During the past weeks, we have been analysing the results of the test in order to come up with improvements for the next cycle.

Despite the challenges of working remotely, the Snap squad met to read the user test transcripts together and extract key takeaways from them in the form of quotes. We then classified these into overarching themes and translated those into actionable items.

GitHub Builds

We made some improvements to the GitHub builds process and identified some other areas that need work. Automatic builds, triggered when you push code to your repo, should now be more stable and succeed more often. Some improvements were also made to the feedback users receive about builds.

We’re currently working on a poller script that will build snaps automatically if the snap’s parts are built from GitHub and have been updated. This should land next week – so expect to see some auto-builds happening soon.

Maintenance

CharmHub

Progress has been made on CharmHub: most notably, a mobile version of the site is being designed, while the desktop build, currently using static data, is nearing completion.

Team posts:

We wish good health for you and your families during these uncertain times.

With ♥ from Canonical web team.

on March 30, 2020 08:58 AM

March 28, 2020

S13E01 – Thirteen

Ubuntu Podcast from the UK LoCo

This week the band is back together. We’ve been bringing new life into the universe and interconnecting chat systems. Distros are clad in new wallpapers, Raspberry Pis are being clustered with MicroK8s and the VR game industry has been revolutionised.

It’s Season 13 Episode 01 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on March 28, 2020 11:00 PM
This is experimental, but I went ahead and ran the following after reading about it on reddit:

snap install fwupdmgr
/snap/bin/fwupdmgr install https://fwupd.org/downloads/cbe7b45a2591e9d149e00cd4bbf0ccbe5bb95da7-Synaptics-Prometheus_Config-0021.cab
/snap/bin/fwupdmgr install https://fwupd.org/downloads/3b5102b3430329a10a3636b4a594fc3dd2bfdc09-Synaptics-Prometheus-10.02.3110269.cab

These two cab files are referenced from:

  • https://fwupd.org/lvfs/devices/com.synaptics.prometheus.config
  • https://fwupd.org/lvfs/devices/com.synaptics.prometheus.firmware

Rebooted and then went ahead with the actual fingerprint setup. After this was all done, login with fingerprints just worked. The only downside is that you need to press a key first to bring up the unlock logic.
on March 28, 2020 10:38 PM

March 27, 2020

Full Circle Magazine #155

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Ubuntu & Security, and Rawtherapee [NEW!]
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback: nomadBSD
* Everyday Ubuntu
* Review : QNAP NAS
* Ubuntu Games : Asciiker
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot!

https://fullcirclemagazine.org/issue-155/

on March 27, 2020 05:01 PM

March 26, 2020

Kubuntu 20.04 Testing Week

The Kubuntu team is delighted to announce an ‘Ubuntu Testing Week’ from April 2nd to April 8th with other flavors in the Ubuntu family. April 2nd is the beta release of what will become Kubuntu 20.04 and during this week, there will be a freeze on changes to features, the user interface and documentation. Between April 2nd and final release on April 23rd, the Kubuntu team and community will focus on ISO testing, bug reporting, and fixing bugs. Please join the community by downloading the daily ISO image and trying it out, even beginning today.

QA tracker: http://iso.qa.ubuntu.com/qatracker/milestones/408/builds

From this main page, click on the ‘Kubuntu Desktop amd64’ link to arrive at the testcases page. On the testcases page, you can download the ISO by clicking the ‘Link to the download information’ and report test results to the various test cases for Kubuntu. If you see other flavors needing testing on the main page, please test for them as well.

Chat live on IRC (#ubuntu-quality) or Telegram (UbuntuTesters: https://t.me/UbuntuTesters) if you like, during this time of pandemic social distancing.

If you have no spare computer to use for testing, no problem! You can test without changing your system by running it in a VM (Virtual Machine) with software like VirtualBox, or by running the live session from a USB or DVD, which also lets you check whether your hardware works correctly. We encourage those who are willing to install it, either in a VM or on physical hardware (it requires at least 6GB of hard disk space), and to use it continuously for a few days, as more bugs can be exposed and reported this way.
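If you already have QEMU/KVM installed, one quick way to boot the daily ISO in a VM is shown below (a sketch; adjust the memory size and the ISO path to suit your machine):

$ qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -boot d -cdrom focal-desktop-amd64.iso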

The easy way to report a bug is to open Konsole (press Alt+Space and type konsole, or use Menu > Konsole) and then type `ubuntu-bug packagename`, where packagename is the program or application in which you experience the bug.

If you prefer working in the terminal, open a virtual console (terminal) by pressing Ctrl+Alt+F2 (or F3, F4, etc.) and type `ubuntu-bug packagename`, where packagename is the program or application in which you experience the bug. Press Ctrl+Alt+F1 to return to your desktop. If a crash has landed you in the terminal, log in with your usual user name and password and report the bug as above.

Here is a nice youtube video showing the entire process, including one way to figure out what packagename is appropriate in GNOME: https://www.youtube.com/watch?v=CjTyzyY9RHw

Using ‘ubuntu-bug’ will automatically upload the error logs and/or other files that developers need to fix the bug to Launchpad. By the way, the installer’s packagename is ubiquity; experience tells us that it is the most useful packagename to know for ISO testing when things go wrong with the installation. The live session software package is casper, should you encounter bugs affecting the live session itself rather than individual programs. Bugs in other programs should be filed against their own packages, for instance firefox, dolphin, vlc, etc. Only the bug *number* is needed when reporting the results of a test on the QA tracker.
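For example, using the packagenames just described:

$ ubuntu-bug ubiquity   # something went wrong during installation
$ ubuntu-bug casper     # the live session itself misbehaved
$ ubuntu-bug firefox    # a bug in a specific application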

Please test programs / applications that you regularly use, so you can identify bugs and regressions that should be reported. New ISO files are built every day; always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO with the command below (first right-click on the ISO’s folder in Dolphin and select ‘Open in Terminal’) or just open konsole or yakuake and `cd path-to-ISO-folder`. Zsync downloads only changes, so it’s very quick.
$ zsync http://cdimage.ubuntu.com/kubuntu/daily-live/current/focal-desktop-amd64.iso.zsync

on March 26, 2020 09:21 PM

Lockdown

Jonathan Carter

I just took my dog for a nice long walk. It’s the last walk we’ll be taking for the next 3 weeks, he already starts moping around if we just skip one day of walking, so I’m going to have to get creative keeping him entertained. My entire country is going on lockdown starting at midnight. People won’t be allowed to leave their homes unless it’s for medical emergencies, to buy food or if their work has been deemed essential.

With the Covid-19 epidemic nearing half a million confirmed infections, this has become quite common in the world right now: about a quarter of the world’s population is currently under lockdown and confined to their homes.

Some people may have noticed I’ve been a bit absent recently; I’ve been going through some really rough personal stuff. I’m dealing with it and I’ll be ok, but please practice some patience with me in the immediate future if you’re waiting on anything.

I have a lot of things going on in Debian right now. It helps keep me busy through all the turmoil and gives me something positive to focus on. I’m running for Debian Project Leader (DPL); I haven’t been able to put quite the energy into my campaign that I would have liked, but I think it’s going ok under the circumstances. I think that, because of everything happening in the world, it’s been more difficult for other Debianites to participate in debian-vote discussions as well. Recently we also announced Debian Social, a project that’s still in its early phases, but which we’ve been trying to get going for about 2 years, so it’s nice to finally see it shaping up. There are also plans to put Debian Social and some additional tooling to the test, with the idea of hosting a MiniDebConf entirely online. No dates have been confirmed yet, and we still have a lot of crucial bits to figure out, but you can subscribe to debian-devel-announce and Debian micronews for updates as soon as more information is available.

To everyone out there, stay safe, keep your physical distance for now and don’t lose hope, things will get better again.

on March 26, 2020 03:09 PM

We’re delighted to announce that we’re participating in an ‘Ubuntu Testing Week’ from April 2nd to April 8th with other flavors in the Ubuntu family. On April 2nd, we’ll be releasing the beta release of Xubuntu 20.04 LTS, after halting all new changes to its features, user interface and documentation. And between April 2nd and the final release on April 23rd, all efforts by the Xubuntu team and community are focused on ISO testing, bug reporting, and fixing bugs.

So, we highly encourage you to join the community by downloading the daily ISO image and trying it out, though you are welcome to start from today. There are a variety of ways that you can help test the release, including trying out the various testcases for live sessions and installations on the ISO tracker (Xubuntu is found at the bottom of the page), which take less than 30 minutes to complete (example 1, example 2, example 3 below).

You can test without changing your system by running it in a VM (Virtual Machine) with software like VMware Player, VirtualBox, or GNOME Boxes (the latter two installable via apt), or by running the live session from a USB drive, SD card, or DVD, which also lets you check whether your hardware works correctly. There are a number of tools, like Etcher and GNOME Disks, that can copy the ISO to a USB drive or SD card. We encourage those who are willing to install it, either in a VM or on physical hardware (it requires at least 6GB of hard disk space), and to use it continuously for a few days, as more bugs can be reported this way.
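For example, on Linux the ISO can be written to a USB drive from the command line with dd (a sketch; replace /dev/sdX with your USB device and double-check it first, since the device is overwritten entirely):

$ sudo dd if=focal-desktop-amd64.iso of=/dev/sdX bs=4M status=progress conv=fsync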

If you find a bug in the installer, you can file it against ubiquity; if you find a bug not in an application but in the live session itself, from booting to shutdown, you can file it against casper. If you can’t figure out which package to file a bug against after watching the video above, then please file it with the Xubuntu Bugs Team.

Please test apps that you regularly use, so you can identify bugs and regressions that should be reported. New ISO files are built every day, and you should always test with the most up-to-date ISO. It is easier and faster to update an existing daily ISO file on Linux by running the command below in the folder containing the ISO, after right-clicking on the folder and selecting ‘Open in Terminal’ from the context menu (example).

$ zsync http://cdimage.ubuntu.com/xubuntu/daily-live/current/focal-desktop-amd64.iso.zsync

In order to assist you in your testing efforts, we encourage you to read our Quality Assurance (QA) guide and our new testers wiki. You can also chat with us live in our dedicated IRC channel (#ubuntu-quality on freenode) or Telegram group (Ubuntu Testers). In order to submit reports to us, you’ll need a Launchpad account, and once you have one, you can also join the Xubuntu Testers team.

We hope that you will join the community in making Xubuntu 20.04 a success, and hope that you will also take time to test out the other Ubuntu flavors (Kubuntu, Lubuntu, Ubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, and Ubuntu Studio), as we will all benefit from that. We look forward to your contributions, your live chatting and your return for future testing sessions. Happy bug hunting.

on March 26, 2020 06:00 AM

March 25, 2020

As every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, 226 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA gave back 12 out of his assigned 14h, thus he is carrying over 2h for March.
  • Ben Hutchings did 19.25h (out of 20h assigned), thus carrying over 0.75h to March.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Dylan Aïssi did 5.5h (out of 4h assigned and 1.5h from January).
  • Emilio Pozuelo Monfort did 29h (out of 20h assigned and 15.75h from January), thus carrying over 6.75h to March.
  • Hugo Lefeuvre gave back the 12h he got assigned.
  • Markus Koschany did 10h (out of 20h assigned and 8.75h from January), thus carrying over 18.75h to March.
  • Mike Gabriel did 5.75h (out of 20h assigned) and gave 12h back to the pool, thus he is carrying over 2.25h to March.
  • Ola Lundqvist did 10h (out of 8h assigned and 4.5h from January), thus carrying over 2.5h to March.
  • Roberto C. Sánchez did 20.25h (out of 20h assigned and 13h from January) and gave back 12.75h to the pool.
  • Sylvain Beucler did 20h (out of 20h assigned).
  • Thorsten Alteholz did 20h (out of 20h assigned).
  • Utkarsh Gupta did 20h (out of 20h assigned).

Evolution of the situation

February began as a rather calm month, and the fact that more contributors have given back unused hours is an indicator of this calmness, and also an indicator that contributing to LTS has become more of a routine now, which is good.

In the second half of February Holger Levsen (from LTS) and Salvatore Bonaccorso (from the Debian Security Team) met at SnowCamp in Italy and discussed tensions and possible improvements from and for Debian LTS.

The security tracker currently lists 25 packages with a known CVE and the dla-needed.txt file has 21 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on March 25, 2020 04:44 PM

Ep 82 – Corsários e Capitães

Podcast Ubuntu Portugal

Nuno do Carmo, our favourite corsair, is back to tell us how WSLConf went. Stay tuned to find out what a Capitão (Captain) is. You know the drill: listen, comment and share!

  • https://ansol.org/COVID-19
  • https://covid-19.ansol.org/
  • https://ubuntu.com/blog/wslconf-the-first-conference-dedicated-to-windows-subsystem-for-linux-goes-virtual
  • https://www.wslconf.dev/
  • https://twitter.com/nunixtech
  • https://wsl.dev/

Support

This episode was produced and edited by Alexandre Carrapiço, Senhor Podcast.

You can support the podcast by using the Humble Bundle affiliate links: when you use them to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay whatever you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on March 25, 2020 10:07 AM

Over time, there have been a number of approaches to indicating the original client and the route that a request took when forwarded across multiple proxy servers. For HTTP(S), the three most common approaches you’re likely to encounter are the X-Forwarded-For and Forwarded HTTP headers, and the PROXY protocol. They’re all a little bit different, but also the same in many ways.

X-Forwarded-For

X-Forwarded-For is the oldest of the 3 solutions, and was probably introduced by the Squid caching proxy server. As the X- prefix implies, it’s not an official standard (i.e., an IETF RFC). The header is an HTTP multi-valued header, which means that it can have one or more values, each separated by a comma. Each proxy server should append the IP address of the host from which it received the request. The resulting header looks something like:

X-Forwarded-For: client, proxy1, proxy2

This would be a request that has passed through 3 proxy servers – the IP of the 3rd proxy (the one closest to the application server) would be the IP seen by the application server itself. (Often referred to as the “remote address” or REMOTE_ADDR in many application programming contexts.)

So, you could end up seeing something like this:

X-Forwarded-For: 2001:DB8::6, 192.0.2.1

This might come from a TCP connection from 127.0.0.1. It implies that the client had the IPv6 address 2001:DB8::6 when connecting to the first proxy; that proxy then used IPv4 to connect from 192.0.2.1 to the final proxy, which was running on localhost. A proxy running on localhost might be nginx splitting traffic between static content and the application, or a proxy performing TLS termination.

Forwarded

The HTTP Forwarded header was standardized in RFC 7239 in 2014 as a way to better express the X-Forwarded-For header and related X-Forwarded-Proto and X-Forwarded-Port headers. Like X-Forwarded-For, this is a multi-valued header, so it consists of one or more comma-separated values. Each value is, itself, a set of key-value pairs, with pairs separated by semicolons (;) and the keys and values separated by equals signs (=). If the values contain any special characters, the value must be quoted.

The general syntax might look like this:

Forwarded: for=client, for=proxy1, for=proxy2

The key-value pairs are necessary to allow expressing not only the client, but the protocol used, the original HTTP Host header, and the interface on the proxy where the request came in. For figuring out the routing of our request (and for parity with the X-Forwarded-For header), we’re mostly interested in the field named for. While you might think it’s possible to just extract this key-value pair and look at the values, the authors of the RFC added some extra complexity here.

The RFC contains provisions for “obfuscated identifiers.” It seems this is mostly intended to prevent revealing information about internal networks when using forward proxies (e.g., to public servers), but you might even see it when operating reverse proxies. According to the RFC, these should be prefixed by an underscore (_), but I can imagine cases where this would not be respected, so you’d need to be prepared for that when parsing the identifiers.

The RFC also contains provisions for unknown upstreams, identified as unknown. This is used to indicate forwarding by a proxy in some manner that prevented identifying the upstream source (maybe it was through a TCP load balancer first).

Finally, there’s also the fact that, unlike the de facto standard X-Forwarded-For, Forwarded allows the option of including the port number on which the request was received. Because of this, IPv6 addresses are enclosed in square brackets ([]) and quoted.

The example from the X-Forwarded-For section above written using the Forwarded header might look like:

Forwarded: for="[2001:DB8::6]:1337", for=192.0.2.1;proto=https

Additional examples taken from the RFC:

Forwarded: for="_gazonk"
Forwarded: For="[2001:db8:cafe::17]:4711"
Forwarded: for=192.0.2.60;proto=http;by=203.0.113.43
Forwarded: for=192.0.2.43, for=198.51.100.17

PROXY Protocol

At this point, you may have noticed that both of these headers are HTTP headers, and so can only be modified by L7/HTTP proxies or load balancers. If you use a pure TCP load balancer, you’ll lose the information about the source of the traffic coming in to you. This is particularly a problem when forwarding HTTPS connections where you don’t want to offload your TLS termination (perhaps traffic is going via an untrusted 3rd party) but you still want information about the client.

To that end, the developers of HAProxy developed the PROXY protocol. There are (currently) two versions of this protocol, but I’ll focus on the simpler (and more widely deployed) 1st version. The proxy should add a line at the very beginning of the TCP connection in the following format:

PROXY <protocol> <srcip> <dstip> <srcport> <dstport>\r\n
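A concrete version 1 header for an IPv4 client would look like the following (addresses are illustrative, taken from the documentation ranges):

PROXY TCP4 192.0.2.43 203.0.113.7 56324 443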

Note that, unlike the HTTP headers, the PROXY protocol is not backwards compatible. Sending this header to a server not expecting it will cause things to break. Consequently, this header will be used exclusively by reverse proxies.

Additionally, there’s no support for information about the hops along the way – each proxy is expected to pass the same PROXY header through unchanged.

Version 2 of the PROXY protocol is a binary format with support for much more information beyond the version 1 header. I won’t go into details of the format (check the spec if you want) but the core security considerations are much the same.

Security Considerations

If you need (or want) to make use of these headers, there are some key security considerations for using them safely. This is particularly relevant if you use them for any sort of IP whitelisting or access control decisions.

Key to the problem is recognizing that the headers represent untrusted input to your application or system. Any of them could be forged by a client connecting, so you need to consider that.

Parsing Headers

After I spent so long telling you about the format of the headers, here’s where I tell you to disregard it all. Okay, really, you just need to be prepared to receive poorly-formatted headers. Some variation is allowed by the specifications/implementations: optional spaces, varying capitalization, etc. Some of this will be benign but still unexpected: multiple commas, multiple spaces, etc. Some of it will be erroneous: broken quoting, invalid tokens, hostnames instead of IPs, ports where they’re not expected, and so on.

None of this, however, precludes malicious input in the case of these headers. They may contain attempts at SQL Injection, Cross-Site Scripting and other malicious content, so one needs to be cautious in parsing and using the input from these headers.

Running a Proxy

As a proxy, you should consider whether you expect to be receiving these headers in your requests. You will only want that if you are expecting requests to be forwarded from another proxy, and then you should make sure the particular request came from your proxy by validating the source IP of the connection. As untrusted input, you cannot trust any headers from proxies not under your control.

If you are not expecting these headers, you should drop the headers from the request before passing it on. Blindly proxying them might cause downstream applications to trust their values when they come from your proxy, leading to false assertions about the source of the request.

Generally, you should rewrite the appropriate headers at your proxy, including adding the information on the source of the request to your proxy, before passing them on to the next stage. Most HTTP proxies have easy ways to manage this, so you don’t usually need to format the header yourself.

Running an Application

This is where it gets particularly tricky. If you’re using IP addresses for anything of significance (which you probably shouldn’t be, but it’s likely there’s some cases where people still are), you need to figure out whether you can trust these headers from incoming requests.

First off, if you’re not running the proxies: just don’t trust them. (Of course, I count a managed provider as run by you.) Also, if you’re not running the proxy, I hope we’re only talking about the PROXY protocol and you’re not exposing plaintext to untrusted 3rd parties.

If you are running proxies, you need to make sure the request actually came from one of your proxies by checking the IP of the direct TCP connection. This is the “remote address” in most web programming frameworks. If it’s not from your proxy, then you can’t trust the headers.

If it’s your proxy and you made sure not to trust incoming headers in your proxy (see above), then you can trust the full header. Otherwise, you can only trust the incoming hop to your proxy and anything before that is not trustworthy.

Man in the Middle Attacks

All of this disregards MITM attacks of course. If an attacker can inject traffic and spoof source IP addresses into your traffic, all bets on trusting headers are off. TLS will still help with header integrity, but they can still spoof the source address, convincing you to trust the headers in the request.

Bug Bounty Tip

Try inserting a few headers to see if you get different responses. Even if you don’t get a full authorization out of it, some applications will give you debug headers or other interesting behavior. Consider some of the following:

X-Forwarded-For: 127.0.0.1
X-Forwarded-For: 10.0.0.1
Forwarded: for="_localhost"
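A minimal way to try this from the command line, against a target you are authorised to test (the URL is a placeholder), is with curl:

$ curl -si -H 'X-Forwarded-For: 127.0.0.1' -H 'Forwarded: for="_localhost"' https://target.example/ | head -n 20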
on March 25, 2020 07:00 AM

March 24, 2020

Systems Failure At Main Mission

Stephen Michael Kellat

I am still alive. In a prior post I had mentioned that things had been changing rather rapidly. With a daily press conference by the Governor of Ohio there has been one new decree after another relative to the COVID-19 situation.

A “stay at home” order takes effect at 0359 hours Coordinated Universal Time on Tuesday, March 24, 2020. This is not quite a “lockdown” but it pretty much has me stuck. The State of Ohio has posted resources on economic help in this situation, but they’re also dealing with multiple system crashes as they try to react, and some of their solutions are extremely bureaucratic.

Although I wanted to get started with doing daily livestream on Twitch there have been some logistical delays. I am also having to scrape together what equipment I do have at home to set up make-shift production capacity since our proper production facility is now inaccessible for the immediate future. There is an Amazon wish list of replacement items to try to fill in gaps if anybody feels generous though I am not sure when/if those would show up in the current circumstances. That’s also why I’m having to encourage folks to either buy the Kindle version or buy the EPUB version of the novella since the print version is possibly not going to be available any time soon.

I have further testing of packages to do to see what I can make break. OBS Studio certainly does make the fan on my laptop go into high-speed action. Life here at Main Mission is getting stranger by the day. With debate ensuing about the economic carnage and possible economic disaster ahead, I can only note that I at least got this posted shortly before we entered lockdown.

The stay-at-home order gets reassessed on April 6th. It technically has no expiration date at the moment, so legally it could last until the current governor leaves office in 2023. I do hope we make progress in getting this mess resolved sooner rather than later.

on March 24, 2020 03:03 AM

March 23, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 623 for the week of March 15 – 21, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on March 23, 2020 10:22 PM

March 22, 2020

I’m trying something new – a “Security 101” series. I hope to make these topics readable for those with no security background. I’m going to pick topics that are either related to my other posts (such as foundational knowledge) or just things that I think are relevant or misunderstood.

Today, I want to cover Virtual Private Networks, commonly known as VPNs. First I want to talk about what they are and how they work, then about commercial VPN providers, and finally about common misconceptions.

VPN Basics

At the most basic level, a VPN is intended to provide a service that is equivalent to having a private network connection, such as a leased fiber, between two endpoints. The goal is to provide confidentiality and integrity for the traffic travelling between those endpoints, which is usually accomplished by cryptography (encryption).

The traffic tunneled by VPNs can operate at either Layer 2 (sometimes referred to as “bridging”) or Layer 3 (sometimes referred to as “routing”) of the OSI model. Layer 2 VPNs provide a more seamless experience between the two endpoints (e.g., device autodiscovery) but are less common and not supported on all platforms. Most VPN protocols operate at the application layer, but IPsec is an extension to IPv4, so it operates at Layer 3.

The most common VPN implementations you’re likely to run into are IPsec, OpenVPN, or Wireguard. I’ll cover these in my examples, as they’re the bulk of what individuals might be using for personal VPNs as well as the most common option for enterprise VPN. Other relatively common implementations are Cisco AnyConnect (and the related OpenConnect), L2TP, OpenSSH’s VPN implementation, and other ad-hoc (often TLS-based) protocols.

A Word on Routing

In order to understand how VPNs work, it’s useful to understand how routing works. Now, this isn’t an in-depth dive – there are entire books devoted to the topic – but it should cover the basics. I will only consider the endpoint case with typical routing use cases, and use IPv4 in all my examples, but the same core elements hold for IPv6.

IP addresses are the sole way the source and destination host for a packet are identified. Hostnames are not involved at all; that’s the job of DNS. Additionally, individual sub networks (subnets) are composed of an IP prefix and the “subnet mask”, which specifies how many leading bits of the IP refer to the network versus the individual host. For example, 192.168.1.10/24 indicates that the host is host number 10 in the subnet 192.168.1 (since the first 3 octets are a total of 24 bits long).

% ipcalc 192.168.1.10/24
Address:   192.168.1.10         11000000.10101000.00000001. 00001010
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Network:   192.168.1.0/24       11000000.10101000.00000001. 00000000

When your computer wants to send a packet to another computer, it has to figure out how to do so. If the two machines are on the same sub network, this is easy – the packet can be sent directly on the appropriate interface. So if the host with the IP 192.168.1.10/24 on its wireless network interface wants to send a packet to 192.168.1.22, it will just send it directly on that interface.

If, however, it wants to send a packet to 1.1.1.1, it will need to send it via a router (a device that routes packets from one network to another). Most often, this will be via the “default route”, sometimes represented as 0.0.0.0/0. This is the route used when the packet doesn’t match any other route. In between the extremes of the same network and the default route can be any number of other routes. When routing an outbound packet, the kernel picks the most specific route.

VPN Routing

Most typically, a VPN will be configured to route all traffic (i.e., the default) via the VPN server. This is often done by either a higher priority routing metric, or a more specific route. The more specific route may be done via two routes, one each for the top and bottom half of the IPv4 space. (0.0.0.0/1 and 128.0.0.0/1)

Of course, you need to make sure you can still reach the VPN server – routing traffic to the VPN server via the VPN won’t work. (No VPN-ception here!) So most VPN software will add a route specifically for your VPN server that goes via the default route outside the VPN (i.e., your local router).

For example, when connected via Wireguard, I have the following routing tables:

% ip route
default dev wg0 table 51820 scope link
default via 192.168.20.1 dev wlp3s0 proto dhcp metric 600
10.13.37.128/26 dev wg0 proto kernel scope link src 10.13.37.148
192.168.20.0/24 dev wlp3s0 proto kernel scope link src 192.168.20.21 metric 600

10.13.37.148/26 is the address and subnet for my VPN, and 192.168.20.21/24 is my local IP address on my local network. The routing table provides a default via wg0, my Wireguard interface. There’s a routing rule that prevents Wireguard’s own traffic from going over that route, so it falls through to the next route, which uses my home router (running pfSense) to reach the VPN server.

The VPN only provides its confidentiality and integrity for packets that travel via its route (and so go within the tunnel). The routing table is responsible for selecting whether a packet will go via the VPN tunnel or via the normal (e.g., non-encrypted) network interface.

Just for fun, I dropped my Wireguard VPN connection and switched to an OpenVPN connection to the same server. Here’s what the routing table looks like then (tun0 is the VPN interface):

% ip route
default via 10.13.37.1 dev tun0 proto static metric 50
default via 192.168.20.1 dev wlp3s0 proto dhcp metric 600
10.13.37.0/26 dev tun0 proto kernel scope link src 10.13.37.2 metric 50
10.13.37.0/24 via 10.13.37.1 dev tun0 proto static metric 50
198.51.100.6 via 192.168.20.1 dev wlp3s0 proto static metric 600
192.168.20.0/24 dev wlp3s0 proto kernel scope link src 192.168.20.21 metric 600
192.168.20.1 dev wlp3s0 proto static scope link metric 600

This is a little bit more complicated, but you’ll still note the two default routes. In this case, instead of using a routing rule, OpenVPN sets the metric of the VPN route to a lower value. You can think of a metric as being a cost to a route: if multiple routes are equally specific, then the lowest metric (cost) is the one selected by the kernel for routing the packet.

Otherwise, the routing table is very similar, but you’ll also notice the route specifically for the VPN server (198.51.100.6) is routed via my local gateway (192.168.20.1). This is how OpenVPN ensures that its packets (those encrypted and signed by the VPN client) are not routed over the VPN itself by the kernel.

Note that users of Docker or virtual machines are likely to see a number of additional routes going over the virtual interfaces to containers/VMs.

Using VPNs for “Privacy” vs “Security”

There are many reasons for using a VPN, but for many people, they boil down to being described as “Privacy” or “Security”. The single most important thing to remember is that the VPN offers no protection to data in transit between the VPN server and the remote server. Once the data reaches the remote server, it looks exactly the same as if it had been sent directly.

Some VPNs are just used to access private resources on the remote network (e.g., corporate VPNs), but a lot of VPN usage these days is routing all traffic, including internet traffic, over the VPN connection. I’ll mostly consider those scenarios below.

When talking about what a VPN gets you, you also need to consider your “threat model”. Specifically, who is your adversary and what do you want to prevent them from being able to do? Some common examples of concerns people have and where a VPN can actually benefit you include:

  • (Privacy) Prevent their ISP from being able to market their browsing data
  • (Security) Prevent man-in-the-middle attacks on public/shared WiFi
  • (Privacy) Prevent tracking by “changing” your IP address

Some scenarios that people want to achieve, but a VPN is ineffective for, include:

  • (Privacy) Preventing “anyone” from being able to see what sites you’re visiting
  • (Privacy) Prevent network-wide adversaries (e.g., governments) from tracking your browsing activity
  • (Privacy) Prevent all tracking of your browsing

Commercial VPNs

Commercial VPN providers have the advantage of mixing all of your traffic with that of their other customers. Typically, connections from a couple of dozen or more customers come out of the same IP address. They also come with no administration overhead, and often have servers in a variety of locations, which can be useful if you’d like to access geo-restricted content. (Please comply with the appropriate ToS, however.)

On the flip side, using a commercial VPN server has just moved the endpoint of your plaintext traffic to another point, so if privacy is your main concern, you’d better trust your VPN provider more than you trust your ISP.

If you’re after anonymity online, it’s important to consider who you’re seeking anonymity from. If you’re only concerned about advertisers, website operators, etc., then a commercial VPN helps provide a pseudonymous browsing profile compared to coming directly from your ISP-provided connection.

Rolling Your Own

Rolling your own gives you ultimate control of your VPN server, but it does require some technical know-how. I really like the approach of using Trail of Bits’ Algo on DigitalOcean for a fast custom VPN server. When rolling your own, you’re not competing with others for bandwidth, and you can choose a hosting provider in whatever location gives you the egress point you want.

Alternatively, you can set up either OpenVPN or Wireguard yourself. While Wireguard is considered cleaner and uses more modern cryptography, OpenVPN takes care of a few things (like IP address assignment) that Wireguard does not. Both are well-documented at this point and have clients available for a variety of platforms.
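As a rough sketch of the Wireguard workflow (assuming you have already written /etc/wireguard/wg0.conf with your addresses and the peer's public key), key generation and bringing the tunnel up look like this:

# generate a key pair (run on both the server and the client)
$ wg genkey | tee privatekey | wg pubkey > publickey
# bring up (and later tear down) the tunnel described in /etc/wireguard/wg0.conf
$ sudo wg-quick up wg0
$ sudo wg-quick down wg0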

Note that a private VPN generally does not have the advantage of mixing your traffic with that of others – you’re essentially moving your traffic from one place to another, but it’s still your traffic.

VPN Misconceptions

When people are new to the use of a VPN, there seems to be a lot of misconceptions about how they’re supposed to work and their properties.

VPNs Change Your IP Address

VPNs do not change the public IP address of your computer. While they do usually assign a new private IP for the tunnel interface, this IP is one that will never appear on the internet, so is not of concern to most users. What it does do is route your traffic via the tunnel so it emerges onto the public internet from another IP address (belonging to your VPN server).

VPN “Leaks”

Generally speaking, when someone refers to a VPN leak, they’re referring to the ability of a remote server to identify the public IP to which the endpoint is directly attached. For example, a server seeing the ISP-assigned IP address of your computer as the source of incoming packets can be seen as a “leak”.

These are not, generally, the fault of the VPN itself. They are usually caused by the routing rules your computer is using to determine how to send packets to their destination. You can test the routing rules with a command like:

% ip route get 8.8.8.8
8.8.8.8 dev wg0 table 51820 src 10.13.37.148 uid 1000
    cache

You can see that, in order to reach the IP 8.8.8.8 (Google’s DNS server), I’m routing packets via the wg0 interface – so out via the VPN. On the other hand, if I check something on my local network, you can see it will go directly:

% ip route get 192.168.20.1
192.168.20.1 dev wlp3s0 src 192.168.20.21 uid 1000
    cache

If you don’t see the VPN interface when you run ip route get <destination>, you’ll end up with traffic not going via the VPN, and so going directly to the destination server. Using ifconfig.co to test the IP being seen by servers, I’ll examine the two scenarios:

% host ifconfig.co
ifconfig.co has address 104.28.18.94
% ip route get 104.28.18.94
104.28.18.94 dev wgnu table 51820 src 10.13.37.148 uid 1000
    cache
% curl -4 ifconfig.co
198.51.100.6
... shutdown VPN ...
% ip route get 104.28.18.94
104.28.18.94 via 192.168.20.1 dev wlp3s0 src 192.168.20.21 uid 1000
    cache
% curl -4 ifconfig.co
192.0.2.44

Note that my real IP (192.0.2.44) is exposed to the ifconfig.co service when the route is not destined to go via the VPN. If you see a route via your local router to an IP, then that traffic is not going over a VPN client running on your local host.

Note that routing DNS outside the VPN (e.g., to your local DNS server) provides a trivial IP address leak. By merely requesting a DNS lookup to a unique hostname for your connection, the server can force an “IP leak” via DNS. There are other things that can potentially be seen as an “IP leak,” like WebRTC.
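On a systemd-resolved system, one quick check is to see which resolver each interface is using and then confirm that the route to that resolver goes via the VPN interface (the resolver address below is illustrative):

# list the DNS server(s) configured per interface
$ resolvectl dns
# confirm packets to that resolver would leave via the VPN interface (wg0 here)
$ ip route get 10.13.37.1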

VPN Killswitches

A VPN “killswitch” is a common option in 3rd-party clients. It endeavors to block any traffic not going through the VPN, or to block all traffic when the VPN connection is not active. This is not a core property of VPNs, but may be a property of a particular VPN client. (For example, this is not built in to the official OpenVPN or Wireguard clients, nor the IPsec implementations for either Windows or Linux.)

That being said, you could implement your own protection for this. For example, you could block all traffic on your physical interface except traffic between your computer and the VPN server. Using iptables, with your VPN server at 198.51.100.6 on UDP port 51820, a default-deny policy and a pair of rules like the following will block all other traffic from going out on any interface except interfaces whose names begin with wg:

# chain policies only accept built-in targets such as DROP, so default-deny with DROP
iptables -P OUTPUT DROP
iptables -A OUTPUT -p udp --dport 51820 -d 198.51.100.6 -j ACCEPT
iptables -A OUTPUT -o wg+ -j ACCEPT

VPNs Protect Against Nation-State Adversaries

There’s a lot of discussion on VPN and privacy forums about selecting “no logging” VPNs or VPN providers outside the “Five Eyes” (and the expanded selections of allied nations). To me, this indicates that these individuals are concerned about Nation-State level adversaries (i.e., NSA, GCHQ, etc.). First of all, consider whether you need that level of protection – maybe you’re doing something you shouldn’t be! However, I can understand the desire for privacy and the uneasy feeling of thinking someone is reading your conversations.

No single VPN will protect you against a nation-state adversary or most well-resourced adversaries. Almost all VPN providers receive the encrypted traffic and route the plaintext traffic back out via the same interface. In such a scenario, any adversary that can see all of the traffic there can correlate the traffic coming into and out of the VPN provider.

If you need effective protection against such an adversary, you’re best to look at something like Tor.

VPN Routers

One approach to avoiding local VPN configuration issues is to use a separate router that puts all of the clients connected to it through the VPN. This has several advantages, including easier implementation of a killswitch and support for clients that may not support VPN applications (e.g., smart devices, e-readers, etc.). If configured correctly, it can ensure no leaks (e.g., by only routing from its “LAN” side to the “VPN” side, and never from “LAN” to “WAN”).

I do this when travelling with a gl.inet AR750-S “Slate”. The stock firmware is based on OpenWRT, so you can choose to run a fully custom OpenWRT build (like I do) or the default firmware, which does support both WireGuard and OpenVPN. (Note that, being a low-power MIPS CPU, throughput will not match the raw throughput available from your computer’s CPU; however, it will still best the WiFi at the hotel or airport.)

VPNs are not a Panacea

Many people look for a VPN as an instant solution for privacy, security, or anonymity. Unfortunately, it’s not that simple. Understanding how VPNs work, how IP addresses work, how routing works, and what your threat model is will help you make a more informed decision. Just asking “is this secure” or “will I be anonymous” is not enough without considering the lengths your adversary is willing to go to.

Got a request for a Security 101 topic? Hit me up on Twitter.

on March 22, 2020 07:00 AM

March 19, 2020


I have just released procenv version 0.46. Although this is a very minor release for the existing platforms (essentially one bug fix), it introduces support for a new platform...

Darwin

Yup - OS X now joins the ranks of supported platforms.

Although adding support for Darwin was made significantly easier as a result of the recent internal restructure of the procenv code, it did present a challenge: I don't own any Apple hardware. I could have borrowed a Macbook, but instead I decided to see this as a challenge:

  • Could I port procenv to Darwin without actually having a local Apple system?
 Well, you've just read the answer, but how did I do this?

Stage 1: Docker


Whilst surfing around I came across an interesting docker image.


It provides a Darwin toolchain that I could run under Linux. It didn't take very long to follow my own instructions on porting procenv to a new platform. But although I ended up with a binary, I couldn't actually run it, partly because Darwin uses a different binary file format to Linux: rather than ELF, it uses the Mach-O format.



Stage 2: Travis

The final piece of the puzzle for me was solved by Travis. I'd read the very good documentation on their site, but had initially assumed that you could only build Objective-C based projects on OSX with Travis. But a quick test proved my assumption to be incorrect: it didn't take much more than adding "osx" to the os list and "clang" to the compiler list in procenv's .travis.yml to have procenv building and running (it runs itself as part of its build) on OSX under Travis!

Essentially, the following YAML snippet from procenv's .travis.yml did most of the work:

language: c
compiler:
  - gcc
  - clang
os:
  - linux
  - osx



All that remained was to add the build-time dependencies to the same file with this additional snippet:

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install expat check perl; fi


(Note that it seems Travis is rather picky about before_install - all code must be on a single line, hence the rather awkward-to-read "if; then ....; fi" tests).


Summary


Although I've never personally run procenv under OSX, I have got a good degree of confidence that it does actually work.

That said, it would be useful if someone could independently verify this claim on a real system! Feel free to raise bugs, send code (or even Apple hardware :-) my way!



on March 19, 2020 05:29 PM

March 18, 2020


I've been a full-time, work-from-home employee for the vast majority of the last 20 years, and 100% since 2008.

In this post, I'm going to share a few of the benefits and best practices that I've discovered over the years, and I'll share with you a shopping list of hardware and products that I have come to love or depend on, over the years.

I worked in a variety of different roles -- software engineer, engineering manager, product manager, and executive (CTO, VP Product, Chief Product Officer) -- and with a couple of different companies, big and small (IBM, Google, Canonical, Gazzang, and Apex).  In fact, I was one of IBM's early work-from-home interns, as a college student in 2000, when my summer internship manager allowed me to continue working when I went back to campus, and I used the ATT Global Network dial-up VPN client to "upload" my code to IBM's servers.

If there's anything positive to be gained out of our recent life changes, I hope that working from home will become much more widely accepted and broadly practiced around the world, in jobs and industries where it's possible.  Moreover, I hope that other jobs and industries will get even more creative and flexible with remote work arrangements, while maintaining work-life-balance, corporate security, and employee productivity.

In many cases, we would all have a healthier workplace, if everyone generally stayed home when they were feeling even just a bit unwell.  Over these next few weeks, I hope that many other people discover the joy, freedom, and productivity working from home provides.  Here are a few things that I've learned over the years, and some of the tools that I've acquired...


Benefits, Costs, and Mitigations

Benefits

  • Commute -- If you're like me, you hate sitting in traffic.  Or waiting on a train.  Erase your commute entirely when you work from home.  I love having an extra hour in the morning, to set out my day, and extra time in the evenings with my family.
  • Family -- Speaking of family, I'm adding this as a benefit all on its own.  I love being able to put my kids on the bus in the morning, and be home when they get home, and have quality time in the evenings with my spouse and daughters and dogs.  When I have worked in an office, I've often found that I've left for work before anyone else was awake, and I often got home after everyone was in bed.
  • Location -- Work-from-home, in most cases, usually means, work-from-anywhere.  While I spend the vast majority of my time actually working from my home office, I've found that I can be nearly as effective working from a hotel, coffee shop, airplane, my in-laws' house, etc.  It takes some of the same skills and disciplines, but once you break free of the corporate desk, you'll find you can get great work done anywhere.
  • Productivity -- Your mileage may vary, but I find I'm personally more productive in the comfort of my own home office, which has evolved to meet my needs.  Yes, I love my colleagues and my teams, and yes, I spend plenty of time traveling, on the road meeting them.

Costs and Mitigations

  • Work-life-balance -- This one is important, but it's not hard to fix.  Some people find it hard to separate work and home life, when working from home.  Indeed, you could find yourself "always on", and burn out.  Definitely don't do that.  See the best practices below for some suggestions on mitigating this one.
  • Space and Equipment -- There's quite literally a dollar cost, in some cases, to having the space and equipment necessary to work from home.  To mitigate this, you should look into any benefits your employer offers on computer equipment, and potentially speak to an accountant about tax deductions for dedicated space and hardware.  My office is a pretty modest 10'x12' (120 sqft), but it helps that I have big windows and a nice view.
  • Relationships -- It can seem a little lonely at home, sometimes, where you might miss out on some of the water cooler chatter and social lunches and happy hours.  You do have to work a little harder to forge and maintain some of those remote relationships.  I insist on seeing my team in person a couple of times a year (once a quarter at least, in most cases), and when I do, I try to ensure that we also hang out socially (breakfast, coffee, lunch, dinner, etc.) beyond just work.  It's amazing how far that will carry, into your next few dozen phone calls and teleconferences.
  • Kids -- (UPDATED: 2020-03-10) I'm adding this paragraph post publication, based on questions/comments I've received about how to make this work with kids.  I have two daughters (6 and 7 years old now), who are 18 months apart, so there was a while in there where I had two-under-two.  I'm not going to lie -- it was hard.  I'm blessed with a few advantages -- my wife is a stay-at-home-mom, and I have a dedicated room in my house which is my office.  It has a door that locks.  I actually replaced the cheap, contractor-grade hollow door, with a solid wood door, which greatly reduces noise.  When there is a lot of background noise, I switch from speakers-and-computer-mic, to my noise cancelling headset (more details below).  Sometimes, I even move all the way to the master bedroom (behind yet another set of doors).  I make ample use of the mute button (audio and/or video) while in conference meetings.  I also switch from the computer to the phone, and go outside sometimes.  In a couple of the extreme cases, where I really need silence and focus (e.g. job interviews), I'll sit in my car (in my own garage or at a nearby park), and tether my computer through my phone.  I've worked with colleagues who lived in small spaces and turned a corner of their own master bedroom into an office, with a hideaway desk, with a folding bracket and a butcher block.  My kids are now a little older, and sometimes they're just curious about what I'm doing.  If I'm not in a meeting, I try to make 5 minutes for them, and show them what I'm working on.  If I am in a meeting, and it's a 1:1 or time with a friendly colleague, I'll very briefly introduce them, let them say hi, and then move them along.  Part of the changes happening around the work-from-home shift is that we're all becoming more understanding of stuff like this.

 Best Practices

  • Dedicated space -- I find it essential to have just a bit of dedicated space for an office, that you and the family respect as your working space.  My office is about 8' x 12', with lots of natural light (two huge windows, but also shades that I can draw).  It hangs off of the master bedroom, and it has a door that locks.  Not that I have to lock it often, but sometimes, for important meetings, I do, and my family knows that I need a little extra quiet when my office door is locked.
  • Set your hours -- It's really easy to get swept away, into unreasonably long working days, when working from home, especially when you enjoy your job, or when things are extra busy, or when you have a boss or colleagues who are depending on you, or a 1000 other reasons.  It's essential to set your working hours, and do your best to get into a consistent rhythm.  I usually put the kids on the bus around 7am, and then either go for a run or play the piano for a bit, and then start my day pretty consistently at 7:30am, and generally try to wrap up by 5:30pm most days.  Of course there are always exceptions, but those are the expectations I usually set for myself and the people around me.
  • Get up and move around -- I do try to get up and move around a few times per day.  If I'm on a call that doesn't require a screen, I'll try to take at least one or two of those from my phone and go move around a bit.  Take a walk down the street or in the garden or even just around the house.  I also try to unplug my laptop and work for at least an hour a day somewhere else around the house (as long as no one else is around) -- perhaps the kitchen table or back porch or couch for a bit.  In the time that I spent at Google, I really came to appreciate all of the lovely bonus spaces where anyone can curl up and work from a laptop for a few hours.  I've tried to reproduce that around my house.
  • Greenery -- I think I worked from home for probably 4 years before I added the first house plant in my office.  It's so subtle, but wow, what a difference.  In my older age, I'm now a bit of a gardener (indoors and outside), and I try to keep at least a dozen or so bonsai trees, succulents, air plants, and other bits of greenery growing in my office.  If you need a little more Feng Shui in your life, check out this book.

Shopping List

  • Technology / Equipment
    • Computers
      • Macbook Pro 13 -- I use a 13" Apple Macbook Pro, assigned by my employer for work.  I never use it for personal purposes like Gmail, etc. at all, which I keep on a separate machine.
      • Thinkpad X250 -- I have an older Thinkpad running Ubuntu on my desk, almost always streaming CNBC on YouTube TV in full screen (muted).  Sometimes I'll flip it over to CNN or a similar news channel.
      • Dell T5600 -- I keep a pretty beefy desktop / server workstation running Ubuntu, with a separate keyboard and monitor, for personal mail and browsing.
    • Keyboard / Mouse
      • Thinkpad USB Keyboard -- I love the Thinkpad keyboard, and the USB version is a must-have for me.
      • Apple Wireless Keyboard and Trackpad and Tray -- I use the wireless Bluetooth keyboard and mouse pad for my work computer.  I find the tray essential, to keep the keyboard and mouse closely associated.
    • Monitors
      • Samsung 32" 4K UHD -- I use two monitors, one in portrait, one in landscape.  I really like Samsung, and these are the exact models I use: Gaming Monitor (LU32J590UQNXZA) – 60Hz Refresh, Widescreen Computer Monitor, 3840 x 2160p Resolution, 4ms Response, FreeSync, HDMI, Wall Mount.
      • Monitor Desk Mount -- And for those monitors, I use this desk mount, which is full motion, rotates, and attaches to my standing desk.
    • USB Hub
      • I use this dongle to connect my Macbook to the external 4K monitor, wired gigabit ethernet, and power supply.  This simple, single plug certainly helps me quickly and easily grab my laptop and move around the house during the day.
    • Laptop Stand
      • Nulaxy Ergonomic Riser -- I find this laptop stand helps get the camera angle on the top of my Macbook in a much better place, and also frees up some space on my desk.  I sometimes take both the laptop and the stand outside with me, if I need to relocate somewhere and take a couple of conference calls.
    • Network
    • Storage
      • Synology -- I generally keep copies of our family photo archive in Google Photos, as well as a backup here at home, as well.  I'm a fan of the Synology DS218, and the Western Digital Caviar Green 3TB hard drives.  Really nice, simple interface, and yet feature-rich.
    • Printer / Scanner
      • HP Officejet -- While I avoid printing as much as possible, sometimes it's just inevitable.  But, also, working-from-home, you'll find that you often need to scan documents.  You'll definitely need something with an automatic document feeder that can scan directly to PDF.  I like the HP Officejet Pro 9015, but if you're looking for a less expensive option, the HP Officejet 5255 is a fine printer/scanner too.
    • Speakers
      • Google Home Max -- I can't stress this enough: I find it extremely important to have a high quality, full range speaker that faithfully reproduces highs and lows.  I really need something much better than laptop speakers or cheap PC speakers.  Personally, I use a Google Home Max, with the Google Assistant microphone muted, and connected over Bluetooth.  I actually like it positioned behind me, believe it or not.  You could just as easily use an Amazon Echo or any other high quality Bluetooth speaker.
      • Bang and Olufsen Beoplay A9 -- This speaker is an absolute dream!  I used it in my office for a while, but eventually moved it to the family room where it's used much more for music and entertainment.  Besides sounding absolutely incredible, it's basically a work of art, beautiful to look at, in any room.
    • Headphones
      • Apple AirPods -- I use AirPods as my traveling headphones, mostly on planes.  I like that they're compact.  The short battery life leaves a lot to be desired, so I actually travel with two sets, and switch back and forth, recharging them in the case when I switch.
      • Bang and Olufsen Beoplay H9 -- Overwhelmingly, I use the Bluetooth speaker for my daily slate of teleconferences, meetings, and phone calls.  However, occasionally I need noise cancelling headphones.  The Beoplay H9i are my favorite -- outstanding comfort, excellent noise cancelling, and unbeatable audio quality.
      • Bose QuietComfort 35 ii -- These Bose headphones were my standards for years, until I gave them to my wife, and she fell in love with them.  They're hers now.  Having used both, I prefer the B&O, while she prefers the Bose.  
      • Wired headset with mic -- If you prefer a wired headset with a microphone, this gaming headset is a fantastic, inexpensive option.  Note that there's no noise cancellation, but they're quite comfortable and the audio quality is fine.
    • Webcam
      • Truth be told, at this point, I just use the web cam built into my Macbook.  The quality is much higher than that of my Thinkpad.  I like where it's mounted (top of the laptop screen).  While I connect the laptop to one of the external 4K monitors, I always use the 13" laptop screen as my dedicated screen for Zoom meetings.  I like that the built-in one just works.
      • Logitech -- All that said, I have used Logitech C920 webcams for years, and they're great, if you really want or need an external camera connected over USB.
    • Microphone
      • Like the Webcam, these days I'm just using the built-in mic on the Macbook.  I've tested a couple of different mics with recordings, and while the external ones do sound a little better, the difference is pretty subtle, and not worth the extra pain to connect them.
      • Blue Snowball -- Again, all that said, I do have, and occasionally use, a Blue Snowball condenser mic.  While subtle, it is definitely an upgrade over the laptop built-in microphone.
    • Phone
      • For many years working from home, I did have a wired home phone system.  I used Ooma and a Polycom Voicestation.  But about two years ago, I got rid of it all and deliberately moved to using Google Hangouts and Zoom for pretty much everything, and just using my cell phone (Pixel 3) for the rest.
  • Furniture / Appliances
    • Standing desk
      • Uplift (72"x30") -- While I don't always stand, I have become a believer in the standing desk!  I change my position a couple of times per day, going from standing to sitting, and vice versa.  I'm extremely happy with my Uplift Desk, which is based here in Austin, Texas.
      • Apex -- I don't have direct experience with this desk, but this was the one I was looking at, and seems quite similar to the Uplift desk that I ended up getting.
    • Desk mat
      • Aothia Cork and Leather -- I really love desk mats.  They're so nice to write on.  These add a splash of color, and protect the desk from the inevitable coffee spill.  I have a couple of these, and they're great!
    • Coffee machine
      • Nespresso -- Yes, I have a coffee machine in my office.  It's essential on those days when you're back-to-back packed with meetings.  While I love making a nice pot of coffee down in the kitchen, sometimes I just need to push a button and have a good-enough shot of espresso.  And that's what I get with this machine and these pods (I recently switched from the more expensive authentic Nespresso pods, and can't really tell the difference).
    • Coffee Mug
      • Ember -- I received an Ember coffee mug as a gift, and I've really come to appreciate this thing.  I don't think I would have bought it for myself on my own, but as a gift, it's great.  Sleek looking and rechargeable, it'll keep your coffee hot down to your last sip.
    • Water cooler
      • Primo -- And yes, I have a water cooler in my office.  This has really helped me drink more water during the day.  It's nice to have both chilled water, as well as hot water for tea, on demand.
    • White board
    • Chair
    • Light / Fan
      • Haiku L -- My office is extremely well lit, with two huge windows.  Overhead, I used to have a single, nasty canned light, which I replaced with this Haiku L ceiling fan and light, and it's just brilliant, with a dimmer, and voice controls.
    • Air purifier
      • HEPA Filter -- Some years ago, I added an air purifier to my office, mainly to handle the pet dander (two big dogs and a cat) that used to roam my office.  It's subtle, but generally something I'm glad to have pulling dust and germs out of the air.



Now I'm curious...  What did I miss?  What does your home office look like?  What are your favorite gadgets and appliances?

Disclosure: As an Amazon Associate I earn from qualifying purchases.

Cheers,
:-Dustin
on March 18, 2020 08:38 PM

March 17, 2020

No SMB1 to Share Devices

Harald Sitter

As it recently came up I thought I should perhaps post this more publicly…

As many of you will know, SMB1, the super ancient protocol for Windows shares, shouldn’t be used anymore. It’s been deprecated since like Windows Vista and was eventually also disabled by default in both Windows 10 and Samba. As a result you are not able to find servers that do not support either DNS-SD aka Bonjour aka Avahi, or WS-Discovery. But there’s an additional problem! Many devices (e.g. NAS) produced since the release of Vista could support newer versions of SMB but for not entirely obvious reasons do not have support for WS-Discovery-based … discovery. So, you could totally find and use a device without having to resort to SMB1 if you know its IP address. But who wants to remember IP addresses.

Instead you can just have another device on the network play discovery proxy! One of the many ARM boards out there, like a Raspberry Pi, would do the trick.

To publish a device over DNS-SD (for Linux & OS X) you’ll first want to map its IP address to a local hostname and then publish a service on that hostname.

avahi-publish -a blackbox.local 192.168.100.16
avahi-publish -s -H blackbox.local SMB_ON_BLACKBOX _smb._tcp 445

If you also want to publish for Windows 10 systems you’ll additionally want to run wsdd:

wsdd.py -v -n BLACKBOX

Do note that BLACKBOX in this case can be a NetBIOS, or LLMNR, or DNS-SD name (Windows 10 does support name resolution via DNS-SD these days). An unfortunate caveat of wsdd is that if you want to publish multiple devices you’ll need to set up a bridge and put the different wsdd instances on different interfaces.
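
To check that the published service is actually visible, you can browse for it from another Linux machine on the network (assuming the avahi-utils package is installed):

avahi-browse --resolve --terminate _smb._tcp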

on March 17, 2020 10:13 AM

March 16, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 622 for the week of March 8 – 14, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on March 16, 2020 08:51 PM

As you all must already know, I am a Linux enthusiast, especially when it comes to Ubuntu. But the truth is that each company I have worked for over the last five years has based its technology platform mostly on Windows Server operating systems.

Because of that, I have had to manage Windows servers, but little by little I am adding services running on Linux servers, whose great advantages all Linux users/administrators know. However, the integrations sometimes require PowerShell scripts, which is why I have installed what is needed to call PowerShell for WMI queries against Windows servers from my usual Bash, PHP or Python scripts. So, below is the step by step of how to install the PowerShell console on my Linux operating system (Ubuntu 18.04).

# Download the Microsoft repository GPG keys
wget -q https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb

# Register the Microsoft repository GPG keys
sudo dpkg -i packages-microsoft-prod.deb

# Update the list of products
sudo apt-get update

# Enable the "universe" repositories
sudo add-apt-repository universe

# Install PowerShell
sudo apt-get install -y powershell

# Start PowerShell
pwsh
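
Once installed, pwsh can also be driven non-interactively from a shell script, which is what makes the Bash/PHP/Python glue mentioned above possible. A minimal sketch (the commented-out remote CIM/WMI query is only indicative: it assumes WinRM/WSMan connectivity and credentials are already configured on the Windows side, and winserver01 is a hypothetical host name):

# Run a PowerShell command from Bash and print its output
pwsh -NoProfile -Command '$PSVersionTable.PSVersion'

# A remote query would look roughly like this (requires WinRM/WSMan setup):
# pwsh -NoProfile -Command 'Get-CimInstance Win32_OperatingSystem -ComputerName winserver01'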
on March 16, 2020 02:52 PM

On the Saturday just gone, I thought to myself: OK, better get some food in. The cupboards aren’t bare or anything, but my freezer was showing a distinct skew towards “things that go with the main bit of dinner” and away from “things that are the main bit of dinner”, which is a long way of saying: didn’t have any meat. So, off to online shopping!

I sorta alternate between Tesco and Sainsbury’s for grocery shopping; Tesco decided they wouldn’t deliver to the centre of the city for a little while, but they’re back on it now. Anyway, I was rather disillusioned to see that both of them had no delivery slots available for at least a week. It seems that not only are people panic-buying toilet roll, they’re panic-buying everything else too. I don’t want to wait a week. So, have a poke around some of the others… and they’re all the same. Asda, Morrisons, Ocado, Iceland… wait a week at least for a delivery. Amazon don’t do proper food in the UK — “Amazon Pantry” basically sells jars of sun-dried tomatoes and things, not actual food — and so I was a little stymied. Hm. What to do? And then I thought of the Co-op. Which turned out to be an enormously pleasant surprise.

Their online shopping thing is rather neat. There is considerably less selection than there is from the big supermarkets, it must be admitted. But the way you order shows quite a lot of thinking about user experience. You go to the Co-op quickshop and… put in your postcode. No signup required. And the delay is close to zero. It’s currently 2pm on Monday, and I fill in my postcode and it tells me that the next available slot is 4pm on Monday. Two hours from now. That’s flat-out impossible everywhere else; the big supermarkets will only have slots starting from tomorrow even in less trying times. You go through and add the things you want to buy and then fill in your card details to pay… and then a chap on a motorbike goes to the Co-op, picks up your order, and drives it to your place. I got a text message1 when the motorbike chap set off, before he’d even got to the Co-op, giving me a URL by which I could track his progress. I got messages as he picked up the shopping and headed for mine. He arrived and gave me the stuff. All done.

It seemed very community-focused, very grass-roots. They don’t do their own deliveries; they use a courier, but a very local one. The stuff’s put into bags by your local Co-op and then delivered directly to you with very little notice. They’re open about the process and what’s going on. It seems so much more personal than the big supermarkets do… which I suppose is the Co-op’s whole shtick in the first place, and it’s commendable that they’ve managed to keep that the case even though they’ve moved online. And while the Co-op is a nationwide organisation, it’s also rather local and community-focused. I’ll be shopping there again; shame on me that I had to be pushed into it this first time.

  1. the company that they use to be couriers are called Stuart. This was confusing!
on March 16, 2020 01:56 PM

This article isn’t about anything “new”, like the previous ones on AppStream – it rather exists to shine the spotlight on a feature I feel is underutilized. From conversations it appears that the reason simply is that people don’t know that it exists, and of course that’s a pretty bad reason not to make your life easier 😉

Mini-Disclaimer: I’ll be talking about appstreamcli, part of AppStream, in this blogpost exclusively. The appstream-util tool from the appstream-glib project has a similar functionality – check out its help text and look for appdata-to-news if you are interested in using it instead.

What is this about?

AppStream permits software to add release information to their MetaInfo files to describe current and upcoming releases. This feature has the following advantages:

  • Distribution-agnostic format for release descriptions
  • Provides versioning information for bundling systems (Flatpak, AppImage, …)
  • Release texts are short and end-user-centric, not technical as the ones provided by distributors usually are
  • Release texts are fully translatable using the normal localization workflow for MetaInfo files
  • Releases can link artifacts (built binaries, source code, …) and have additional machine-readable metadata e.g. one can tag a release as a development release

The disadvantage of all this is that humans have to maintain the release information. Also, people need to write XML for this. Of course, once humans are involved with any technology, things get a lot more complicated. That doesn’t mean we can’t make things easier for people to use though.

Did you know that you don’t actually have to edit the XML in order to update your release information? To make creating and maintaining release information as easy as possible, the appstreamcli utility has a few helpers built in. And the best thing is that appstreamcli, being part of AppStream, is available pretty ubiquitously on Linux distributions.

Update release information from NEWS data

The NEWS file is a not very well defined textfile that lists “user-visible changes worth mentioning” per each version. This maps pretty well to what AppStream release information should contain, so let’s generate that from a NEWS file!

Since the news format is not defined, but we need to parse this somehow, the amount of things appstreamcli can parse is very limited. We support a format in this style:

Version 0.2.0
~~~~~~~~~~~~~~
Released: 2020-03-14

Notes:
 * Important thing 1
 * Important thing 2

Features:
 * New/changed feature 1
 * New/changed feature 2 (Author Name)
 * ...

Bugfixes:
 * Bugfix 1
 * Bugfix 2
 * ...

Version 0.1.0
~~~~~~~~~~~~~~
Released: 2020-01-10

Features:
 * ...

When parsing a file like this, appstreamcli will allow a lot of errors/”imperfections” and account for quite a few style and string variations. You will need to check whether this format works for you. You can see it in use in appstream itself and libxmlb for a slightly different style.

So, how do you convert this? We first create our NEWS file, e.g. with this content:

Version 0.2.0
~~~~~~~~~~~~~~
Released: 2020-03-14

Bugfixes:
 * The CPU no longer overheats when you hold down spacebar

Version 0.1.0
~~~~~~~~~~~~~~
Released: 2020-01-10

Features:
 * Now plays a "zap" sound on every character input

For the MetaInfo file, we of course generate one using the MetaInfo Creator. Then we can run the following command to get a preview of the generated file: appstreamcli news-to-metainfo ./NEWS ./org.example.myapp.metainfo.xml - (note the single dash at the end – this is the explicit way of telling appstreamcli to print the result to stdout). This is what the result looks like:

<?xml version="1.0" encoding="utf-8"?>
<component type="desktop-application">
  [...]
  <releases>
    <release type="stable" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <p>This release fixes the following bug:</p>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This release adds the following features:</p>
        <ul>
          <li>Now plays a "zap" sound on every character input</li>
        </ul>
      </description>
    </release>
  </releases>
</component>

Neat! If we want to save this to a file instead, we just exchange the dash with a filename. And maybe we don’t want to add all releases of the past decade to the final XML? No problem too, just pass the --limit flag as well: appstreamcli news-to-metainfo --limit=6 ./NEWS ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml
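
Whichever way the file is generated, it is worth running the validator over the result to catch mistakes early; the validator ships with the same tool:

appstreamcli validate ./result/org.example.myapp.metainfo.xml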

That’s nice on its own, but we really don’t want to do this by hand… The best way to ensure the MetaInfo file is updated, is to simply run this command at build time to generate the final MetaInfo file. For the Meson build system you can achieve this with a code snippet like below (but for CMake this shouldn’t be an issue either – you could even make a nice macro for it there):

ascli_exe = find_program('appstreamcli')
metainfo_with_relinfo = custom_target('gen-metainfo-rel',
    input : ['./NEWS', 'org.example.myapp.metainfo.xml'],
    output : ['org.example.myapp.metainfo.xml'],
    command : [ascli_exe, 'news-to-metainfo', '--limit=6', '@INPUT0@', '@INPUT1@', '@OUTPUT@']
)

In order to also translate releases, you will need to add this to your .pot file generation workflow, so (x)gettext can run on the MetaInfo file with translations merged in.

Release information from YAML files

Since parsing a “no structure, somewhat human-readable file” is hard without baking an AI into appstreamcli, there is also a second option available: Generate the XML from a YAML file. YAML is easy to write for humans, but can also be parsed by machines. The YAML structure used here is specific to AppStream, but somewhat maps to the NEWS file contents as well as MetaInfo file data. That makes it more versatile, but in order to use it, you will need to opt into using YAML for writing news entries. If that’s okay for you to consider, read on!

A YAML release file has this structure:

---
Version: 0.2.0
Date: 2020-03-14
Type: development
Description:
- The CPU no longer overheats when you hold down spacebar
- Fixed bugs ABC and DEF
---
Version: 0.1.0
Date: 2020-01-10
Description: |-
  This is our first release!

  Now plays a "zap" sound on every character input

As you can see, the release date has to be an ISO 8601 string, just like it is assumed for NEWS files. Unlike in NEWS files, releases can be defined as either stable or development depending on whether they are a stable or development release, by specifying a Type field. If no Type field is present, stable is implicitly assumed. Each release has a description, which can either be a free-form multi-paragraph text, or a list of entries.

Converting the YAML example from above is as easy as using the exact same command that was used before for plain NEWS files: appstreamcli news-to-metainfo --limit=6 ./NEWS.yml ./org.example.myapp.metainfo.tmpl.xml ./result/org.example.myapp.metainfo.xml If appstreamcli fails to autodetect the format, you can help it by specifying it explicitly via the --format=yaml flag. This command would produce the following result:

<?xml version="1.0" encoding="utf-8"?>
<component type="console-application">
  [...]
  <releases>
    <release type="development" version="0.2.0" date="2020-03-14T00:00:00Z">
      <description>
        <ul>
          <li>The CPU no longer overheats when you hold down spacebar</li>
          <li>Fixed bugs ABC and DEF</li>
        </ul>
      </description>
    </release>
    <release type="stable" version="0.1.0" date="2020-01-10T00:00:00Z">
      <description>
        <p>This is our first release!</p>
        <p>Now plays a "zap" sound on every character input</p>
      </description>
    </release>
  </releases>
</component>

Note that the 0.2.0 release is now marked as development release, a thing which was not possible in the plain text NEWS file before.

Going the other way

Maybe you like writing XML, or have some other tool that generates the MetaInfo XML, or you have received your release information from some other source and want to convert it into text. AppStream also has a tool for that! Using appstreamcli metainfo-to-news <metainfo-file> <news-file> you can convert a MetaInfo file that has release entries into a text representation. If you don’t want appstreamcli to autodetect the right format, you can specify it via the --format=<text|yaml> switch.
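
For example, to turn a MetaInfo file from the earlier examples back into a YAML news file (the file names are just the ones used above):

appstreamcli metainfo-to-news --format=yaml ./result/org.example.myapp.metainfo.xml ./NEWS.yml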

Future considerations

The release handling is still not something I am entirely happy with. For example, the release information has to be written and translated at release time of the application. For some projects, this workflow isn’t practical. That’s why issue #240 exists in AppStream which basically requests an option to have release notes split out to a separate, remote location (and also translations, but that’s unlikely to happen). Having remote release information is something that will highly likely happen in some way, but implementing this will be a quite disruptive, if not breaking change. That is why I am holding this change back for the AppStream 1.0 release.

In the meanwhile, besides improving the XML form of release information, I also hope to support a few more NEWS text styles if they can be autodetected. The format of the systemd project may be a good candidate. The YAML release-notes format variant will also receive a few enhancements, e.g. for specifying a release URL. For all of these things, I very much welcome pull requests or issue reports. I can implement and maintain the things I use myself best, so if I don’t use something or don’t know about a feature many people want I won’t suddenly implement it or start to add features at random because “they may be useful”. That would be a recipe for disaster. This is why for these features in particular contributions from people who are using them in their own projects or want their new usecase represented are very welcome.

on March 16, 2020 11:40 AM

March 13, 2020

What is “Support”?

Ubuntu Studio

In technology, you hear the term “support” thrown around a lot. There are multiple definitions of that word. Here, we will focus on two. Development and Maintenance: the first one is related to development and maintenance. This is where the Ubuntu Studio development team comes in. That scope is rather... Continue reading
on March 13, 2020 11:21 PM

In light of the COVID-19 coronavirus, we look at how to telework using Free Software. In this episode we do a crossover between the podcasts NoLegalTech and Ubuntu y otras hierbas, with Bárbara, Jorge, Teruelo and Costales.

Ubuntu y otras hierbas

Listen to us on:

Websites of the software we talk about:

VPN

Communication tools

Tools for organizing work

Tools for collaborative work

NETCLOUD

Collection of other tools

on March 13, 2020 10:16 PM

When Events Overtake Planning

Stephen Michael Kellat

The COVID-19 whirlwind continues in Ohio. The Governor of Ohio, Mike DeWine, ordered today a closure of K-12 schools for three weeks as well as banning public gatherings of 100 or more people. During those three weeks schools will be making calamity preparations to continue teaching via remote learning methods if necessary. The ban on public gatherings of 100 or more people in Ohio has no expiration date attached to it although, as a practical matter, it technically expires when Governor DeWine leaves office if not revoked sooner. He’s in the second year of a four year term so an unmodified order could theoretically run until January 13, 2023.

Right now I am hurrying along trying to gather test participants to see what I can do with video conferencing in Microsoft Teams. I have a few people and will need a few more. Courtesy of UCLA’s Eugene Volokh, it appears that the Church of Jesus Christ of Latter Day Saints is suspending in-person church services across the entirety of the planet. They’re digging in for the long haul. The neighboring Roman Catholic diocese of Erie has granted dispensation relative to the obligation to attend Sunday mass for their faithful but the local diocesan for the Ashtabula area has not. The Roman Catholic diocese of Cleveland, also nearby Ashtabula, granted a smaller dispensation. Nobody specifically has asked for a plan at my church yet but normally I get asked at the very last minute so I need to be prepared.

Apparently there are a lot of articles already written on this topic and there already seems to be a shoe-string operations HOWTO. Strangely enough, turnkey solutions exist as well. These are things that normally would never even be considered as being appropriate in the life of the church I normally attend. Having to react to changing circumstances and politicians making things up as they go along means I am having to strike a happy medium as I go, especially as I have no budget of any sort.

Development efforts continue. I’m going to have to pull what documentation I can on OBS Studio and study it quickly. Eventually I have to document my efforts for reproducibility.

on March 13, 2020 03:02 AM

March 11, 2020

Just like we did for Debian 7 Wheezy, some of the paid Debian LTS contributors will continue to maintain Debian 8 Jessie after its 5 years of support as part of Freexian’s Extended LTS service.

This service works differently than regular LTS:

  • only the packages used by the sponsors are supported;
  • the updates are provided in an external repository managed by Freexian so that Debian teams can bury Debian 8 Jessie and move on;
  • the cost of the service for each sponsor is evaluated each semester based on the number of packages that they want to see supported and the number of other sponsors with similar needs;
  • a limited set of architectures is supported (usually only amd64 and i386, but for Jessie it might be that we end up supporting armel due to a sponsor requesting it).

Some packages will not be supported:

  • linux 3.16 (instead we will likely continue to maintain the linux-4.9 backport from Debian 9 stretch)
  • openjdk-7 (its EOL is in June 2020); we might maintain a backport of openjdk-8 if there’s demand for it
  • tomcat7 will only be supported until its EOL in March 2021
  • this list might be expanded over time when we discover issues rendering security support infeasible on some packages

If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with us. We’re building a list of potential sponsors right now and we expect to provide cost estimations at the end of the month.


on March 11, 2020 02:32 PM

March 07, 2020

APT 2.0 released

Julian Andres Klode

After brewing in experimental for a while, and getting a first outing in the Ubuntu 19.10 release (both as 1.9), APT 2.0 is now landing in unstable. 1.10 would be a boring, weird number, eh?

Compared to the 1.8 series, the APT 2.0 series brings several new features, as well as improvements in performance and hardening. A lot of code has been removed as well, reducing the size of the library.

Highlighted Changes Since 1.8

New Features

  • Commands accepting package names now accept aptitude-style patterns. The syntax of patterns is mostly a subset of aptitude, see apt-patterns(7) for more details (a short example follows after this list).

  • apt(8) now waits for the dpkg locks - indefinitely, when connected to a tty, or for 120s otherwise.

  • When apt cannot acquire the lock, it prints the name and pid of the process that currently holds the lock.

  • A new satisfy command has been added to apt(8) and apt-get(8)

  • Pins can now be specified by source package, by prepending src: to the name of the package, e.g.:

    Package: src:apt
    Pin: version 2.0.0
    Pin-Priority: 990
    

    Will pin all binaries of the native architecture produced by the source package apt to version 2.0.0. To pin packages across all architectures, append :any.
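
As a quick illustration of patterns and the new satisfy command (a rough sketch; libfoo-dev is a placeholder name, and apt-patterns(7) documents the full pattern syntax):

# list installed packages that no longer exist in any configured repository
apt list '?obsolete'

# install whatever is needed to satisfy a dependency string
apt satisfy 'libfoo-dev (>= 1.0)'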

Performance

  • APT now uses libgcrypt for hashing instead of embedded reference implementations of MD5, SHA1, and SHA2 hash families.

  • Distribution of rred and decompression work during update has been improved to take into account the backlog instead of randomly assigning a worker, which should yield higher parallelization.

Incompatibilities

  • The apt(8) command no longer accepts regular expressions or wildcards as package arguments, use patterns (see New Features).

Hardening

  • Credentials specified in auth.conf now only apply to HTTPS sources, preventing malicious actors from reading credentials after they redirected users from an HTTP source to an HTTP URL matching the credentials in auth.conf. Another protocol can be specified, see apt_auth.conf(5) for the syntax.

Developer changes

  • A more extensible cache format, allowing us to add new fields without breaking the ABI

  • All code marked as deprecated in 1.8 has been removed

  • Implementations of CRC16, MD5, SHA1, SHA2 have been removed

  • The apt-inst library has been merged into the apt-pkg library.

  • apt-pkg can now be found by pkg-config (see the short example after this list)

  • The apt-pkg library now compiles with hidden visibility by default.

  • Pointers inside the cache are now statically typed. They cannot be compared against integers (except 0 via nullptr) anymore.
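
For library consumers, the pkg-config support makes building against apt-pkg straightforward (a sketch; myprog.cc is a hypothetical source file, and the .pc file is shipped with the development package, e.g. libapt-pkg-dev):

g++ myprog.cc $(pkg-config --cflags --libs apt-pkg) -o myprog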

python-apt 2.0

python-apt 2.0 is not yet ready, I’m hoping to add a new cleaner API for cache access before making the jump from 1.9 to 2.0 versioning.

libept 1.2

I’ve moved the maintenance of libept to the APT team. We need to investigate how to EOL this properly and provide facilities inside APT itself to replace it. There are no plans to provide new features, only bugfixes / rebuilds for new apt versions.

on March 07, 2020 08:43 PM
Out of 41 entries, the following 10 wallpapers were selected. Congratulations to the winners! These should be included on the Focal Fossa (future 20.04 LTS) daily ISO images shortly. Ubuntu Studio 20.04 LTS Community Wallpapers
on March 07, 2020 07:50 PM

This year’s FOSDEM conference was a lot of fun – one of the things I always enjoy most about this particular conference (besides having some of the outstanding food you can get in Brussels and meeting with friends from the free software world) is the ability to meet a large range of new people who I wouldn’t usually have interacted with, or getting people from different communities together who otherwise would not meet in person as each bigger project has their own conference (for example, the amount of VideoLAN people is much lower at GUADEC and Akademy compared to FOSDEM). It’s also really neat to have GNOME and KDE developers within reach at the same place, as I care about both desktops a lot.

An unexpected issue

This blog post however is not about that. It’s about what I learned when talking to people there about AppStream, and the outcome of that. Especially when talking to application authors but also to people who deal with larger software repositories, it became apparent that many app authors don’t really want to deal with the extra effort of writing metadata at all. This was a bit of a surprise to me, as I thought that there would be a strong interest for application authors to make their apps look as good as possible in software catalogs.

A bit less surprising was the fact that people apparently don’t enjoy reading a large specification, reading a long-ish intro guide with lots of dos and don’ts or basically reading any longer text at all before being able to create an AppStream MetaInfo/AppData file describing their software.

Another common problem seems to be that people don’t immediately know what a “reverse-DNS ID” is, the format AppStream uses for uniquely identifying each software component. So naturally, people either have to read about it again (bah, reading! 😜) or make something up, which occasionally is wrong and not the actual component-ID their software component should have.

The MetaInfo Creator

It was actually suggested to me twice that what people really would like to have is a simple tool to put together a MetaInfo file for their software. Basically a simple form with a few questions which produces the final file. I always considered this a “nice to have, but not essential” feature, but now I was convinced that this actually has a priority attached to it.

So, instead of jumping into my favourite editor and writing a bunch of C code to create this “make MetaInfo file” form as part of appstreamcli, this time I decided to try what the cool kids are doing and make a web application that runs in your browser and creates all metadata there.

So, behold the MetaInfo Creator! If you click this link, you will end up at an Angular-based web application that will let you generate MetaInfo/AppData files for a few component-types simply by answering a set of questions.

The intent was to make this tool as easy to use as possible for someone who basically doesn’t know anything about AppStream at all. Therefore, the tool will:

  • Generate a rDNS component-ID suggestion automatically based on the software’s homepage and name
  • Fill out default values for anything it thinks it has enough data for
  • Show short hints for what values we expect for certain fields
  • Interactively validate the entered value, so people know immediately when they have entered something invalid
  • Produce a .desktop file as well for GUI applications, if people select the option for it
  • Show additional hints about how to do more with the metadata
  • Create some Meson snippets as pointers how people can integrate the MetaInfo files into projects using the Meson build system

For the Meson feature, the tool simply can not generate a “use this and be done” script, as each Meson snippet needs to be adjusted for the individual project. So this option is disabled by default, but when enabled, a few simple Meson snippets will be produced which can be easily adjusted to the project they should be part of.

The tool currently does not generate any release information for a MetaInfo file at all. This may be added in the future. The initial goal was to have people create any MetaInfo file in the first place; having projects also ship release details would be the icing on the cake.

I hope people find this project useful and use it to create better MetaInfo files, so distribution repositories and Flatpak repos look better in software centers. Also, since MetaInfo files can be used to create an “inventory” of software and to install missing stuff as-needed, having more of them will help to build smarter software managers, create smaller OS base installations and introspect what software bundles are made of easily.

I welcome contributions to the MetaInfo Creator! You can find its source code on GitHub. This is my first web application ever, the first time I wrote TypeScript and the first time I used Angular, so I’d bet a veteran developer more familiar with these tools will cringe at what I produced. So, scratch that itch and submit a PR! 😉 Also, if you want to create a form for a new component type, please submit a patch as well.

C developer’s experience notes for Angular, TypeScript, NodeJS

This section is just to ramble a bit about random things I found interesting as a developer who mostly works with C/C++ and Python and stepped into the web-application developer’s world for the first time.

For a project like this, I would usually have gone with my default way of developing something for the web: Creating a Flask-based application in Python. I really love Python and Flask, but of course using them would have meant that all processing would have had to be done on the server. On the one hand I could have used libappstream that way to create the XML, format it and validate it, but on the other hand I would have had to host the Python app on my own server, find a place at Purism/Debian/GNOME/KDE or get it housed at Freedesktop somehow (which would have taken a while to arrange) – and I really wanted to have a permanent location for this application immediately. Additionally, I didn’t want people to send the details of new unpublished software to my server.

TypeScript

I must say that I really like TypeScript as a language compared to JavaScript. It is not really revolutionary (I looked into Dart and other ways to compile $stuff to JavaScript first), but it removes just enough JavaScript weirdness to be pleasant to use. At the same time, since TS is a superset of JS, JavaScript code is valid TypeScript code, so you can integrate with existing JS code easily. Picking TS up took me much less than an hour, and most of its features you learn organically when working on a project. The optional type-safety is a blessing and actually helped me a few times to find an issue. It being so close to JS is both a strength and weakness: On the one hand you have all the JS oddities in the language (implicit type conversion is really weird sometimes) and have to basically refrain from using them or count on the linter to spot them, but on the other hand you can immediately use the massive amount of JavaScript code available on the web.

Angular

The Angular web framework took a few hours to pick up – there are a lot of concepts to understand. But ultimately, it’s manageable and pretty nice to use. When working at the system level, a lot of complexity is in understanding how the CPU is processing data, managing memory and using the low-level APIs the operating system provides. With the web application stuff, a lot of the complexity for me was in learning about all the moving parts the system is comprised of, what their names are, what they are, and what works with which. And that is not a flat learning curve at all. As C developer, you need to know how the computer works to be efficient, as web developer you need to know a bunch of different tools really well to be productive.

One thing I am still a bit puzzled about is the amount of duplicated HTML templates my project has. I haven’t found a way to reuse template blocks in multiple components with Angular, like I would with Jinja2. The documentation suggests this feature does not exist, but maybe I simply can’t find it or there is a completely different way to achieve the same result.

NPM Ecosystem

The MetaInfo Creator application ultimately doesn’t do much. But according to GitHub, it has 985 (!!!) dependencies in NPM/NodeJS. And that is the bare minimum! I only added one dependency myself to it. I feel really uneasy about this, as I prefer the Python approach of having a rich standard library instead of billions of small modules scattered across the web. If there is a bug in one of the standard library functions, I can submit a patch to Python where some core developer is there to review it. In NodeJS, I imagine fixing some module is much harder.

That being said though, using npm is actually pretty nice – there is a module available for most things, and adding a new dependency is easy. NPM will also manage all the details of your dependency chain, GitHub will warn about security issues in modules you depend on, etc. So, from a usability perspective, there isn’t much to complain about (unlike with Python, where creating or using a module ends up as a “fight the system” event way too often and the question “which random file do I need to create now to achieve what I want?” always exists. Fortunately, Poetry made this a bit more pleasant for me recently).

So, tl;dr for this section: The web application development excursion was actually a lot of fun, and I may make more of those in future, now that I learned more about how to write web applications. Ultimately though, I enjoy the lower-level software development and backend development a bit more.

Summary

Check out the MetaInfo Creator and its source code, if you want to create MetaInfo files for a GUI application, console application, addon or service component quickly.

on March 07, 2020 05:22 PM

March 05, 2020

First, create a container mycontainer with the command lxc launch ubuntu: mycontainer. This command creates a container with the current default LTS version (at the time of writing 18.04; soon it will be 20.04).

If you are using LXD, you can launch containers and get a shell into them using the following lxc command. This command uses the exec subcommand to run a process in the container mycontainer, and the full command line of the process is whatever appears after the --, in this case /bin/bash. The /bin/bash can be replaced with any executable from the container.

lxc exec mycontainer -- /bin/bash

The -- is a special sequence that tells the parameter parser on the host to stop processing parameters. In the above example it is not required because we do not use a parameter for /bin/bash. However, in the following, it is needed. The lxc command sees in the first line that there is a -l option, and tries to process it. But there is no valid -l parameter to lxc exec, hence the error message. The following also demonstrates that the default working directory for the Ubuntu container images is root’s home directory, /root.

$ lxc exec mycontainer ls -l
Error: unknown shorthand flag: 'l' in -l
$ lxc exec mycontainer -- ls -l
total 0
$ lxc exec mycontainer -- pwd
/root
$ 

The corollary is that if you do not get into the habit of adding --, you might run a complex command that has a valid lxc exec parameter and the result could be quite weird. However, in most cases, you would want to get just a shell into the container, and then do your work. That is the purpose of this blog post.

Learning about LXD aliases

Managing aliases in LXD

You can list, add, rename and remove LXD aliases.

Listing LXD aliases

Run the lxc alias list command to list all aliases. In the following, there are no aliases at all (default).

$ lxc alias list
+-------+--------+
| ALIAS | TARGET |
+-------+--------+

Adding a LXD alias

Let’s create an alias, called list. There is already a lxc list subcommand, therefore we will be hiding the official subcommand. But why do this? First, lxc list produces by default a very wide table so we are going to drop some of the columns. Second, we want to show here how to remove an alias, so we will remove the list alias anyway.

$ lxc alias add list 'list -c ns46'
$ lxc alias list
+-------+---------------+
| ALIAS | TARGET        |
+-------+---------------+
| list  | list -c ns46  |
+-------+---------------+

Now, when you run lxc list, you will get just four columns: n for the Name of the container, s for the State of the container (RUNNING, STOPPED or something else), and 4 and 6 for the IPv4 and IPv6 addresses of the container respectively.

Renaming a LXD alias

Frankly, list is not a good choice for the name of an alias because it masks (hides) the proper lxc list subcommand. Let’s rename the alias to something else.

$ lxc alias rename list ll
$ lxc alias list
+-------+---------------+
| ALIAS | TARGET        |
+-------+---------------+
| ll    | list -c ns46  |
+-------+---------------+

Now you can run the following command to list the containers in a table of only four columns.

$ lxc ll

Removing a LXD alias

We have now decided to remove the ll alias. If you love it though, you may keep it! Here is the command anyway.

$ lxc alias remove ll

Using sudo --user ubuntu --login

To get a non-root shell into a LXD container of an Ubuntu container image (ubuntu:), you can use the following command. The ubuntu: container images, when launched, create a non-root ubuntu account. Therefore, when we exec into the container, we sudo as user ubuntu and request a login shell. A login shell means that all shell configuration files are parsed (/etc/profile, /etc/bash*, ~/.profile, etc). Due to this, it might take a few hundred milliseconds to get to the shell, compared to other ways. This is also the command that I have been including in my tutorials since 2016, therefore you can see it a lot around the Internet.

$ lxc exec mycontainer -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

$ 

If you were to launch a container and immediately run the above command to get a shell, you may encounter the following error (unknown user: ubuntu). What happened? We were really fast, and the mycontainer container had not fully finished starting up. Which means that the instruction to create the ubuntu non-root account had not run yet. In fact, the instruction to create this account is among the last instructions to run when a new container is created. In such a case, just wait one more second, and run the lxc exec command again.

$ lxc launch ubuntu: mycontainer 
Creating mycontainer
Starting mycontainer
$ lxc exec mycontainer -- sudo --user ubuntu --login
sudo: unknown user: ubuntu
sudo: unable to initialize policy plugin
$ 
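If you would rather not guess how long to wait, one option (assuming the image ships cloud-init, as the ubuntu: images do) is to block until the first-boot setup, including the creation of the ubuntu account, has finished:

$ lxc exec mycontainer -- cloud-init status --wait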

The downside with this lxc exec command is that it assumes there is an ubuntu non-root account, which only works for the ubuntu: repository of container images. It does not work with the Ubuntu container images from the images: repository, nor with other container images. Those other container images only have a default root account.
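With such images, as a rough sketch (the alpine/3.11 image name is only an example), you would simply exec a shell and work as root:

$ lxc launch images:alpine/3.11 alpinecontainer
$ lxc exec alpinecontainer -- /bin/sh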

Using lxc shell mycontainer

The lxc client of LXD currently has a single built-in alias, lxc shell. This alias is not listed when you run lxc alias list since it is part of the client itself. Here is the source code:

// defaultAliases contains LXC's built-in command line aliases.  The built-in
// aliases are checked only if no user-defined alias was found.
var defaultAliases = map[string]string{
	"shell": "exec @ARGS@ -- su -l",
}

You would run this as follows. It runs su -l to get a login shell as root.

$ lxc shell mycontainer
mesg: ttyname failed: No such device
root@mycontainer:~#

Let’s add an alias that will su as a non-root user. We will name it shell, therefore masking the built-in alias.

$ lxc alias add shell "exec @ARGS@ -- su -l ubuntu"
$ lxc shell mycontainer
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

$ 
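If you later want the built-in root-shell behaviour of lxc shell back, just remove the user-defined alias; as the source above notes, the built-in alias is used only when no user-defined alias exists.

$ lxc alias remove shell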

Using lxc ubuntu mycontainer

In 2018, Raymond E Ferguson started a thread on discuss.linuxcontainers.org about useful LXC aliases, and specifically about LXC aliases to get a login shell into a LXD container.

To get a non-root shell to a LXD container (Ubuntu container from ubuntu: repository)

Create the following alias, named ubuntu. You run this once to add the LXD alias to your host’s settings.

lxc alias add ubuntu 'exec @ARGS@ --mode interactive -- /bin/sh -xac $@ubuntu - exec /bin/login -p -f '

Use as:

lxc ubuntu mycontainer  

To get a shell to a LXD container (default: root, else specify with $USER)

Create the following alias, named login. You run this once to add the LXD alias to your host’s settings.

lxc alias add login 'exec @ARGS@ --mode interactive -- /bin/sh -xac $@${USER:-root} - exec /bin/login -p -f '

Run as (to get a root shell by default):

lxc login mycontainer  

Run as (to get a specific $USER shell):

lxc login mycontainer --env USER=ubuntu

Learning more about lxc exec

The current version of the LXD lxc client (now at 3.21) has several built-in features that can help get a shell. Let’s see the available parameters. Of interest are --user/--group and --env to set $HOME.

$ lxc exec --help
Description:
  Execute commands in instances

  The command is executed directly using exec, so there is no shell and
  shell patterns (variables, file redirects, ...) won't be understood.
  If you need a shell environment you need to execute the shell
  executable, passing the shell commands as arguments, for example:

    lxc exec <instance> -- sh -c "cd /tmp && pwd"

  Mode defaults to non-interactive, interactive mode is selected if both stdin AND stdout are terminals (stderr is ignored).

Usage:
  lxc exec [<remote>:]<instance> [flags] [--] <command line>

Flags:
      --cwd                    Directory to run the command in (default /root)
  -n, --disable-stdin          Disable stdin (reads from /dev/null)
      --env                    Environment variable to set (e.g. HOME=/home/foo)
  -t, --force-interactive      Force pseudo-terminal allocation
  -T, --force-noninteractive   Disable pseudo-terminal allocation
      --group                  Group ID to run the command as (default 0)
      --mode                   Override the terminal mode (auto, interactive or non-interactive) (default "auto")
      --user                   User ID to run the command as (default 0)

Global Flags:
      --debug            Show all debug messages
      --force-local      Force using the local unix socket
  -h, --help             Print help
      --project string   Override the source project
  -q, --quiet            Don't show progress information
  -v, --verbose          Show all information messages
      --version          Print version number

In addition, bash has a --login parameter which we are going to use as well. Therefore, we have the following command for a login shell. The ubuntu: container images create a non-root account with username ubuntu and UID/GID 1000/1000. We set the $HOME because even with bash --login it is not set. We do not specify the full path for bash just in case it is in either /bin or /usr/bin. In Ubuntu it is /bin/bash, so it will work in Ubuntu and provide flexibility if we were to extend to other distributions.

$ lxc exec --user 1000 --group 1000 --env "HOME=/home/ubuntu/" mycontainer -- bash --login
You just parsed /etc/bash.bashrc!
You just parsed /etc/profile.d/apps-bin-path.sh!
You just parsed /etc/profile!
You just parsed ~/.bashrc!
You just parsed ~/.profile!
ubuntu@mycontainer:~$ 

Notice above that I edited the various configuration files so that they print the sequence in which they are invoked for a login shell. Let’s see how this looks when we do not add --login to bash. Without a login shell, the only two files that are parsed are /etc/bash.bashrc and ~/.bashrc.

$ lxc exec mycontainer --user 1000 --group 1000 --env HOME=/home/ubuntu/ -- bash 
You just parsed /etc/bash.bashrc!
You just parsed ~/.bashrc!
ubuntu@mycontainer:/home/ubuntu$ 

We are good to go and can create the alias. I am changing my old lxc ubuntu alias to this one: we remove any old ubuntu alias and add the new one.

lxc alias remove ubuntu
lxc alias add ubuntu 'exec @ARGS@ --user 1000 --group 1000 --env HOME=/home/ubuntu/ -- /bin/bash --login'

The lxc ubuntu alias works on Ubuntu container images from the ubuntu: repository and creates a Bash login shell for the ubuntu non-root user.
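For example, with the container from earlier:

$ lxc ubuntu mycontainer
ubuntu@mycontainer:~$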

Summary

There are several ways to get a login shell in a LXD container. For the Ubuntu containers (repository ubuntu:) I am using the following alias, lxc ubuntu mycontainer. ubuntu is the new subcommand (through an LXD alias) to get a login shell as user ubuntu.

If you want to get the same, run the following on each of your LXD hosts.

lxc alias add ubuntu 'exec @ARGS@ --user 1000 --group 1000 --env HOME=/home/ubuntu/ -- /bin/bash --login'

If your container is called mycontainer, you get a login shell as follows:

lxc ubuntu mycontainer
on March 05, 2020 05:57 PM

Putting-On a New Hat

Erich Eickmeyer

Before I began leading Ubuntu Studio, I was using a “spin” of Fedora called Fedora Jam. It was a musician/audio “lab” for Fedora which seemed to work well for me. Think of it as Ubuntu Studio minus the non-audio/music stuff, and with KDE Plasma instead of Xfce.

However, I knew of Ubuntu Studio’s importance in Linux-based production and creativity, and, as the story goes, I answered a call to help keep it alive.

Fast-forward two years. Ubuntu Studio is doing very well. I have a team that I rely on to keep things running. I decided to look at Fedora to see how they were doing, only to find out Fedora Jam had not been released for Fedora 31, and there was an un-responded-to keepalive request for Jam.

This got me thinking: what if something happened to Ubuntu Studio and Ubuntu/Debian were no longer viable options for audio production? With that in mind, I decided to do something about it and stepped in to become Fedora Jam’s new maintainer.

As it stands now, Fedora Jam 32 looks like it will be a thing, although not quite what I have envisioned. Hence, even now, I’m working on items for inclusion in Fedora 33 that should make it an excellent choice for audio production on Linux.

All this said, I want to make it clear: I am not leaving Ubuntu Studio. I am in a situation where I can adequately lead both Ubuntu Studio and Fedora Jam. Besides, this gives me a great deal of experience with packaging for Debian-based and .rpm-based Linux distributions.

Overall, I say this is a win-win for everybody who does audio production using Linux. I can take what I’ve learned in Ubuntu and apply it to Fedora and vice-versa.

on March 05, 2020 05:45 PM

March 04, 2020

The OpenUK Kids Competition is now open for registrations. Teams of 4, aged 11 to 14, can take part to design the most interesting use for the MiniMu musical glove from Imogen Heap.

It’s free to enter and one winning team from each Region will be brought by OpenUK to London on 10 and 11 June. They will compete in the Competition Final at Red Hat’s Innovation Lab in Monument, London on 11 June having had the opportunity to spend the night before in London.

on March 04, 2020 10:52 PM

CirrOS 0.5.0 released

Marcin Juszkiewicz

Someone may say that I am the main reason why the CirrOS project does releases.

In 2016 I got a task at Linaro to get it running on AArch64. More details are in my blog post ‘my work on changing CirrOS images’. The result was the 0.4.0 release.

Last year I got another task at Linaro. So we released the 0.5.0 version today.

But that’s not how it happened.

Multiple contributors

Since the 0.4.0 release, there have been changes from several developers.

Robin H. Johnson took care of kernel modules: he added new ones, updated names, and also added several new features.

Murilo Opsfelder Araujo fixed the build on Ubuntu 16.04.3, as gcc changed its preprocessor output.

Jens Harbott took care of the lack of space for data read from config-drive.

Paul Martin upgraded CirrOS build system to BuildRoot 2019.02.1 and bumped kernel/grub versions.

Maciej Józefczyk took care of metadata requests.

Marcin Sobczyk fixed the starting of Dropbear and dropped the creation of the DSS ssh key, which was no longer supported.

My Linaro work

At Linaro I got a Jira card titled “Upgrade CirrOS’ kernel to Ubuntu 18.04’s kernel”.

This was needed as the 4.4 kernel was far too old and gave us several booting issues. Internally we had builds with the 4.15 kernel, but it should be done properly and upstream.

So I fetched the code, did some test builds and started looking at how to improve the situation. I spoke with Scott Moser (the owner of the CirrOS project) and he told me about his plans to migrate from Launchpad to GitHub. So we did that in December 2019 and then the fun started.

Continuous Integration

GitHub has several ways of adding CI to projects. First we tried GitHub Actions, but it turned out to be a paid service. I looked around and then decided to go with Travis CI.

Scott generated all required keys and the integration started. Soon we had every pull request going through CI. Then I added a simple script (bin/test-boot) so each image was booted after the build. Scott improved the script and fixed a Power boot issue.

The next step was caching downloads and ccache files. This was a huge improvement!

In the meantime, Travis bumped the free service to 5 simultaneous builders, which made our builds even faster.

CirrOS supports building only under Ubuntu LTS. But I use Fedora, so we merged two changes to make sure that the proper ‘grub(2)-mkimage’ command is used.

Kernel changes

The 4.4 kernel had to go. The first idea was to move to 4.18 from the Ubuntu 18.04 release. But if we were upgrading anyway, why not go for the HWE one? I checked the 5.0 and 5.3 versions. As both worked fine, we decided to go with the newer one.

Modules changes

During the start of a CirrOS image, several kernel modules are loaded. But there were several “no kernel module found”-like messages for built-in ones.

We took care of it by querying the /sys/module/ directory, so now module loading is a quiet process. At the end, a list of the loaded modules is printed.

VirtIO changes

A lot of things have happened since the 4.4 kernel. So we added several VirtIO modules.

One of the results is a working graphical console on AArch64, thanks to ‘virtio-gpu’ providing a framebuffer and ‘hid-generic’ handling USB input devices.

As lack of entropy is a common issue in VM instances, we added the ‘virtio-rng’ module. No more ‘uninitialized urandom read’ messages from the kernel.

Final words

Yesterday Scott created the 0.5.0 tag and CI built all the release images. Then I wrote the release notes (based on the ones from the pre-releases). The Kolla project got a patch to move to the new version.

When is the next release? Looking at the history, someone may say 2023, as the previous one was in 2016. But who knows. Maybe we will get someone with a “please add s390x support” question ;D

on March 04, 2020 10:53 AM

March 03, 2020

We live in a world where websites and apps mostly make people unhappy. Buying or ordering or interacting with anything at all online involves a thousand little unpleasant bumps in the road, a thousand tiny chips struck off the edges of your soul. “This website uses cookies: accept all?” Videos that appear over the thing you’re reading and start playing automatically. Grant this app access to your contacts? Grant this app access to your location? “Sign up for our newsletter”, with a second button saying “No, because I hate free things and also hate America”. Better buy quick — there’s only 2 tickets/beds/rooms/spaces left! Now now now!

This is not new news. Everyone already knows this. If you ask people — ordinary, real people, not techies — about their experiences of buying things online or reading things online and say, was this a pleasant thing to do? were you delighted by it? then you’re likely to get a series of wry headshakes. It’s not just that everyone knows this, everyone’s rather inured to it; the expectation is that it will be a bit annoying but you’ll muddle through. If you said, what’s it like for you when your internet connection goes down, or you want to change a flight, they will say, yeah, I’ll probably have to spend half an hour on hold, and the call might drop when I get to queue position 2 and I’ll have to call again, and they’ll give me the runaround; the person on the call will be helpful, but Computer will Say No. Decent customer service is no longer something that we expect to receive; it’s something unusual and weird. Even average non-hostile customer service is now so unusual that we’re a bit delighted when it happens; when the corporate body politic rouses itself to do something other than cram a live rattlesnake up your bottom in pursuit of faceless endless profit then that counts as an unexpected and pleasant surprise.

It’d be nice if the world wasn’t like that. But one thing we’re a bit short of is the vocabulary for talking about this; rather than the online experience being a largely grey miasma of unidentified minor divots, can we enumerate the specific things that make us unhappy? And for each one, look at how it could be done better and why it should be done better?

Trine Falbe, Kim Andersen, and Martin Michael Frederiksen think maybe we can, and have written The Ethical Design Handbook, published by Smashing Media. It’s written, as they say, for professionals — for the people building these experiences, to explain how and why to do better, rather than for consumers who have to endure them. And they define “ethical design” as businesses, products, and services that grow from a principle of fairness and fundamental respect towards everyone involved.

They start with some justifications for why ethical design is important, and I’ll come back to that later. But then there’s a neat segue into different types of unethical design, and this is fascinating. There’s nothing here that will come as a surprise to most people reading it, especially most tech professionals, but I’d not seen it enumerated quite this baldly before. They describe, and name, all sorts of dark patterns and unpleasant approaches which are out there right now: mass surveillance, behavioural change, promoting addiction, manipulative design, pushing the sense of urgency through scarcity and loss aversion, persuasive design patterns; all with real examples from real places you’ve heard of. Medium hiding email signup away so you’ll give them details of your social media account; Huel adding things to your basket which you need to remove; Viagogo adding countdown timers to rush you into making impulsive purchases; Amazon Prime’s “I don’t want my benefits” button, meaning “don’t subscribe”. Much of this research already existed — the authors did not necessarily invent these terms and their classifications — but having them all listed one after the other is both a useful resource and a rather terrifying indictment of our industry and the manipulative techniques it uses.

However, our industry does use these techniques, and it’s important to ask why. The book kinda-sorta addresses this, but it shies away a little from admitting the truth: companies do this stuff because it works. Is it unethical? Yeah. Does it make people unhappy? Yeah. (They quote a rather nice study suggesting that half of all people recognise these tricks and distrust sites that use them, and the majority of those go further and feel disgusted and contemptuous.) But, and this is the kicker… it doesn’t seem to hurt the bottom line. People feel disgusted or distrusting and then still buy stuff anyway. I’m sure a behavioural psychologist in the 1950s would have been baffled by this: if you do stuff that makes people not like you, they’ll go elsewhere, right? Which is, it seems, not the case. Much as it’s occasionally easy to imagine that companies do things because they’re actually evil and want to increase the amount of suffering in the world, they do not. There are no actual demons running companies. (Probably. Hail to Hastur, just in case.) Some of it is likely superstition — everyone else does this technique, so it’ll probably work for us — and some of it really should get more rigorous testing than it does get: when your company added an extra checkbox to the user journey saying “I would not dislike to not not not sign not up for the newsletter”, did purchases go up, or just newsletter signups? Did you really A/B test that? Or just assume that “more signups, even deceptive ones = more money” without checking? But they’re not all uninformed choices. Companies do test these dark patterns, and they do work. We might wish otherwise, but that’s not how the world is; you can’t elect a new population who are less susceptible to these tricks or more offended by them, even if you might wish to.

And thereby hangs, I think, my lack of satisfaction with the core message of this book. It’s not going to convince anyone who isn’t already convinced. This is where we come back to the justifications mentioned earlier. “[P]rivacy is important to [consumers], and it’s a growing concern”, says the book, and I wholeheartedly agree with this; I’ve written and delivered a whole talk on precisely this topic at a bunch of conferences. But I didn’t need to read this book to feel that manipulation of the audience is a bad thing: not because it costs money or goodwill, but just because it’s wrong, even if it earns you more money. It’s not me you’ve gotta convince: it’s the people who put ethics and goodwill on one side of the balance and an increased bottom line on the other side and the increased bottom line wins. The book says “It’s not good times to gamble all your hard work for quick wins at the costs of manipulation”, and “Surveillance capitalism is unethical by nature because at its core, it takes advantage of rich data to profile people and to understand their behaviour for the sole purpose of making money”, but the people doing this know this and don’t care. It in fact is good times to go for quick wins at the cost of manipulation; how else can you explain so many people doing it? And so the underlying message here is that the need for ethical design is asserted rather than demonstrated. Someone who already buys the argument (say, me) will nod their way through the book, agreeing at every turn, and finding useful examples to bolster arguments or flesh out approaches. Someone who doesn’t already buy the argument will see a bunch of descriptions of a bunch of things that are, by the book’s definition, unethical… and then simply write “but it makes us more money and that’s my job, so we’re doing it anyway” after every sentence and leave without changing anything.

It is, unfortunately, the same approach taken by other important but ignored technical influences, such as accessibility or open source or progressive enhancement. Or, outside the tech world, environmentalism or vegetarianism. You say: this thing you’re doing is bad, because just look at it, it is… and here’s all the people you’re letting down or excluding or disenfranchising by being bad people, so stop being bad people. It seems intuitively obvious to anyone who already believes: why would you build inaccessible sites and exclude everyone who isn’t able to read them? Why would you build unethical apps that manipulate people and leave them unhappy and disquieted? Why would you use plastic and drive petrol cars when the world is going to burn? But it doesn’t work. I wish it did. Much as the rightness and righteousness of our arguments ought to be convincing in themselves, they are not, and we’re not moving the needle by continually reiterating the reasons why someone should believe.

But then… maybe that’s why the book is named The Ethical Design Handbook and not The Ethical Design Manifesto. I went into reading this hoping that what the authors had written would be a thing to change the world, a convincer that one could hand to unethical designers or ethical designers with unethical bosses and which would make them change. It isn’t. They even explicitly disclaim that responsibility early on: “Designers from the dark side read other books, not this one, and let us leave it at that,” says the introduction. So this maybe isn’t the book that changes everyone’s minds; that’s someone else’s job. Instead, it’s a blueprint for how to build the better world once you’ve already been convinced to do so. If your customers keep coming back and saying that they find your approach distasteful, if you decide to prioritise delight over conversions at least a little bit, if you’re prepared to be a little less rich to be a lot more decent, then you’ll need a guidebook to explain what made your people unhappy and what to do about it. In that regard, The Ethical Design Handbook does a pretty good job, and if that’s what you need then it’s worth your time.

This is an important thing: there’s often the search for a silver bullet, for a thing which fixes the world. I was guilty of that here, hoping for something which would convince unethical designers to start being ethical. That’s not what this book is for. It’s for those who want to but don’t know how. And because of that, it’s full of useful advice. Take, for example, the best practices chapter: it specifically calls out some wisdom about cookie warnings. In particular, it calls out that you don’t need cookie warnings at all if you’re not being evil about what you plan to allow your third party advertisers to do with the data. This is pretty much the first place I’ve seen this written down, despite how it’s the truth. And this is useful in itself; to have something to show one’s boss or one’s business analyst. If the word has come down from on high to add cookie warnings to the site then pushback on that from design or development is likely to be ignored… but being able to present a published book backing up those words is potentially valuable. Similarly, the book goes to some effort to quantify what ethical design is, by giving scores to what you do or don’t do, and this too is a good structure on which to hang a new design and to use to feed into the next thing your team builds. So, don’t make the initial mistake I did, of thinking that this is a manifesto; this is a working book, filled with how to actually get the job done, not a philosophical thinkpiece. Grab it and point at it in design meetings and use it to bolster your team through their next project. It’s worth it.

on March 03, 2020 07:08 PM

March 02, 2020

Belgians

This month started off in Belgium for FOSDEM on 1-2 February. I attended FOSDEM in Brussels and wrote a separate blog entry for that.

The month ended with Belgians at Tammy and Wouter’s wedding. On Thursday we had Wouter’s bachelors and then over the weekend I stayed over at their wedding venue. I thought that other Debianites might be interested so I’m sharing some photos here with permission from Wouter. It was the only wedding I’ve been at where nearly everyone had questions about Debian!

I first met Wouter on the bus during the day trip at DebConf12 in Nicaragua; back then I had been eagerly following the Debianites on Planet Debian for a while, so it was like meeting someone famous. Little did I know that 8 years later, I’d be at his wedding back in my part of the world.

If you went to DebConf16 in South Africa, you might remember Tammy, who did a lot of work for DC16, including most of the artwork, a bunch of website work, the design of the badges, bags, etc., and who also did a lot of the organisation for the day trips. Tammy and Wouter met while Tammy was reviewing the artwork in the video loops for the DebConf videos, and then things developed from there.

Wouter’s Bachelors

Wouter was blindfolded and kidnapped and taken to the city center where we prepared to go on a bike tour of Cape Town, stopping for beer at a few places along the way. Wouter was given a list of tasks that he had to complete, or the wedding wouldn’t be allowed to continue…

Wouter’s tasks
Wouter’s props, needed to complete his tasks
Bike tour leg at Cape Town Stadium.
Seeking out 29 year olds.
Wouter finishing his lemon… and actually seemingly enjoying it.
Reciting South African national anthem notes and lyrics.
The national anthem, as performed by Wouter (I was actually impressed by how good his pitch was).

The Wedding

Friday afternoon we arrived at the lodge for the weekend. I had some work to finish but at least this was nicer than where I was going to work if it wasn’t for the wedding.

Accommodation at the lodge

When the wedding co-ordinators started setting up, I noticed that there were all these swirls that almost looked like Debian logos. I asked Wouter if that was on purpose or just a happy accident. He said “Hmm! I haven’t even noticed that yet!”. I didn’t get a chance to ask Tammy yet, so it could still be her touch.

Debian swirls everywhere
I took a canoe ride on the river and look what I found, a paddatrapper!

Kyle and I weren’t the only ones out on the river that day. When the wedding ceremony started, Tammy made a dramatic entrance coming in on a boat, standing at the front with the breeze blowing her dress like a valkyrie.

A bit of digital zoomage of previous image.
Time to say the vows.
Just married. Thanks to Sue Fuller-Good for the photo.
Except for one character being out of place, this was a perfect fairy tale wedding, but I pointed Wouter to https://jonathancarter.org/how-to-spell-jonathan/ for future reference so it’s all good.

Congratulations again to both Tammy and Wouter. It was a great experience meeting both their families and friends and all the love that was swirling around all weekend.

Debian Package Uploads

2020-02-07: Upload package calamares (3.2.18-1) to Debian unstable.

2020-02-07: Upload package python-flask-restful (0.3.8-1) to Debian unstable.

2020-02-10: Upload package kpmcore (4.1.0-1) to Debian unstable.

2020-02-16: Upload package fracplanet (0.5.1-5.1) to Debian unstable (Closes: #946028).

2020-02-20: Upload package kpmcore (4.1.0-2) to Debian unstable.

2020-02-20: Upload package bluefish (2.2.11) to Debian unstable.

2020-02-20: Upload package gdisk (1.0.5-1) to Debian unstable.

2020-02-20: Accept MR#6 for gamemode.

2020-02-23: Upload package tanglet (1.5.5-1) to Debian unstable.

2020-02-23: Upload package gamemode (1.5-1) to Debian unstable.

2020-02-24: Upload package calamares (3.2.19-1) to Debian unstable.

2020-02-24: Upload package partitionmanager (4.1.0-1) to Debian unstable.

2020-02-24: Accept MR#7 for gamemode.

2020-02-24: Merge MR#1 for calcoo.

2020-02-24: Upload package calcoo (1.3.18-8) to Debian unstable.

2020-02-24: Merge MR#1 for flask-api.

2020-02-25: Upload package calamares (3.2.19.1-1) to Debian unstable.

2020-02-25: Upload package gnome-shell-extension-impatience (0.4.5-4) to Debian unstable.

2020-02-25: Upload package gnome-shell-extension-harddisk-led (19-2) to Debian unstable.

2020-02-25: Upload package gnome-shell-extension-no-annoyance (0+20170928-f21d09a-2) to Debian unstable.

2020-02-25: Upload package gnome-shell-extension-system-monitor (38-2) to Debian unstable.

2020-02-25: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp3) to Debian experimental.

Debian Mentoring

2020-02-10: Sponsor package python-marshmallow-polyfield (5.8-1) for Debian unstable (Python team request).

2020-02-10: Sponsor package geoalchemy2 (0.6.3-2) for Debian unstable (Python team request).

2020-02-13: Sponsor package python-tempura (2.2.1-1) for Debian unstable (Python team request).

2020-02-13: Sponsor package python-babel (2.8.0+dfsg.1-1) for Debian unstable (Python team request).

2020-02-13: Sponsor package python-pynvim (0.4.1-1) for Debian unstable (Python team request).

2020-02-13: Review package ledmon (0.94-1) (Needs some more work) (mentors.debian.net request).

2020-02-14: Sponsor package citeproc-py (0.3.0-6) for Debian unstable (Python team request).

2020-02-24: Review package python-suntime (1.2.5-1) (Needs some more work) (Python team request).

2020-02-24: Sponsor package python-babel (2.8.0+dfsg.1-2) for Debian unstable (Python team request).

2020-02-24: Sponsor package 2048 (0.0.0-1~exp1) for Debian experimental (mentors.debian.net request).

2020-02-24: Review package notcurses (1.1.8-1) (Needs some more work) (mentors.debian.net request).

2020-02-25: Sponsor package cloudpickle (1.3.0-1) for Debian unstable (Python team request).

Debian Misc

2020-02-12: Apply Planet Debian request and close MR#21.

2020-02-23: Accept MR#6 for ToeTally (DebConf Video team upstream).

2020-02-23: Accept MR#7 for ToeTally (DebConf Video team upstream).

on March 02, 2020 04:48 PM

2 Years at Weaveworks

Daniel Holbach

Time flies when you’re having fun: it’s now been two years since I started working at Weaveworks and I 💕 the experience it’s been.

Back then I was just concluding my sabbatical and had started pursuing a second career in that year. So my mind wasn’t exactly in the tech space, still I felt rested and open to new challenges. In the meantime, many old friends had already waved over from Cloud Native land and tried to lure me there. Especially my ex team mate Jorge Castro had raved about the Kubernetes community whenever we talked: that future was being made here and that the people were great and this was exactly the right thing to be involved with. If you know him just a little bit, you know this is typical Jorge.

When I set out looking for a job, I was lucky that Jono Lange reached out as well and told me what was happening at Weave. A couple of weeks later I was part of the team. The interview process was great because I got to talk to lots of people and get a feel for the entire team and their vision. The process wasn’t without mishaps though: to Alexis I talked on a video call out of a car on a mountain on Euboea, Greece - it was the only place where I had passable reception; to a call with Tamao I was 20 minutes late because of traffic in Cairo - ouch! - and when I talked to Matthias, Ilya and Stefan I had to turn off video at times because the internet in Dahab at the Red Sea wasn’t quite up to it. I’m so glad they came to the conclusion to have me as part of the team, and that my future team put up with this.

I was a bit wary starting into this, since I had been out of tech for 13 months and the Cloud Native landscape is simply overwhelming when you’re new: it is a HUGE community, a big entangled mess of competing solutions, vendors, tools, its own lingo, hundreds of sub-communities and challenges I hadn’t deeply considered yet. Frankly, it was a very daunting prospect at the time and I wasn’t sure if I could “learn all of this” - in my mind, that’s what I thought I needed to do.

What helped me ease into the job and communities were right from the start my teams: Bianca, Filip, Lili and Simon in Berlin and the DX gang: Tamao, Ilya, Stefan, Lucas, Leigh, Stacey, Dennis and Chanwit.

All of them were infinitely patient, had great ideas to help me grow, encouraged me to dig deeper, introduced me to other interesting folks and gave me the feeling that, even if they would probably never let me go near Prod or a customer’s cluster, I was generally doing an all right job finding my way into this space. 🙏

My immediate impression from Weaveworks was that I enjoyed the fact that the company was smaller than Canonical (where I worked before) a lot: getting things done across “departments” (I think we were around 25 people when I joined) was often an ad-hoc affair that often just involved a call or chat on Slack. It gave me a sense of being able to influence direction and that people generally trusted my judgement. As the token German in my teams I sometimes felt things could be a bit more structured, so I was happy to help e.g. putting together a Wiki from our internal docs, setting up a Knowledge Base and so on.

There were so many things I got to work on and folks I collaborated with. Particularly that there is so much Open Source in our story makes me proud and happy to be here. Here are a few of my highlights:

Supporting our booth and workshops at conferences was fun! It got me in touch with many of our users, customers and friends - I learned a lot about the field and everyone’s challenges there.

I spent quite some time with the Flux team (Michael, Stefan, Hidde, Fons). As one of our most important projects, making Flux more accessible to users and developers was very rewarding. Initiatives like simplifying installation, adding more docs, improving some of the community structures fell on very fertile ground: the number of users and contributors exploded over time. Flux entering the CNCF was probably another big contributing factor. Flux and Argo joining forces is just another big milestone on the way.

Working with the Scope team (Fons, Bryan, Filip, Satyam, Akash) was a lot of fun too. It’s still one of the most loved k8s projects out there, you see it at booths and in workshops a lot, so having a healthy group maintaining it and re-integrating features from experiments elsewhere was important. Visiting Satyam and Akash and the entire MayaData team in Bangalore and working with them was a great experience too. Miss the great food there too!

During my time at Weave we released many useful tools to the world, all open source. If you haven’t checked out Firekube, Ignite, Footloose, eksctl, wksctl, Flagger or any of the other good stuff yet, please do - they’re going to make your life easier! I was happy to be able to help out in putting open source, community and docs structures in place for them. I recently helped others breathe new life into kured and grafanalib - stay tuned for contributors meetings for the two coming soon.

I also started contributing upstream in Kubernetes. This was a great experience learning from all the great folks who make Cluster Addons (Justin, Jeff, Leigh, Evan and so many others), a sub-project of SIG Cluster Lifecycle happen.

Work at Weave hasn’t been focused on just Open Source things. Getting to talk to customers, users, contributors and partners was very educational and helped me understand their challenges. Commercially we added the Weave Kubernetes Platform to our product portfolio to help enterprises manage their Kubernetes cluster needs and customer uptake is great.

The company has been growing quite a bit lately as well. We more than doubled in size and added New York and Colorado Springs as office locations. We are still hiring, so if you’re bored at your current job, don’t feel supported there any more, or have stopped believing in your current mission, join us at Weave - it’s a great team and I feel privileged to be here.

There’s so much more that happened over these two years, and this blog post is long enough already. Big thanks to the entire team for this great journey. I’m looking forward to our future together! 🚀

Update: Now that I finished the blog post I realised there are so many things I hadn’t mentioned:

  • I managed to write the post without mentioning GitOps one time! Nuts! We helped to build a GitOps community and make the concept almost common-place.
  • Sharing an Airbnb with Ilya and Stefan (and Oana!) in SF was so much fun!
  • The culture at Weaveworks and how smart and caring people are! 💗
on March 02, 2020 09:15 AM

February 29, 2020

I love to spend time trying to automate away the boring parts of my job. One of these boring parts is reminding people to rotate their AWS Access Keys, as also suggested by AWS in their best practices.

The AWS IAM console helps highlight which keys are old, but if you have dozens of users, or multiple AWS accounts, it is still boring to do it by hand. So, I wrote some code to do it automatically, leveraging AWS Lambda - since it has a generous free tier, this check is free (however, your mileage may vary).

automation - xkcd. Image by Randall Munroe, xkcd.com

Setting up the permissions

Of course, we want to follow the principle of least privilege: the Lambda function will have access only to the minimum data necessary to perform its task. Thus, we need to create a dedicated role in the IAM Console (see the AWS guide to creating roles for AWS services).

Our custom role needs to have the managed policy AWSLambdaBasicExecutionRole, needed to execute a Lambda function. Other than this, we create a custom inline policy with these permissions:

  • iam:ListUsers, to know which users have access to the account. If you want to check only a subset of users, like filtering by department, you can use the Resource field to limit the access.
  • iam:ListAccessKeys, to read access keys of the users. Of course, you can limit here as well which users the Lambda has access to.
  • ses:SendEmail, to send the notification emails. Once again, you can (and should!) restrict the ARN to which it has access to.

And those are all the permissions we need!

The generated policy should look like this, more or less:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ses:SendEmail",
                "iam:ListAccessKeys"
            ],
            "Resource": [
                "arn:aws:iam::<ACCOUNT_ID>:user/*",
                "arn:aws:ses:eu-central-1:<ACCOUNT_ID>:identity/*"
            ]
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": "iam:ListUsers",
            "Resource": "*"
        }
    ]
}

Setting up SES

To send the notification email we use AWS Simple Email Service.

Before using it, you need to move out of the sandbox mode, or verify the domains you want to send emails to. If all your users have emails from the same domain, and you have access to the DNS, it is probably faster to just verify your domain, especially if the AWS account is quite new.

After that, you don’t have to do anything else: SES will be used directly by the Lambda code.
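If you prefer the command line over the console for the verification step, a minimal sketch (assuming the example.com domain and the eu-west-1 SES region used in the Lambda code below) would be:

aws ses verify-domain-identity --domain example.com --region eu-west-1
aws ses verify-email-identity --email-address example@example.com --region eu-west-1

The domain verification call returns a token that you then publish as a TXT record in your DNS.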

Setting up Lambda

You can now create an AWS Lambda function. I’ve written the code that you find below in Python, since I find it is the fastest way to put such a simple script in production. However, you can use any of the supported languages. If you have never used AWS Lambda before, you can start from here.

You need to assign the role we created before as the execution role. For memory, 128MB is more than enough. The right timeout depends on how big your company is: more or less, the function is able to check 5-10 users every second. You should test it and see if it hits the timeout.
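As a rough command-line sketch (the function name, role name and zip file below are placeholders; creating the function from the web console works just as well):

# hypothetical names; adjust the runtime, role and zip file to your setup
aws lambda create-function \
  --function-name access-key-reminder \
  --runtime python3.8 \
  --handler lambda_function.lambda_handler \
  --role arn:aws:iam::<ACCOUNT_ID>:role/<YOUR_LAMBDA_ROLE> \
  --memory-size 128 --timeout 120 \
  --zip-file fileb://function.zip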

Lambda Code

Below is the code to perform the task. For easier reading, you can also find it in this Gitlab snippet.

from collections import defaultdict
from datetime import datetime, timezone
import logging

import boto3
from botocore.exceptions import ClientError


# How many days before sending alerts about the key age?
ALERT_AFTER_N_DAYS = 100
# How often have we set the cron to run the Lambda?
SEND_EVERY_N_DAYS = 3
# Who sends the email?
SES_SENDER_EMAIL_ADDRESS = 'example@example.com'
# Where did we set up SES?
SES_REGION_NAME = 'eu-west-1'

iam_client = boto3.client('iam')
ses_client = boto3.client('ses', region_name=SES_REGION_NAME)

# Helper function to choose if a key owner should be notified today
def is_key_interesting(key):
    # If the key is inactive, it is not interesting
    if key['Status'] != 'Active':
        return False
    
    elapsed_days = (datetime.now(timezone.utc) - key['CreateDate']).days
    
    # If the key is newer than ALERT_AFTER_N_DAYS, we don't need to notify the
    # owner
    if elapsed_days < ALERT_AFTER_N_DAYS:
        return False
    
    return True
    
# Helper to send the notification to the user. We need the receiver email, 
# the keys we want to notify the user about, and on which account we are
def send_notification(email, keys, account_id):
    email_text = f'''Dear {keys[0]['UserName']},
this is an automatic reminder to rotate your AWS Access Keys at least every {ALERT_AFTER_N_DAYS} days.

At the moment, you have {len(keys)} key(s) on the account {account_id} that have been created more than {ALERT_AFTER_N_DAYS} days ago:
'''
    for key in keys:
        email_text += f"- {key['AccessKeyId']} was created on {key['CreateDate']} ({(datetime.now(timezone.utc) - key['CreateDate']).days} days ago)\n"
    
    email_text += f"""
To learn how to rotate your AWS Access Key, please read the official guide at https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html#Using_RotateAccessKey
If you have any question, please don't hesitate to contact the Support Team at support@example.com.

This automatic reminder will be sent again in {SEND_EVERY_N_DAYS} days, if the key(s) will not be rotated.

Regards,
Your lovely Support Team
"""
    
    try:
        ses_response = ses_client.send_email(
            Destination={'ToAddresses': [email]},
            Message={
                'Body': {'Html': {'Charset': 'UTF-8', 'Data': email_text}},
                'Subject': {'Charset': 'UTF-8',
                            'Data': f'Remember to rotate your AWS Keys on account {account_id}!'}
            },
            Source=SES_SENDER_EMAIL_ADDRESS
        )
    except ClientError as e:
        logging.error(e.response['Error']['Message'])
    else:
        logging.info(f'Notification email sent successfully to {email}! Message ID: {ses_response["MessageId"]}')

def lambda_handler(event, context):
    users = []
    is_truncated = True
    marker = None
    
    # We retrieve all users associated to the AWS Account.  
    # Results are paginated, so we go on until we have them all
    while is_truncated:
        # This strange syntax is here because `list_users` doesn't accept an 
        # invalid Marker argument, so we specify it only if it is not None
        response = iam_client.list_users(**{k: v for k, v in (dict(Marker=marker)).items() if v is not None})
        users.extend(response['Users'])
        is_truncated = response['IsTruncated']
        marker = response.get('Marker', None)
    
    # Probably in this list you have bots, or users you want to filter out
    # You can filter them by associated tags, or as I do here, just filter out 
    # all the accounts that haven't logged in the web console at least once
    # (probably they aren't users)
    filtered_users = list(filter(lambda u: u.get('PasswordLastUsed'), users))
    
    interesting_keys = []
    
    # For every user, we want to retrieve the related access keys
    for user in filtered_users:
        response = iam_client.list_access_keys(UserName=user['UserName'])
        access_keys = response['AccessKeyMetadata']
        
        # We are interested only in Active keys, older than
        # ALERT_AFTER_N_DAYS days
        interesting_keys.extend(list(filter(lambda k: is_key_interesting(k), access_keys)))
    
    # We group the keys by owner, so we send no more than one notification for every user
    interesting_keys_grouped_by_user = defaultdict(list)
    for key in interesting_keys:
        interesting_keys_grouped_by_user[key['UserName']].append(key)

    for user in interesting_keys_grouped_by_user.values():
        # In our AWS account the username is always a valid email. 
        # However, you can recover the email from IAM tags, if you have them
        # or from other lookups
        # We also get the account id from the Lambda context, but you can 
        # also specify any id you want here, it's only used in the email 
        # sent to the users to let them know on which account they should
        # check
        send_notification(user[0]['UserName'], user, context.invoked_function_arn.split(":")[4])

Schedule your Lambda

You can schedule your Lambda to run thanks to CloudWatch Events. You can use a schedule expression such as rate(3 days) to send the emails every 3 days. When you add the trigger from the console, the necessary permission for CloudWatch Events to invoke the Lambda is added for you. If you need any help, AWS has you covered with a dedicated tutorial!
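For reference, a command-line sketch of the same setup (the rule name, function name, region and account id are placeholders) could look like this:

# create the schedule, point it at the function, and allow the invocation
aws events put-rule --name access-key-reminder --schedule-expression "rate(3 days)"
aws events put-targets --rule access-key-reminder \
  --targets "Id"="1","Arn"="arn:aws:lambda:eu-west-1:<ACCOUNT_ID>:function:access-key-reminder"
aws lambda add-permission --function-name access-key-reminder \
  --statement-id cloudwatch-schedule --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:eu-west-1:<ACCOUNT_ID>:rule/access-key-reminder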

Conclusions

This is just an idea on how to create a little script, leveraging AWS Lambda and AWS SES, to keep your AWS account safe. There are, of course, lots of possible improvements! And remember to check the logs, sometimes ;-)

If you have hundreds or thousands of users, the function will hit the timeout: there are different solutions you can implement, such as using tags on users to track when you last checked them, or checking a different group of users every hour, leveraging the PathPrefix argument of list_users.

Also, in my example it’s simple to know to whom to send the notification email - but what if your users don’t have their email as their username? You can use tags and set their contact email there. Or maybe you have to implement a lookup somewhere else.

We could also send a daily report to admins: since users usually ignore automatic emails, admins can intervene if too many reports have been ignored. Or, we could forcibly delete keys after some time - although this could break production code, so I wouldn’t really do it - or maybe yes, it’s time developers learned to have good secrets hygiene.

And you? How do you check your users rotate their access keys?

For any comment, feedback, critic, suggestion on how to improve my English, reach me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

Ciao,
R.

on February 29, 2020 07:00 PM

February 28, 2020

We’re on our way to the 20.04 LTS release and it’s time for another community wallpaper contest!

How to participate?

For a chance to win, submit your entry at contest.xubuntu.org.

Important dates

  • Start of submissions: Immediately
  • Submission deadline: March 13th, 2020
  • Announcement of selections: Late March

All dates are in UTC.

Contest terms

All submissions must adhere to the Terms and Guidelines, including specifics about subject matter, image resolution and attribution.

After the submission deadline, the Xubuntu team will pick 6 winners from all submissions for inclusion on the Xubuntu 20.04 ISO; the winning wallpapers will also be available to users of other Xubuntu versions in the xubuntu-community-wallpaper package. The winners will also receive some Xubuntu stickers.

Any questions?

Please join #xubuntu-devel on Freenode for assistance or email the Xubuntu developer mailing list if you have any problems with your submission.

on February 28, 2020 06:51 AM

February 20, 2020

What Is This?

SGT Puzzles Collection 0.2.5 Released

SGT Puzzles Collection, or simply sgt-launcher, is a game launcher and wrapper for Simon Tatham’s Portable Puzzle Collection, a popular collection of logic games by the developer of PuTTY.

Joining the Xubuntu package set way back in Xubuntu 17.10 "Artful Aardvark", SGT Puzzles Collection has quietly provided Xubuntu users with a variety of distracting games for several releases. If you want to learn more about the project, check out my introductory blog post.

What's New?

  • Fixed issues with looping when launching a game (LP: #1697107)
  • Added "Report a Bug..." option to the menu which takes the user straight to the bug tracker for SGT Puzzles Collection
  • Added support for closing the application with Ctrl-C in the terminal
  • Fixed AppData validation and added some additional details
  • Switched the AppData and launcher to RDN-style (org.bluesabre.SgtLauncher) to follow FreeDesktop standards. Installations from source may now have duplicate launchers, just delete sgt-launcher.desktop
  • Updated Danish translation

Downloads

Source tarball

$ md5sum sgt-launcher-0.2.5.tar.gz
28e77a28faeb9ed6105cab09c90f1ea0

$ sha1sum sgt-launcher-0.2.5.tar.gz
59595b926950e3b76230b35ee540e883407e8132

$ sha256sum sgt-launcher-0.2.5.tar.gz
2aa00b35b3cb19c041246bfa9e892862f3bcb133843a1f2f3767e9f5be278fc7

SGT Puzzles Collection 0.2.5 will be included in Xubuntu 20.04 "Focal Fossa", available in April. Users testing the daily images should be able to test it any moment now.

on February 20, 2020 04:38 AM

February 19, 2020

Full Circle Weekly News #164

Full Circle Magazine


Linus Torvalds Releases Linux Kernel 5.5 rc7
https://lkml.org/lkml/2020/1/19/237
Credits:
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on February 19, 2020 06:53 PM