July 12, 2020

One of my client’s dedicated servers on OVH didn’t come back online after a reboot, so I checked via KVM and found that it was stuck at the GRUB 2 prompt. I switched netboot to rescue mode from the OVH control panel and, with the rescue-mode SSH credentials emailed to me, performed the following tasks […]

The post Bootloader Fix on NVMe Drive appeared first on Cyber Kingdom of Russell John.

on July 12, 2020 02:36 AM

July 11, 2020

A few months ago I wrote a new GStreamer plugin: an audio filter for live loudness normalization and automatic gain control.

The plugin is part of the GStreamer Rust plugins, in the audiofx plugin. It’s also included in the recent 0.6.0 release of the GStreamer Rust plugins and available from crates.io.

Its code is based on Kyle Swanson’s great FFmpeg filter af_loudnorm, about which he wrote some more technical details on his blog a few years back. I’m not going to repeat all that here, if you’re interested in those details and further links please read Kyle’s blog post.

At a very high level, the filter measures the loudness of the input following the EBU R128 standard with a 3s lookahead, adjusts the gain to reach the target loudness, and then applies a 10ms true peak limiter to prevent any overly high peaks from passing through. Both the target loudness and the maximum peak can be configured via the loudness-target and max-true-peak properties, the same as in the FFmpeg filter. Unlike the FFmpeg filter, I only implemented the “live” mode and not the two-pass mode that is implemented in FFmpeg, which first measures the loudness of the whole stream and then adjusts it in a second pass.

Below I’ll describe the usage of the filter in GStreamer a bit, and also give some information about the development process and the porting of the C code to Rust.

Usage

To use the filter you will most likely first need to compile it yourself, unless you’re lucky enough that your Linux distribution already includes it.

Compiling it requires a Rust toolchain and GStreamer 1.8 or newer. The former you can get via rustup, for example, if you don’t have it yet; the latter either from your Linux distribution or via the macOS, Windows, etc. binaries provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the audio/audiofx directory and copying the resulting libgstrsaudiofx.so (or .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.
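To make that a bit more concrete, the steps could look roughly like this on a Linux machine (the checkout directory and the exact location of the built library under target/ are assumptions, so adjust them to your setup):

cd gst-plugins-rs/audio/audiofx      # inside a checkout of the GStreamer Rust plugins
cargo build
mkdir -p ~/.local/share/gstreamer-1.0/plugins
cp ../../target/debug/libgstrsaudiofx.so ~/.local/share/gstreamer-1.0/plugins/
gst-inspect-1.0 rsaudioloudnorm      # check that GStreamer now finds the element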

After that boring part is done, you can use it for example as follows to run loudness normalization on the Sintel trailer:

gst-launch-1.0 playbin \
    uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    audio-filter="audioresample ! rsaudioloudnorm ! audioresample ! capsfilter caps=audio/x-raw,rate=48000"

As can be seen above, it is necessary to put audioresample elements around the filter. The reason for that is that the filter currently only works on 192kHz input. This is a simplification for now to make it easier inside the filter to detect true peaks. You would first upsample your audio to 192kHz and then, if needed, later downsample it again to your target sample rate (48kHz in the example above). See the link mentioned before for details about true peaks and why this is generally a good idea to do. In the future the resampling could be implemented internally and maybe optionally the filter could also work with “normal” peak detection on the non-upsampled input.
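If you need a different target loudness or peak ceiling, the same pipeline can also set the properties mentioned earlier explicitly. The values below are purely illustrative; check gst-inspect-1.0 rsaudioloudnorm for the actual defaults and valid ranges:

gst-launch-1.0 playbin \
    uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    audio-filter="audioresample ! rsaudioloudnorm loudness-target=-23.0 max-true-peak=-2.0 ! audioresample ! capsfilter caps=audio/x-raw,rate=48000"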

Apart from that caveat the filter element works like any other GStreamer audio filter and can be placed accordingly in any GStreamer pipeline.

If you run into any problems using the code or it doesn’t work well for your use-case, please create an issue in the GStreamer bugtracker.

The process

As I wrote above, the GStreamer plugin is part of the GStreamer Rust plugins, so the first step was to port the FFmpeg C code to Rust. I expected that to be the biggest part of the work, but as writing Rust is simply so much more enjoyable than writing C, and I would have had to adjust big parts of the code to fit the GStreamer infrastructure anyway, I took this approach nonetheless. The alternative of working from the C code and writing the plugin in C didn’t seem very appealing to me. In the end, as usual when developing in Rust, this also allowed me to be more confident about the robustness of the result and probably reduced the amount of time spent debugging. Surprisingly, the translation was actually not the biggest part of the work; instead, I had to debug a couple of issues that were already present in the original FFmpeg code and find solutions for them. But more on that later.

The first step for porting the code was to get an implementation of the EBU R128 loudness analysis. In FFmpeg they’re using a fork of the libebur128 C library. I checked if there was anything similar for Rust already, maybe even a pure-Rust implementation of it, but couldn’t find anything. As I didn’t want to write one myself or port the code of the libebur128 C library to Rust, I wrote safe Rust bindings for that library instead. The end result of that can be found on crates.io as an independent crate, in case someone else also needs it for other purposes at some point. The crate also includes the code of the C library, making it as easy as possible to build and include into other projects.

The next step was to actually port the FFmpeg C code to Rust. In the end that was a rather straightforward translation fortunately. The latest version of that code can be found here.

The biggest difference from the C code is the usage of Rust iterators and iterator combinators like zip and chunks_exact. In my opinion this makes the code quite a bit easier to read than the manual iteration and array indexing in the C code, and as a side effect it should also make the Rust code run faster, as it allows the compiler to get rid of a lot of array bounds checks.

Apart from that, one part that was a bit inconvenient during the translation and still required manual array indexing is the usage of ringbuffers everywhere in the code. For now I wrote those like I would in C and used a few unsafe operations like get_unchecked to avoid redundant bounds checks, but at a later time I might refactor this into a proper ringbuffer abstraction for such audio processing use-cases. It’s not going to be the last time I need such a data structure. A short search on crates.io gave various results for ringbuffers, but none of them seems to provide an API that fits the use-case here. Once that’s abstracted away into a nice data structure, I believe the Rust code of this filter will be really nice to read and follow.

Now to the less pleasant parts, and also a small warning to all the people asking for Rust rewrites of everything: of course I introduced a couple of new bugs while translating the code although this was a rather straightforward translation and I tried to be very careful. I’m sure there is also still a bug or two left that I didn’t find while debugging. So always keep in mind that rewriting a project will also involve adding new bugs that didn’t exist in the original code. Or maybe you’re just a better programmer than me and don’t make such mistakes.

Debugging these issues that showed up while testing the code was a good opportunity to also add extensive code comments everywhere so I don’t have to remind myself every time again what this block of code is doing exactly, and it’s something I was missing a bit from the FFmpeg code (it doesn’t have a single comment currently). While writing those comments and explaining the code to myself, I found the majority of these bugs that I introduced and as a side-effect I now have documentation for my future self or other readers of the code.

Fortunately, fixing the issues I introduced myself wasn’t that time-consuming in the end either, but while writing those code comments and also while doing more testing on various audio streams, I found a couple of bugs that already existed in the original FFmpeg C code. Further testing also showed that they caused quite audible distortions on various test streams. These are the bugs that unfortunately took most of the time in the whole process, but at least to my knowledge there are no known bugs left in the code now.

For these bugs in the FFmpeg code I also provided a fix that is merged already, and reported the other two in their bug tracker.

The first one I’d be happy to provide a fix for if my approach is considered correct, but the second one I’ll leave for someone else. Porting over my Rust solution for that one will take some time and getting all the array indexing involved correct in C would require some serious focusing, for which I currently don’t have the time.

Or maybe my solutions to these problems are actually wrong, or my understanding of the original code was wrong and I actually introduced them in my translation, which also would be useful to know.

Overall, while porting the C code to Rust introduced a few new problems that had to be fixed, I would definitely do this again for similar projects in the future. It’s more fun to write, and in my opinion the resulting code is easier to read and better to maintain and extend.

on July 11, 2020 05:34 PM
Lubuntu 19.10 (Eoan Ermine) was released October 17, 2019 and will reach End of Life on Friday, July 17, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 20.04 as soon as possible if you are still running 19.10. After […]
on July 11, 2020 04:01 PM

Full Circle Weekly News #177

Full Circle Magazine


Linux Kernel 5.7 rc4 Out
https://lkml.org/lkml/2020/5/3/306

Linux Kernel 5.5 Is Now End of Life
http://lkml.iu.edu/hypermail/linux/kernel/2004.2/07196.html

Red Hat Enterprise Linux 8.2 Out
https://www.redhat.com/archives/rhelv6-list/2020-April/msg00000.html

Parrot 4.9 Out
https://parrotsec.org/blog/parrot-4.9-release-notes/

IPFire 2.25 Core Update 143 Out
https://blog.ipfire.org/post/ipfire-2-25-core-update-143-released

Oracle Virtualbox 6.1.6 Out
https://www.virtualbox.org/wiki/Changelog-6.1

LibreOffice 6.4.3 Out
https://blog.documentfoundation.org/blog/2020/04/16/libreoffice-6-4-3/

Proton 5.0-6 Out
https://www.gamingonlinux.com/articles/steam-play-proton-50-6-is-out-to-help-doom-eternal-rockstar-launcher-and-more-on-linux.16442

VLC 3.0.10 Out
https://www.videolan.org/vlc/releases/3.0.10.html

Darktable 3.0.2 Out
https://www.darktable.org/2020/04/darktable-302-released/

OpenSUSE Tumbleweed for AWS Marketplace Out
https://9to5linux.com/opensuse-tumbleweed-is-now-available-on-aws-marketplace

KDE 20.04 Applications Out
https://9to5linux.com/kde-applications-20-04-officially-released-this-is-whats-new

Credits:
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on July 11, 2020 11:04 AM

Adventures in Writing

Simon Quigley

The Linux community is a fascinating and powerful space.

When I joined the Ubuntu project approximately five years ago, I (vaguely at the time) understood that there was a profound sense of community and passion everywhere that is difficult to find in other spaces. My involvement has increased, and so has my understanding. I had thought of starting a blog as a means of conveying the information that I stumbled across, but as I was in my early teenage years, my writing skills were very crude and regrettable.

I have finally decided to take the leap. In this blog, I would like to occasionally provide updates on my work, either through focused deep dives on a particular topic, or broad updates on low hanging fruit that has been eliminated. While the articles may be somewhat spontaneous, I decided that an initial post was in order to explain my goals. Feel free to subscribe for more detailed posts in the future, as there are many more to come.

on July 11, 2020 10:59 AM

July 10, 2020

I actually wanted to move on with the node-red series of blog posts, but noticed that there is something more pressing to write down first …

People (on the snapcraft.io forum or IRC) often ask "how would I build a package for Ubuntu Core?" …

If your Ubuntu Core device is, for example, a Raspberry Pi, you won’t easily be able to build for its armhf or arm64 target architecture on your PC, which makes development harder.

You can use the snapcraft.io auto-build service, which builds for all supported architectures automatically, or use fabrica, but if you want to iterate quickly over your code, waiting for the auto-builds is quite time-consuming. Others I have heard of simply keep two SD cards in use, one running classic Ubuntu Server and the second running Ubuntu Core, so they can switch the cards around to test code on Core after building on Server … Not really ideal either, and if you do not have two Raspberry Pis it ends in a lot of reboots, eating into your development time.

There is help!

There is an easy way to do your development on Ubuntu Core by simply using an LXD container directly on the device … you can make code changes and quickly build inside the container, pull the created snap package out of your build container, and install it on the Ubuntu Core host without any reboots or waiting for remote build services. Just take a look at the following recipe of steps:

1) Grab an Ubuntu Core image from the stable channel, run through the setup wizard to set up user and network and ssh into the device:

$ grep Model /proc/cpuinfo 
Model       : Raspberry Pi 3 Model B Plus Rev 1.3
$ grep PRETTY /etc/os-release 
PRETTY_NAME="Ubuntu Core 18"
$

2) Install lxd on the device and set up a container targeting the release that your snapcraft.yaml defines in the base: entry (i.e. base: core -> 16.04, base: core18 -> 18.04, base: core20 -> 20.04):

$ snap install lxd
$ sudo lxd init --auto
$ sudo lxc launch ubuntu:18.04 bionic
Creating bionic
Starting bionic
$

3) Enter the container with the lxc shell command, install the snapcraft snap, clone your tree and edit/build your code:

$ sudo lxc shell bionic
root@bionic:~# snap install snapcraft --classic
...
root@bionic:~# git clone https://github.com/ogra1/htpdate-daemon-snap.git
root@bionic:~# cd htpdate-daemon-snap/
... make any edits you want here ...
root@bionic:~/htpdate-daemon-snap# snapcraft --destructive-mode
...
Snapped 'htpdate-daemon_1.2.2_armhf.snap'
root@bionic:~/htpdate-daemon-snap#

4) Exit the container, pull the snap file you built and install it with the --dangerous flag:

root@bionic:~/htpdate-daemon-snap# exit
logout
$ sudo lxc file pull bionic/root/htpdate-daemon-snap/htpdate-daemon_1.2.2_armhf.snap .
$ snap install --dangerous htpdate-daemon_1.2.2_armhf.snap
htpdate-daemon 1.2.2 installed
$

This is it … for each new iteration you can just enter the container again, make your edits, build, pull and install the snap.

(One additional note: if you want to avoid having to use sudo with all the lxc calls above, add your username to the end of the line reading lxd:x:999: in the /var/lib/extrausers/group file.)
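If you prefer a one-liner for that last step, a sed command along these lines should do it (a sketch; replace <user> with your actual username and double-check the file afterwards):

$ sudo sed -i 's/^lxd:x:999:$/lxd:x:999:<user>/' /var/lib/extrausers/group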

 

on July 10, 2020 11:36 AM

I thought about using a clickbait title like “Is this the best web security book?”, but I just couldn’t do that to you all. Instead, I want to compare and contrast 3 books, all of which I consider great books about web security. I won’t declare any single book “the best” because that’s too subjective. Best depends on where you’re coming from and what you’re trying to achieve.

The 3 books I’m taking a look at are:

Real-World Bug Hunting: A Field Guide to Web Hacking

Real World Bug Hunting

Real-World Bug Hunting is the most recent of the books in this group, and it shows. It covers up-to-date vulnerabilities and mitigations, such as the samesite attribute for cookies, Content Security Policy, and more. As its name suggests, it has a clear focus on finding bugs, and goes into just enough detail about each bug class to help you understand the underlying risks posed by a vulnerability.

The book covers the following vulnerability classes:

  • Open Redirect
  • HTTP Parameter Pollution
  • Cross-Site Request Forgery (CSRF)
  • HTML Injection
  • HTTP Response Splitting
  • Cross-Site Scripting (XSS)
  • SQL Injection
  • Server Side Request Forgery (SSRF)
  • XML External Entity (XXE)
  • Remote Code Execution
  • Memory Corruption (lightly covered)
  • Subdomain Takeover
  • Race Conditions
  • Insecure Direct Object References (IDOR)
  • OAUTH Vulnerabilities
  • Logic Bugs

It definitely has a “bug bounty” focus, which has both pros and cons. On the plus side, it’s directly focused on finding and exploiting bugs and is able to use disclosed vulnerabilities from bug bounties as real-world examples of how these bug classes apply. On the other hand, it has almost no discussion of how to address the bugs from an engineering point of view, and it doesn’t do a great job of going beyond a Proof of Concept stage to real exploitation that an attacker might do. (For the developer side, you might want to consider another No Starch publication, Web Security for Developers.)

Chapters are well thought-out and stand alone if you just want familiarity with some of the topics. Examples are incredibly well documented and understandable, and include just enough to get you going without extraneous code/text.

While this book is an obvious win for those with an interest in doing Bug Bounties (e.g., HackerOne or Bugcrowd), I would also recommend this book to new Penetration Testers or Red Teamers who don’t have experience with web security or haven’t kept up with developments. It’s a great way to get bootstrapped, and it’s quite well written, so it’s also an easy read. It’s not overly long either and lends itself to easily doing a chapter at a time and reading over a couple of weeks if you don’t have much time right now.

The Web Application Hacker’s Handbook: Finding and Exploiting Security Flaws

The Web Application Hacker's Handbook

  • Authors: Dafydd Stuttard, Marcus Pinto
  • Published: 2011 by Wiley
  • 912 Pages
  • Amazon

This is an older book, but so many of the fundamental issues haven’t changed. Cross-site scripting and cross-site request forgery are still some of the most common web vulnerabilities, remaining in the OWASP Top 10 throughout this time period.

This book is an absolute beast of a reference on web security. It took me several attempts to actually (eventually) make my way through the entire thing. It goes into a great deal of detail about each topic, including the fundamentals of web security and the vulnerabilities that arise from mistakes in design or implementation of web applications.

Because Dafydd is the author of Burp Suite, the premier web application testing proxy, the examples given in the book rely heavily on the functionality and tooling provided by Burp. Many of the features/tools are available in the Burp Community Edition, but not all of them. (Though, if you’re serious about web security, you really should get a Burp Professional license – it’s totally worth it.)

As opposed to the bug class oriented approach taken by Real World Bug Hunting, The Web Application Hacker’s Handbook focuses more on the component-wise nature of web applications and the common attacks on each area. It covers many of the same bug classes, but looks at it by application component where they’re likely to occur instead. The general areas considered include:

  • Web Application Security Basics
  • Enumeration/Mapping
  • Client-Side Controls
  • Authentication
  • Session Management
  • Access Control
  • Data Storage
  • Backends
  • Application Logic Flaws
  • Cross Site Scripting
  • Attacking Users
  • Automating Attacks
  • Architecture Problems
  • Underlying Application Server Bugs
  • Source Review
  • Web Hacking Methodologies

The Web Application Hacker’s Handbook is the most in-depth web security book I’ve been able to find. Unfortunately, it’s now 9 years old, and a lot has changed in the web space. While most, if not all, of the vulnerabilities still exist, there may be many mitigations that are not discussed in the book. You’ll probably need some additional reading beyond this book to get fully up to speed, but on the other hand, you’ll have a very deep understanding of the ways in which web applications can go wrong.

Additionally, if you want to become a Burp Suite power user, going through this book will give you a big boost up due to the emphasis on using Burp Suite to its fullest.

The Tangled Web: A Guide to Securing Modern Web Applications

The Tangled Web

(Full disclosure: I formerly worked with Michal on the product security team at Google. I’d first read the book prior to that, and it in no way affects my ability to recommend this as a great book.)

I almost didn’t include this book in comparison to the other two because it’s so different. Rather than focusing strictly on the common classes of web bugs, it focuses on how the web works, how the various vulnerabilities came to be, and how new vulnerabilities might occur. It does this by examining web servers, web applications, and web browsers, and their interactions (which turn out to be quite complex if you’re only familiar with the basics of HTTP).

Instead of vulnerability classes, it focuses on web technologies:

  • HTTP
  • HTML
  • CSS
  • JavaScript
  • Same Origin Policy
  • Security Boundaries

If you’re looking to take a new look at web vulnerabilities and already have a fundamental understanding of the basics, this is a great opportunity to expand your understanding. While it does talk about the common vulnerabilities, it also exposes strange bug classes, like vulnerabilities only exploitable on a single browser due to weird parsing bugs, or the confusion in parsing the same document between a client and a server.

After all, the reason Cross-Site Scripting exists is that something the server understood as “data” is interpreted by the browser as “code” to be executed. HTTP Response Splitting is also a vulnerability brought about by mixing data and metadata (headers) together.

This book is a fascinating read and has wonderful examples, and I feel certain that almost everyone will discover something they didn’t already know about web security. Even though The Tangled Web is a little bit old, it’s worth reading to get an understanding of the bad things that can happen and the strange edge cases you might never have considered before.

One of my favorite parts of the book is the presence of a “cheatsheet” in each chapter that summarizes the concepts and how to apply them. This makes the book both a good introduction and a good reference, which is rare to find in the same publication.

It’s worth noting that the book is a little bit less of an easy read than I would like. In some places it seems to jump around and lacks a clear path forward. Another downside that is directly related to the age of the book is the number of examples that focus on Internet Explorer, which is obviously no longer a significant concern on the Internet.

So Which Book?

Well, like I said earlier, I’m not going to declare a “best” book here. If you’re completely new to web security or just looking to do bug bounties, I’d suggest Real-World Bug Hunting as the easiest to digest and the most direct route to those goals. If you’re looking for the most content while still focusing on attacking applications, I’d go with The Web Application Hacker’s Handbook. Finally, if you’re interested in the most esoteric edge cases, The Tangled Web is your go-to, but it’s more of a supplement to the others if you intend to do a lot of web assessments.

Of course, I’ve read all three of the books, and I’ve learned something from all of them. If you have the time and patience (as well as the desire to get much deeper into web security), I think it would be worth your time to read more than one, possibly even all of them, though maybe I’m just an outlier in that case.

on July 10, 2020 07:00 AM

July 09, 2020

This article originally appeared on Joshua Powers’ blog

Ubuntu is the industry-leading operating system for use in the cloud. Every day millions of Ubuntu instances are launched in private and public clouds around the world. Canonical takes pride in offering support for the latest cloud features and functionality.

As of today, all Ubuntu Amazon Web Services (AWS) Marketplace listings are now updated to include support for the new Graviton2 instance types. Graviton2 is Amazon’s next-generation ARM processor, delivering increased performance at a lower cost. This announcement includes three new instance types:

  • M6g for general-purpose workloads with a balance of CPU, memory, and network resources
  • C6g for compute-optimized workloads such as encoding, modeling, and gaming
  • R6g for memory-optimized workloads, which process large datasets in memory like databases

Users on Ubuntu 20.04 LTS (Focal) can take advantage of additional optimizations found on newer ARM-based processors. The large-system extensions (LSE) are enabled by using the included libc6-lse package, which can result in orders of magnitude performance improvements. Ubuntu 18.04 LTS (Bionic) will shortly be able to take advantage of this change as well. Additionally, Amazon will soon launch instances with locally attached NVMe storage called M6gd, C6gd, and R6gd. With these instance types, users can further increase performance with additional low-latency, high-speed storage.
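On an Ubuntu 20.04 LTS Graviton2 instance, opting into those LSE-optimized routines should just be a matter of installing the package mentioned above (a minimal sketch):

sudo apt update
sudo apt install libc6-lse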

Launch Ubuntu instances on the AWS Marketplace today.

on July 09, 2020 11:08 PM

Ep 98 – Nonagon

Podcast Ubuntu Portugal

A lovely conversation about life, animals, consumer society and the general state of things. Between PinePhones and glasses of white wine, that is how yet another episode of PUP came together.

You know the drill: listen, subscribe and share!

  • https://www.youtube.com/watch?v=p2Q_SQKK7EQ
  • https://makealinux.app/
  • https://discourse.ubuntu.com/t/graphical-desktop-in-multipass/16229/5
  • https://www.humblebundle.com/software/python-programming-software?partner=pup
  • https://www.humblebundle.com/books/protect-your-stuff-apress-books?partner=pup
  • https://www.humblebundle.com/books/technology-essentials-for-business-manning-publications-books?partner=pup
  • https://www.humblebundle.com/books/data-science-essentials-books?partner=PUP
  • https://www.humblebundle.com/books/circuits-electronics-morgan-claypool-books?partner=PUP
  • https://libretrend.com/specs/librebox

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8 dollars.
We think it is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on July 09, 2020 09:45 PM

KDE is All About the Apps, as I hope everyone knows; we have top-quality apps that we are pushing out to all channels to spread freedom and goodness.

As part of promoting our apps we updated the kde.org/applications pages so folks can find out what we make.  Today we’ve added some important new features:

Here on the KMyMoney page you can see the lovely new release that they made recently along with the source download link.

The “Install on Linux” link has been there for a while and uses the Appstream ID to open Discover which will offer you the install based on any installation source known to Discover: Packagekit, Snap or Flatpak.

Here in the Krita page you can see it now offers downloads from the Microsoft Store and from Google Play.

Or if you prefer a direct download it links to AppImages, macOS and Windows installs.

And here’s the KDE connect page where you can see they are true Freedom Lovers and have it on the F-Droid store.

All of this needs some attention from the people who do the releases. The KDE Appstream Guidelines have the info on how to add this metadata. Remember it needs to be added to the master branch, as that is what the website scans.

There is some tooling to help: the appstream-metainfo-release-update script and recent versions of appstreamcli.

Help needed! If you spot out-of-date info on the site, do let me or another Web team spod know. Future work includes getting more apps onto more stores and also making the release service scripts do more automated additions of this metadata. And some sort of system that scans the download site, or maybe uses Debian watch files, to check for the latest release and notify someone if it’s not in the Appstream file would be super.

Thanks to Carl for much of the work on the website, formidable!

 

on July 09, 2020 04:22 PM

S13E16 – Owls

Ubuntu Podcast from the UK LoCo

This week we’ve been re-installing Ubuntu 20.04. Following WWDC, we discuss Linux Desktop aspirations, bring you some command line love and go over all your wonderful feedback.

It’s Season 13 Episode 16 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been re-installing Ubuntu.
  • We discuss Linux Desktop aspirations.
  • We share a Command Line Lurve:
sudo add-apt-repository ppa:bashtop-monitor/bashtop
sudo apt update
sudo apt install bashtop
bashtop

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on July 09, 2020 02:00 PM

Updates are an integral part of the software lifecycle. Quite often, they bring improvements, vital security patches – and sometimes, unfortunately, bugs, too. In mission-critical environments, it is important to assert a high degree of oversight and precision over updates.

Snaps come with a built-in automatic update mechanism, whereby snaps are refreshed to a new version whenever there’s a new release in the Snap Store. Typically, the refresh occurs four times a day, and in the vast majority of cases it completes seamlessly, without any issues. However, there are cases where snap updates need to be deferred or postponed, or simply managed with a greater, more refined level of control. There are several ways users can achieve that.

Snap (refresh) control

The snap updates schedule is governed by four system-wide options. These include:

  • Refresh.timer – Defines the refresh frequency and schedule. You can use this parameter to set when the snaps will refresh, so this does not conflict with your other activities – like work meetings, data backups, or similar.
  • Refresh.hold – Delays the next refresh until the defined time and date. The hold option allows you to postpone updates for up to 60 days. In combination with the timer option, you can set very specific time windows for when the snaps are updated.
  • Refresh.metered – Pauses refresh updates when the network connection is metered. By default, snap refresh is enabled over metered connections. However, to conserve data, you can pause the refreshes on such networks.
  • Refresh.retain – Sets how many revisions of a snap are stored on the system. By default, the last three revisions of the installed snap will be kept.

The combination of these four options gives users quite a bit of flexibility in how they control snap updates. In particular, the timer and hold can be used to create set windows for updates, allowing you to perform any necessary pre- and post-update tasks, like functionality checks, data backups, and more.
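The refresh.retain option is the only one not demonstrated further below, so here is a minimal example (the value is chosen purely for illustration) that keeps just two revisions of each snap on disk:

sudo snap set system refresh.retain=2
snap get system refresh.retain
2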

The time-snap continuum

Let’s have a look at some practical examples. For instance, you may want to set your snap updates only to run overnight, between 01:00 and 02:00 (in the 24h format).

sudo snap set system refresh.timer=01:00-02:00

After you set the schedule window, you can check what the system reports:

snap refresh --time
timer: 01:00-02:00
last: today at 12:38 PST
next: tomorrow at 01:00 PST

There are quite a few variations available: you can set the schedule at specific hours or time windows for each day of the week, omit some days, and also choose in which particular week of a month the update should run. You can use the values 1-4, e.g. mon3 will be the third Monday of the month, while 5 denotes the last week in the month, as no Earthly calendar currently has more than 31 days.
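For example, to only allow refreshes on the third Monday of each month between 01:00 and 02:00, the command would look something like this (a sketch based on the description above; run snap refresh --time afterwards to confirm the schedule was accepted):

sudo snap set system refresh.timer=mon3,01:00-02:00
snap refresh --time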

Setting the hold interval takes a specific date format, as it needs to conform to RFC 3339. This may sound like a page straight out of the Vogon book on time management, so you can use the following command to convert dates into the right format:

date --date="TMZ YYYY-MM-DD HH:MM:SS" +%Y-%m-%dT%H:%M:%S%:z

For instance:

date --date="PST 2020-08-01 13:00:00" +%Y-%m-%dT%H:%M:%S%:z
2020-08-01T13:00:00+01:00

Then, you can set the refresh value with the formatted date string:

sudo snap set system refresh.hold=2020-08-01T13:00:00+01:00
sudo snap get system refresh.hold
2020-08-01T13:00:00+01:00

Once the refresh hold is in place, you can check the refresh schedule:

snap refresh --time
timer: 01:00-02:00
last: today at 12:38 PST
hold: in 31 days, at 13:00 PST
next: tomorrow at 01:00 PST (but held)

As you can see, the information now combines parameters both from the timer and hold settings. The next update is meant to be tomorrow at 01:00, as defined by the timer, but it will be held – for 31 days – until the update deferment expires.

Similarly, you can configure the updates on metered connections. Setting the value to hold will prevent updates, while changing the value to null will allow updates to resume. 

sudo snap set system refresh.metered=hold
sudo snap set system refresh.metered=null

Again, you can combine this option with the timer and hold settings to create a granular and precise update schedule that will not interfere with your critical tasks, ensure maximum consistency, and also allow you to receive the necessary functional and security patches.

Then, occasionally, you may want to check which snaps will update on the next refresh. This will give you an indication of the pending list of new snap revisions your system will receive:

snap refresh --list
Name       Version  Rev    Publisher   Notes
lxd        4.3      16044  canonical✓  -
snapcraft  4.1.1    5143   canonical✓  classic

Now, the full list of installed snaps is longer. For example, the system currently has lxd version 4.2 installed:

snap list lxd
Name  Version  Rev    Tracking       Publisher   Notes
lxd   4.2      15878  latest/stable  canonical✓  -

Refresh awareness

This is another feature that you can use to control updates. In some cases, you may be running a vital task that must not be interrupted in any way. To that end, you can use refresh awareness to make sure an application will not be updated while it is running. If you try to run a manual snap update while the application is running (and the awareness feature is enabled), you will see a message like the one below:

snap refresh okular --candidate
error: cannot refresh "okular": snap "okular" has running apps (okular)

Summary

Automatic updates can never be a blanket solution for all use cases. Desktop, server and IoT environments all have their particular requirements and sensitivity, which is why snapd comes with a fairly extensive update control mechanism. The combination of timed schedules, update delay of up to 60 days, metered connections functionality, and snap refresh awareness and update inhibition offer a wide and pragmatic range of options and settings, through which the users can create a robust, reliable software update regime. Furthermore, business customers can configure their own snap store proxy.

Hopefully, this article clears some of the fog surrounding snap updates, and what users can do with them. If you have any comments or suggestions, please join our forum for a discussion.

Photo by Chris Leipelt on Unsplash.

on July 09, 2020 11:14 AM
Ubuntu is the industry-leading operating system for use in the cloud. Every day millions of Ubuntu instances are launched in private and public clouds around the world. Canonical takes pride in offering support for the latest cloud features and functionality. As of today, all Ubuntu Amazon Web Services (AWS) Marketplace listings are now updated to include support for the new Graviton2 instance types. Graviton2 is Amazon’s next-generation ARM processor delivering increased performance at a lower cost.
on July 09, 2020 12:00 AM

July 06, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 638 for the week of June 28 – July 4, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on July 06, 2020 11:55 PM

Wrong About Signal

Bryan Quigley

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed it to be quick and easy for users to get set up. Using existing phone numbers makes that easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: Where profile data lives - Your Devices]

After PIN

If you lose your phone you can use your PIN to recover some parts of your profile and other information. I am unsure whether the Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components of this setup and it's clear that Signal wanted to make this as secure as possible. They wanted to make this a distributed setup so they don't even need to be the only ones hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: Where profile data lives - Your Devices, Signal servers]

PIN Reuse

With user-chosen PINs now mandated, my guess is that the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start with, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get Mandate to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure ones, but their system to unregister when you uninstall has been down for at least a week, likely longer. Without that, uninstalling Signal means:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive

Conclusion

In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. There is a possibility that the push for mandatory cloud-based PINs, despite all of the pushback, is that Signal knows of active attacks that these PINs would protect against. It likely would be related to using phone numbers.

I'm trying out the Riot Matrix client. I'm not actively encouraging others to join me, but just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments

Comments

kousu posted

In the XMPP world, Conversations has been leading the charge to modernize XMPP, with an index of popular public groups (jabber.network) and a server validator. XMPP is mobile-battery friendly, and supports server-side logs wrapped in strong, multi-device encryption (in contrast to Signal, your keys never leave your devices!). Video calling even works now. It can interact with IRC and Riot (though the Riot bridge is less developed). There is a beautiful Windows client, a beautiful Linux client and a beautiful terminal client, two good Android clients, a beautiful web client which even supports video calling (and two others). It is easy to get an account from one of the many servers indexed here or here, or by looking through libreho.st. You can also set up your own with a little bit of reading. Snikket is building a one-click Slack-like personal-group server, with file-sharing, welcome channels and shared contacts, or you can integrate it with NextCloud. XMPP has solved a lot of problems over its long history, and might just outlast all the centralized services.

Bryan Reply

I totally forgot about XMPP, thanks for sharing!

on July 06, 2020 08:18 PM

July 05, 2020

Ep 97 – Pente

Podcast Ubuntu Portugal

An episode balanced by the natural imbalance that the two hosts on duty have accustomed you to. On the way to the hundredth episode, enjoy yet another PUP adventure.

You know the drill: listen, subscribe and share!

  • https://libretrend.com/specs/librebox
  • https://www.humblebundle.com/books/technology-essentials-for-business-manning-publications-books?partner=PUP
  • https://www.humblebundle.com/books/circuits-electronics-morgan-claypool-books?partner=PUP

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8 dollars.
We think it is worth well over 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on July 05, 2020 10:31 AM

Encryption, hashing, and encoding are commonly confused by those new to the information security field. I see them confused even by experienced software engineers, developers, and new hackers. It’s really important to understand the differences – not just for semantics, but because the actual uses of them are vastly different.

I do not claim to be the first to try to clarify this distinction, but there’s still a lack of clarity, and I wanted to include some exercises for you to give a try. I’m a very hands-on person myself, so I’m hoping the hands-on examples are useful.

Encoding

Encoding is a way of transforming data from one representation to another in a manner that can be reversed. Encoding can be used to make data pass through interfaces that restrict byte values (e.g., character sets), to allow data to be printed, or for other transformations that allow data to be consumed by another system. Some of the most commonly known encodings include hexadecimal, Base 64, and URL encoding.

Reversing encoding results in the exact input given (i.e., is lossless), and can be done deterministically and requires no information other than the data itself. Lossless compression can be considered encoding in any format that results in an output that is smaller than the input.

While encoding may make it so that the data is not trivially recognizable by a human, it offers no security properties whatsoever. It does not protect data against unauthorized access, it does not make it difficult to be modified, and it does not hide its meaning.

Base 64 encoding is commonly used to make arbitrary binary data pass through systems only intended to accept ASCII characters. Specifically, it uses 64 characters (hence the name Base 64) to represent data, by encoding each 6 bits of raw data as a single output character. Consequently, the output is approximately 133% of the size of the input. The default character set (as defined in RFC 4648) includes the upper and lower case letters of the English alphabet, the digits 0-9, and + and /. The spec also defines a “URL safe” encoding where the extra characters are - and _.

An example of base 64 encoding, including non-printable characters, using the base64 command line tool (-d is given to decode):

$ echo -e 'Hello\n\tWorld\n\t\t!!!' | base64
SGVsbG8KCVdvcmxkCgkJISEhCg==
$ echo 'SGVsbG8KCVdvcmxkCgkJISEhCg==' | base64 -d
Hello
        World
                !!!

Notice that the tabs and newlines become encoded (along with the other characters) in a format that uses only printable characters and could easily be included in an email, webpage, or almost any other protocol that supports text. It is for this reason that base 64 is commonly used for things like HTTP Headers (such as the Authorization header), tokens in URLs, and more.
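For example, HTTP Basic authentication puts base 64 encoded “user:password” credentials (prefixed with the word Basic) into the Authorization header. You can reproduce that encoding step with the same tool (the credentials here are obviously made up):

$ echo -n 'user:password' | base64
dXNlcjpwYXNzd29yZA==
$ echo 'dXNlcjpwYXNzd29yZA==' | base64 -d
user:password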

Also note that nothing other than the encoded data is needed to decode it. There’s no key, no password, no secret involved, and it’s completely reversible. This demonstrates the lack of any security property offered by encoding.

Encryption

Encryption involves the application of a code or cipher to input plaintext to render it into “ciphertext”. Decryption is the reversal of that process, converting “ciphertext” into “plaintext”. All secure ciphers involve the use of a “key” that is required to encrypt or decrypt. Very early ciphers (such as the Caesar cipher or Vigenère cipher) are not at all secure against modern techniques. (Actually they can usually be brute forced by hand even.)

Modern ciphers are designed around “Kerckhoffs’s principle”, the idea that a properly designed cipher assumes your opponent has the cipher algorithm (but not the key):

It should not require secrecy, and it should not be a problem if it falls into enemy hands;

Encryption is intended to provide confidentiality (and sometimes integrity) for data at rest or in transit. By encrypting data, you render it unusable to anyone who does not possess the key. (Note that if your key is weak, someone can perform a dictionary or brute force attack to retrieve your key.) It is a two way process, so it’s only suitable when you want to provide confidentiality but still be able to retrieve the plaintext.

I’ll do a future Security 101 post on the correct applications of cryptography, so I won’t currently go into anything beyond saying that if you roll your own crypto, you will do it wrong. Even cryptosystems designed by professional cryptographers undergo peer review and multiple revisions to arrive at something secure. Do not roll your own crypto.

Using the OpenSSL command line tool to encrypt data using the AES-256 cipher with the password foobarbaz:

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz | hexdump -C
00000000  53 61 6c 74 65 64 5f 5f  08 65 ef 7e 17 31 5d 31  |Salted__.e.~.1]1|
00000010  55 3c d3 b7 8b a5 47 79  1d 72 16 ab fe 5a 0e 62  |U<....Gy.r...Z.b|
00000020

I performed a hexdump of the data because openssl would output the raw bytes, and many of those bytes are non-printable sequences that would make no sense (or corrupt my terminal). Note that if you run the exact same command twice, the output is different!

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz | hexdump -C
00000000  53 61 6c 74 65 64 5f 5f  d4 36 43 bf de 1c 9c 1e  |Salted__.6C.....|
00000010  e4 d4 72 24 97 d8 da 95  02 f5 3e 3f 60 a4 0a aa  |..r$......>?`...|
00000020

This is because the function that converts a password to an encryption key incorporates a random salt and the encryption itself incorporates a random “initialization vector.” Consequently, you can’t compare two encrypted outputs to confirm that the underlying plaintext is the same – which also means an attacker can’t do that either!

The OpenSSL command line tool can also base 64 encode the output. Note that this is not part of the security of your output, this is just for the reasons discussed above – that the encoded output can be handled more easily through tools expecting printable output. Let’s use that to round-trip some encrypted data:

$ echo 'Hello world' | openssl enc -aes-256-cbc -pass pass:foobarbaz -base64
U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=
$ echo 'U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=' | openssl enc -d -aes-256-cbc -pass pass:foobarbaz -base64
Hello world

What if we get the password wrong? Say, instead of foobarbaz I provide bazfoobar:

$ echo 'U2FsdGVkX18dIL775O8wHfVz5PVObQDijxwTUHiSlK4=' | openssl enc -d -aes-256-cbc -pass pass:bazfoobar -base64
bad decrypt
140459245114624:error:06065064:digital envelope routines:EVP_DecryptFinal_ex:bad decrypt:../crypto/evp/evp_enc.c:583:

While the error may be a little cryptic, it’s clear that this is not able to decrypt with the wrong password, as we expect.

Hashing

Hashing is a one way process that converts some amount of input to a fixed output. Cryptographic hashes are those that do so in a manner that is computationally infeasible to invert (i.e., to get the input back from the output). Consequently, cryptographic hashes are sometimes referred to as “one way functions” or “trapdoor functions”. Non-cryptographic hashes can be used as basic checksums or for hash tables in memory.

Examples of cryptographic hashes include MD5 (broken), SHA-1 (broken), SHA-256/384/512, and the SHA-3 family of functions. Do not use anything based on MD5 or SHA-1 for any new applications.

There are three main security properties of a cryptographic hash:

  1. Collision resistance is the inability to find two different inputs that give the same output. If a hash is not collision resistant, you can produce two documents that would both have the same hash value (used in digital signatures). The Shattered Attack was the first Proof of Concept for a collision attack on SHA-1. Both inputs can be freely chosen by the attacker.
  2. Preimage resistance is the inability to “invert” or “reverse” the hash by finding the input to the hash function that produced that hash value. For example, if I tell you I have a SHA-256 hash of 68b1282b91de2c054c36629cb8dd447f12f096d3e3c587978dc2248444633483, it should be computationally infeasible to find the input (“The quick brown fox jumped over the lazy dog.”).
  3. 2nd preimage resistance is the inability to find a 2nd preimage: that is, a 2nd input that gives the same output. In contrast to the collision attack, the attacker only gets to choose one of the inputs here – the other is fixed. (Imagine someone gives you a copy of a file, and you want to modify it but have the same hash as the file they gave you.)

Hashing is commonly used in digital signatures (as a way of condensing the data being signed, since many public key crypto algorithms are limited in the amount of data they can handle). Hashes are also used for storing passwords to authenticate users.

Note that, although preimage resistance may be present in the hashing function, this is defined for an arbitrary input. When hashing input from a user, the input space may be sufficiently small that an attacker can try inputs in the same function and check if the result is the same. A brute force attack occurs when all inputs in a certain range are tried. For example, if you know that the hash is of a 9 digit national identifier number (i.e., a Social Security Number), you can try all possible 9 digit numbers in the hash to find the input that matches the hash value you have. Alternatively, a dictionary attack can be tried where the attacker tries a dictionary of common inputs to the hash function and, again, compares the outputs to the hashes they have.
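As a toy illustration of a dictionary attack, the loop below hashes each candidate from a hypothetical wordlist.txt and compares it against a target hash (the target used here is the SHA-256 of 'foo bar baz' from the examples further down, so if the wordlist happens to contain that phrase, the loop will print it):

target=dbd318c1c462aee872f41109a4dfd3048871a03dedd0fe0e757ced57dad6f2d7
while read -r candidate; do
  # hash the candidate the same way as echo -n, i.e. without a trailing newline
  if [ "$(printf '%s' "$candidate" | sha256sum | cut -d' ' -f1)" = "$target" ]; then
    echo "Match found: $candidate"
    break
  fi
done < wordlist.txt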

You’ll often see hashes encoded in hexadecimal, though base 64 is not too uncommon, especially with longer hash values. The output of the hash function itself is merely a set of bytes, so the encoding is just for convenience. Consider the command line tools for common hashes:

$ echo -n 'foo bar baz' | md5sum
ab07acbb1e496801937adfa772424bf7  -
$ echo -n 'foo bar baz' | sha1sum
c7567e8b39e2428e38bf9c9226ac68de4c67dc39  -
$ echo -n 'foo bar baz' | sha256sum
dbd318c1c462aee872f41109a4dfd3048871a03dedd0fe0e757ced57dad6f2d7  -

Even a tiny change in the input results in a completely different output:

$ echo -n 'foo bar baz' | sha256sum
dbd318c1c462aee872f41109a4dfd3048871a03dedd0fe0e757ced57dad6f2d7  -
$ echo -n 'boo bar baz' | sha256sum
bd62b6e542410525d2c0d250c4f69b64e42e57e356e5260b4892afef8eacdfd3  -

Salted & Strengthened Hashing

There are special properties that are desirable when using hashes to store passwords for user authentication.

  1. It should not be possible to tell if two users have the same password.
  2. It should not be possible for an attacker to precompute a large dictionary of hashes of common passwords to lookup password hashes from a leak/breach. (Attackers would build lookup tables or more sophisticated structures called “rainbow tables”, enabling them to quickly crack hashes.)
  3. An attacker should have to attack the hashes for each user separately instead of being able to attack all at once.
  4. It should be relatively slow to perform brute force and dictionary attacks against the hashes.

“Salting” is a process used to accomplish the first three goals. A random value, called the “salt” is added to each password when it is being hashed. This way, two cases where the password is the same result in different hashes. This makes precomputing all hash/password combinations prohibitively expensive, and two users with the same password (or a user who uses the same password on two sites) results in different hashes. Obviously, it’s necessary to include the same salt value when validating the hash.

Sometimes you will see a password hash like $1$4zucQGVU$tx2SvCtH7SYaiH.4ASzNt.. The $ characters separate the hash into 3 fields. The first, 1, indicates the hash type in use. The next, 4zucQGVU is the salt for this hash, and finally, tx2SvCtH7SYaiH.4ASzNt. is the hash itself. Storing it like this allows the salt to be easily retrieved to compute a matching hash when the password is input.
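You can generate a hash in that exact format yourself. For instance, OpenSSL’s passwd command with -1 selects the MD5-based crypt scheme shown above, and its output will have the same $1$<salt>$<hash> shape (the password and salt here are arbitrary, and as noted earlier, MD5-based schemes are shown only to illustrate the format, not as a recommendation):

$ openssl passwd -1 -salt 4zucQGVU 'example password'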

The fourth property can be achieved by making the hashing function itself slow, using large amounts of memory, or by repeatedly hashing the password (or some combination thereof). This is necessary because the base hashing functions are fast, even for cryptographically secure hashes. For example, the password cracking program hashcat can compute 2.8 billion plain SHA-256 hashes per second on a consumer graphics card. On the other hand, the intentionally hard function scrypt only hashes at 435 thousand per second. This is more than 6000 times slower. Both are a tiny delay to a single user logging in, but the latter is a massive slowdown to someone hoping to crack a dump of password hashes from a database.

Common Use Cases

To store passwords for user authentication, you almost always want a memory- and CPU-hard algorithm. This makes it difficult to try large quantities of passwords, whether in a brute force or a dictionary attack. The current state of the art is the Argon2 function, the winner of the Password Hashing Competition. (Which, while styled after a NIST process, was not run by NIST but by a group of independent cryptographers.) If, for some reason, you cannot use that, you should consider scrypt, bcrypt, or at least PBKDF2 with a very high iteration count (e.g., 100000+). By now, however, almost all platforms have support for Argon2 available as an open-source library, so you should generally use it.
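
For a quick feel of what an Argon2 hash looks like, the reference implementation ships a small command line tool (packaged as argon2 on Ubuntu and Debian, if memory serves). Treat the exact invocation as an assumption and check argon2 -h on your system; the general shape is: the password on stdin, the salt as an argument, and -id/-t/-m/-p selecting Argon2id, the iteration count, memory (as a power of two, in KiB) and parallelism.

$ echo -n 'correct horse battery staple' | argon2 "$(openssl rand -hex 8)" -id -t 3 -m 16 -p 4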

To protect data from being inspected, you want to encrypt it. Use a high-level crypto library like NaCl or libsodium. (I’ll be expanding on this in a future post.) You will need strong keys (ideally, randomly generated) and will need to keep those keys secret to prevent the underlying data from being exposed. One interesting application of encryption is the ability to virtually delete a collection of data by destroying the key – this is often done for offline/cold backups, for example.

To create an opaque identifier for some data, you want to hash it. For example, it’s fairly common to handle uploaded files by hashing the file and storing it under a filename derived from the hash of the file contents. This provides a predictable filename format and length, and prevents two files from ending up with the same filename on the server. (Unless they have the exact same contents, but then the duplication does not matter.) This can also be used for sharding: because the values are uniformly distributed with a good hashing function, you can do things like using the first byte of the hash to pick which of a set of distributed storage repositories a file lands in.
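
A minimal sketch of that upload-handling pattern, with a made-up file name and storage directory: the file is stored under its own SHA-256, and the first byte of the hash (two hex characters) picks the shard directory.

hash=$(sha256sum upload.bin | cut -d' ' -f1)
shard=${hash:0:2}                      # first byte of the hash, as two hex characters
mkdir -p "storage/$shard"
mv upload.bin "storage/$shard/$hash"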

To allow binary data to be treated like plain text, you can use encoding. You should not use encoding for any security purpose. (And yes, I feel this point deserves repeating numerous times.)

Misconceptions

There are big misconceptions that I see repeated, most often by people outside the hacking/security industry space. A lot of these seem to be over the proper use of these technologies and confusion over when to select one.

Encoding is Not Encryption

For whatever reason, I see lots of references to “base64 encryption.” (In fact, there are currently 20,000 Google results for that!) As I discussed under encoding, base64 (and other encodings) do not do encryption – they offer no confidentiality to the underlying data, and do not protect you in any way. Even though the meaning of the data may not be immediately apparent, it can still be recovered with little effort and with no key or password required. As one article puts it, Base64 encryption is a lie.

If you think you need some kind of security, some kind of encryption, something to be kept secret, do not look to encodings for this! Use proper encryption with a well-developed algorithm and mode of operation, and preferably use a library or tool that completely abstracts this away from you.

Encryption is Not Hashing

There are somewhere upwards of half a million webpages talking about password encryption. Unfortunately, the only time passwords should be encrypted is when they will need to be retrieved in plaintext, such as in a password manager. When using passwords for authentication, you should store them as a strongly salted hash to avoid attackers being able to retrieve them in the case of database access.

Encryption is a two-way process, hashing is one way. To validate that the server and the user have the same password, we merely apply the same transformation (the same hash with the same salt) to the input password and compare the outputs. We do not decrypt the stored password and compare the plaintext.
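
A sketch of that validation flow, using the crypt-style string from earlier in this post (the password tried here is a placeholder; I have no idea what the original one was): extract the salt from the stored value, hash the candidate with the same salt, and compare the two strings.

stored='$1$4zucQGVU$tx2SvCtH7SYaiH.4ASzNt.'
salt=$(echo "$stored" | cut -d'$' -f3)                  # field 3 of $1$<salt>$<hash>
candidate=$(openssl passwd -1 -salt "$salt" 'password attempt')
if [ "$candidate" = "$stored" ]; then echo "password matches"; else echo "wrong password"; fi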

Conclusion

I hope this has been somewhat useful in dispelling some of the confusion between encryption, hashing, and encoding. Please let me know if you have feedback.

on July 05, 2020 07:00 AM

July 02, 2020

S13E15 – Vertical chopsticks

Ubuntu Podcast from the UK LoCo

This week we’ve been helping HMRC and throwing a 10th birthday party. We discuss “Rolling Rhino”, split personality snaps, UBPorts supporting Project Treble devices, ZFS on Ubuntu 20.04 plus our round-up from the tech news.

It’s Season 13 Episode 15 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on July 02, 2020 02:00 PM

July 01, 2020

While there is a node-red snap in the snap store (to be found at https://snapcraft.io/node-red with the source at https://github.com/dceejay/nodered.snap), it does not really allow you to do a lot with it on e.g. a Raspberry Pi if you want to read sensor data that does not actually come in via the network …

The snap is missing all essential interfaces that could be used for any sensor access (gpio, i2c, Bluetooth, spi or serial-port) and it does not even come with basics like hardware-observe, system-observe or mount-observe to get any systemic info from the device it runs on.

While the missing interfaces are indeed a problem, there is the fact that strict snap packages need to be self contained and hardly have any ability to dynamically compile any software … Now, if you know nodejs and npm (or yarn or gyp) you know that additional node modules often need to compile back-end code and libraries when you add them to your nodejs install. Technically it is actually possible to make “npm install” work, but it is indeed hard to predict what a user may want to install in her installation, so you would also have to ship all possible build systems (gcc, perl, python, you name it) plus all possible development libraries any of the added modules could ever require …

That way you might technically end up with a full OS inside the snap package. Not really a desirable thing to do (beyond the fact that this would, even with the high compression that snap packages use, end up in a gigabytes-big snap).

So let’s take a look at what’s already there in the upstream snapcraft.yaml, where we can find a line like the following:

npm install --prefix $SNAPCRAFT_PART_INSTALL/lib node-red node-red-node-ping node-red-node-random node-red-node-rbe node-red-node-serialport

This is actually great, so we can just append any modules we need to that line …

Now, as noted above, while there are many node-red modules that will simply work this way, many of those interesting for accessing sensor data will need additional libs that we will need to include in the snap as well …

In Snapcraft you can easily add a dependency by simply adding a new part to the snapcraft.yaml, so let’s do this with an example:

Let’s add the node-red-node-pi-gpio module, and let’s also break up the above long line into two and use a variable that we can append more modules to:

DEFAULT_MODULES="npm node-red node-red-node-ping node-red-node-random node-red-node-rbe \
                 node-red-node-serialport node-red-node-pi-gpio"
npm install --prefix $SNAPCRAFT_PART_INSTALL/lib $DEFAULT_MODULES

So this should get us the GPIO support for the Pi into node-red …

But ! Reading the module documentation shows that this module is actually a front-end to the RPi.GPIO python module, so we need the snap to ship this too … luckily snapcraft has an easy to use python plugin that can pip install anything you need. We will add a new part above the node-red part:

parts:
...
  sensor-libs:
    plugin: python
    python-version: python2
    python-packages:
      - RPi.GPIO
  node-red:
    ...
    after: [ sensor-libs ]

Now Snapcraft will pull in the python RPi.GPIO module before it builds node-red (see the “after:” statement I added) and node-red will find the required RPi.GPIO lib when compiling the node-red-node-pi-gpio node module. This will get us all the bits and pieces to have GPIO support inside the node-red application …

Snap packages run confined; this means they can not see anything of the system that we do not allow them to see via an interface connection. Remember that I said above that the upstream snap is lacking some such interfaces? So let’s better add them to the “apps:” section of our snap (the pi-gpio node module wants to access /dev/gpiomem as well as the gpio device-node itself, so we make sure both these plugs are available to the app):

apps:
  node-red:
    command: bin/startNR
    daemon: simple
    restart-condition: on-failure
    plugs:
      ...
      - gpio
      - gpio-memory-control

And this is it, we have added GPIO support to the node-red snap source. If we re-build the snap, install it on an Ubuntu Core device and do a:

snap connect node-red:gpio-memory-control
snap connect node-red:gpio pi:bcm-gpio-4

We will be able to use node-red flows using this GPIO. (For other GPIOs you indeed need to connect to the pi:bcm-gpio-* slot of your choice; the mapping for Ubuntu Core follows https://pinout.xyz/.)

I have been collecting a good bunch of possible modules in a forked snap that can be found at https://github.com/ogra1/nodered-snap, a binary of this is at https://snapcraft.io/node-red-rpi, and I plan a series of more node-red centric posts in the next days telling you how to wire things up, with example flows and some deeper insight into how to make your node-red snap talk to all the Raspberry Pi interfaces, from i2c to Bluetooth.

Stay tuned !

on July 01, 2020 04:11 PM

June 30, 2020

Are you running the development release of Kubuntu Groovy Gorilla 20.10, or wanting to try the daily live ISO?

Plasma 5.19 has now landed in 20.10 and is available for testing. You can read about the new features and improvements in Plasma 5.19 in the official KDE release announcement.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

The Kubuntu development release is not recommended for production systems. If you require a stable release, please see our LTS releases on our downloads page.

Getting the release:

If you are already running Kubuntu 20.10 development release, then you will receive (or have already received) Plasma 5.19 in new updates.

If you wish to test the live session via the daily ISO, or install the development release, the daily ISO can be obtained from this link.

Testing:

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or;
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.18 or whatever version you are familiar with?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog in the KDE announcement:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

Plasma 5.19 has 3 more scheduled bugfix releases in the coming months, so by testing you can help to improve the experience for Kubuntu users and the KDE community as a whole.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

Note: Plasma 5.19 has not currently been packaged for our backports PPA, as the release requires Qt >= 5.14, while Kubuntu 20.04 LTS has Qt 5.12 LTS. Our backports policy for KDE packages to LTS releases is to provide them where they are buildable with the native available stack on each release.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://t.me/kubuntu_support
[3] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on June 30, 2020 03:07 PM

June 29, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 637 for the week of June 21 – 27, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 29, 2020 10:11 PM

OpenUK Awards Close Tomorrow

Jonathan Riddell

OpenUK Awards are nearly closed. Do you know of projects that deserve recognition?
 
Entries close midnight ending UTC tomorrow
 
Individual, young person or open source software, open Hardware or open data project or company
 
The awards are open to individuals resident in the UK in the last year and projects and organisations with notable open source contributions from individuals resident in the UK in the last year.
on June 29, 2020 09:52 AM

June 26, 2020

Adapting To Circumstances

Stephen Michael Kellat

I have written prior that I wound up getting a new laptop. Due to the terms of getting the laptop I ended up paying not just for a license for Windows 10 Professional but also for Microsoft Office. As you might imagine I am not about to burn that much money at the moment. With the advent of the Windows Subsystem for Linux I am trying to work through using it to handle my Linux needs at the moment.

Besides, I did not realize OpenSSH was available as an optional feature for Windows 10 as well. That makes handling the herd of Raspberry Pi boards a bit easier. Having the WSL2 window open doing one thing and a PowerShell window open running OpenSSH makes life simple. PowerShell running OpenSSH is a bit easier to use compared to PuTTY so far.

The Ubuntu Wiki mentions that you can run graphical applications using Windows Subsystem for Linux. The directions appear to work for most people. On my laptop, though, they most certainly did not work.

After review, the directions turned out to be based on discussion in a bug on Github where somebody came up with a clever regex. The problem is that the kludge only works if your machine acts as its own nameserver. When I followed the instructions as written, my WSL2 installation of 20.04 dutifully tried to open an X11 session on the machine where I said the display was.

Unfortunately that regex took a look at what it found on my machine and said that the display happened to be on my ISP’s nameserver. X11 is a network protocol where you can run a program on one computer and have it paint the screen on another computer though that’s not really a contemporary usage. Thin clients like actual physical X Terminals from a company like Wyse would fit that paradigm, though.

After a wee bit of frustration, where I initially was not seeing the problem, I found it there. Considering how strangely my ISP has been acting lately I most certainly do not want to try to run my own nameserver locally. Weirdness by my ISP is a matter for separate discussion, alas.

I inserted the following into my .bashrc to get the X server working:

export DISPLAY=$(landscape-sysinfo --sysinfo-plugins=Network | grep IPv4 | perl -pe 's/ IPv4 address for wifi0: //'):0

Considering that my laptop normally connects to the Internet via Wi-Fi, I used the same landscape tool that the message of the day updater uses to grab what my IP happens to be. Getting my IPv4 address is sufficient for now. Using grep and a Perl one-liner I get my address in a usable form to point my X server the right way.
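
For what it is worth, the same result can be had without landscape-sysinfo by asking iproute2 for the address of the Wi-Fi interface directly. This is only a sketch: it assumes the interface is still called wifi0, as in the line above, and that it carries a single IPv4 address.

export DISPLAY=$(ip -4 -o addr show wifi0 | awk '{print $4}' | cut -d/ -f1):0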

Elegant? Not really. Does it get the job done? Yes. I recognize that it will need adjusting but I will cross that bridge when I approach it.

Since the original bug thread on Github is a bit buried the best thing I can do is to share this and to mention the page being on the wiki at https://wiki.ubuntu.com/WSL. WSL2 will be growing and evolving. I suspect this minor matter of graphical applications will be part of that evolution.

on June 26, 2020 10:14 PM

Full Circle Magazine #158

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Ubuntu On a 2-in-1 Tablet, and Rawtherapee
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback
* Everyday Ubuntu : Starting Again
* Ubports Touch
* Review : Kubuntu, and Xubuntu 20.04
* Ubuntu Games : Into The Breach
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot! https://fullcirclemagazine.org/issue-158/

on June 26, 2020 07:04 PM

June 24, 2020

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 198 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 4h from April).
  • Anton Gladky gave back the assigned 10h and declared himself inactive.
  • Ben Hutchings did 19.75h (out of 17.25h assigned and 2.5h from April).
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 17.25h (out of 17.25h assigned).
  • Dylan Aïssi gave back the assigned 6h and declared himself inactive.
  • Emilio Pozuelo Monfort did not manage to work LTS in May and now reported 5h work in April (out of 17.25h assigned plus 46h from April), thus is carrying over 58.25h for June.
  • Markus Koschany did 25.0h (out of 17.25h assigned and 56h from April), thus carrying over 48.25h to June.
  • Mike Gabriel did 14.50h (out of 8h assigned and 6.5h from April).
  • Ola Lundqvist did 11.5h (out of 12h assigned and 7h from April), thus carrying over 7.5h to June.
  • Roberto C. Sánchez did 17.25h (out of 17.25h assigned).
  • Sylvain Beucler did 17.25h (out of 17.25h assigned).
  • Thorsten Alteholz did 17.25h (out of 17.25h assigned).
  • Utkarsh Gupta did 17.25h (out of 17.25h assigned).

Evolution of the situation

In May 2020 we had our second (virtual) contributors meeting on IRC. Logs and minutes are available online. We also moved our ToDo from the Debian wiki to the issue tracker on salsa.debian.org.
Sadly three contributors went inactive in May: Adrian Bunk, Anton Gladky and Dylan Aïssi. And while there are currently still enough active contributors to shoulder the existing work, we would like to use this opportunity to remind you that we are always looking for new contributors. Please mail Holger if you are interested.
Finally, we would like to remind you one last time that the end of Jessie LTS is coming in less than a month!
In case you missed it (or missed to act on it), please read this post about keeping Debian 8 Jessie alive for longer than 5 years. If you expect to have Debian 8 servers/devices running after June 30th 2020, and would like to have security updates for them, please get in touch with Freexian.

The security tracker currently lists 6 packages with a known CVE and the dla-needed.txt file has 30 packages needing an update.

Thanks to our sponsors

New sponsors are in bold. With the upcoming start of Jessie ELTS, we are welcoming a few new sponsors and others should join soon.

on June 24, 2020 01:03 PM

June 23, 2020

Don't Download Zoom!

Bryan Quigley

First, I strongly recommend switching to Jitsi Meet:

  • It's free
  • It doesn't require you to sign up at all
  • It's open source
  • It's on the cutting edge of privacy and security features

Second, anything else that runs in a browser instead of trying to get you to download a specific desktop application. Your browser protects you from many stupid things a company may try to do. Installing their app means you are at more risk. (Apps for phones are a different story.)

A small sampling of other web based options:

  • Talky.io (also open source, no account required)
  • 8x8.vc which is the company that sponsors Jitsi Meet. Their offering has more business options
  • Whatever Google calls their video chat product this week (Duo, Hangouts, Meet).
  • join.me
  • Microsoft Skype (no signups or account required for a basic meeting!)
  • whereby

There are many reasons not to choose Zoom.

😞😞😞

Finally, so you have to use Zoom?

Zoom actually supports joining a call with a web browser. They just don't promote it. Some things may not work as well but you get to keep more of your privacy and security.

  1. On joining the meeting, close the request to run a local app.
  2. Click Launch Meeting in the middle of the screen.
  3. Again, close out of the request to open a local app.
  4. Ideally, you now get a “join from browser” link; click that!

If it doesn't work try loading the site in another browser. First try Chrome (or those based on it - Brave/Opera) and then Firefox. It's possible that your organization may have disabled the join from web feature.

If you are a Zoom host or admin (why?) you can also ensure that the web feature is not disabled.

on June 23, 2020 09:26 PM

June 20, 2020

New library: libsubid

Serge Hallyn

User namespaces were designed from the start to meet a requirement that unprivileged users be able to make use of them. Eric accomplished this by introducing subuid and subgid delegations through shadow. These are defined by the /etc/subuid and /etc/subgid files, which only root can write to. The setuid-root programs newuidmap and newgidmap, which ship with shadow, respect the subids delegated in those two files.

Until recently, programs which wanted to query available mappings, like lxc-usernsexec, have each parsed these two files. Now, shadow ships a new library, libsubid, to facilitate more programmatic querying of subids. The API looks like this:

struct subordinate_range **get_subuid_ranges(const char *owner);
struct subordinate_range **get_subgid_ranges(const char *owner);
void subid_free_ranges(struct subordinate_range **ranges);

int get_subuid_owners(uid_t uid, uid_t **owner);
int get_subgid_owners(gid_t gid, uid_t **owner);

/* range should be pre-allocated with owner and count filled in, start is
 * ignored, can be 0 */
bool grant_subuid_range(struct subordinate_range *range, bool reuse);
bool grant_subgid_range(struct subordinate_range *range, bool reuse);

bool free_subuid_range(struct subordinate_range *range);
bool free_subgid_range(struct subordinate_range *range);

The next step, which I’ve not yet begun, will be to hook these general queries into NSS. You can follow the work in this github issue.

on June 20, 2020 06:50 PM

June 19, 2020

ZFS focus on Ubuntu 20.04 LTS: ZSys properties on ZFS datasets

We are almost done with our long journey presenting our ZFS work on Ubuntu 20.04 LTS. The last piece to highlight is how we annotate datasets with some user properties to store metadata needed on boot and on state revert. As we stated in our ZSys presentation article, one of the main principles is to avoid using a dedicated database which can quickly go out of sync with the real system: we store - and thus, rely - only on ZFS properties that are set on the datasets themselves. Taking your pool and moving it to another machine is sufficient.

This will probably give you the necessary information (alongside with the post on partition and dataset layouts) if you want to turn your existing ZFS system to one compatible with ZSys.

Without further ado, it’s time to check all of that directly, in detail!

ZFS properties

Of course, ZFS dataset properties are the main source of information when we build up a representation of the system on ZSys startup. In particular, we use the canmount and mountpoint ZFS properties. While the usage of mountpoint is unequivocal, canmount has 3 meaningful values for us:

  • off: we ignore the dataset (apart from when reverting). This dataset will not be mounted (but still snapshotted if part of system datasets). This is used by default on “container” filesystem datasets, like rpool/ROOT/ubuntu_123456/var/lib in the server layout we demonstrated in our previous post, which can then host rpool/ROOT/ubuntu_123456/var/lib/apt for instance.
  • on: those are the datasets that were mounted for the current boot (or the previous one at least). We turn every dataset we want to mount and act on to that value in our zsysd boot-prepare call in the generator, to prepare the system to boot on the correct set of datasets.
  • noauto: we set that value on any clones (previous system or user states) when reverting. This is another task that zsysd boot-prepare is doing (and thus, as you can reckon, we made sure that zsysd boot-prepare is idempotent): turning previous canmount=on datasets to noauto when reverting, and either creating new clones (boot from a saved state made of snapshots) or turning all clones associated with a state you already booted on from canmount=noauto to on.

This is what allows preserving clones and switching throughout them, allowing system bisection (reverting the revert!).

ZFS user properties for ZSys

On top of that, we are using a bunch of user properties to achieve our additional functionality on top of the pure ZFS one. All ZSys-related properties start with the com.ubuntu.zsys: prefix.

ZFS user properties on system datasets

Let’s look at those on a filesystem dataset state (current or part of history):

$ zfs get com.ubuntu.zsys:last-booted-kernel,com.ubuntu.zsys:last-used,com.ubuntu.zsys:bootfs rpool/ROOT/ubuntu_e2wti1
NAME                      PROPERTY                            VALUE                               SOURCE
rpool/ROOT/ubuntu_e2wti1  com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.4.0-29-generic            local
rpool/ROOT/ubuntu_e2wti1  com.ubuntu.zsys:last-used           1589438938                          local
rpool/ROOT/ubuntu_e2wti1  com.ubuntu.zsys:bootfs              yes                                 local

You can see 3 properties on those root pool:

Last Booted Kernel

This property stores the last kernel we successfully booted with. It is used when building the GRUB menu, in particular for state saves, to ensure the “Revert” entry boots with the exact same kernel version, even if it is not the latest available.

Last used

This is a timestamp of the last successful boot. It is used on states made of filesystem datasets to help the garbage collector assess what to collect (for snapshot datasets, we simply take their creation time of course) and to print an accurate time (in your system timezone) in the GRUB menu.

Bootfs

This is a marker on the root dataset and on any datasets you want to mount in the initramfs. It also identifies ZSys machines, so that the daemon avoids treating non-ZSys installations.

Contrary to upstream ZFS initramfs support (very early in the machine boot), when reverting to a state associated with snapshot datasets, we only clone and mount datasets having the com.ubuntu.zsys:bootfs property set to “yes” (local or inherited), if this property is set at all (we thus stay compatible with manually crafted ZFS installations). The remaining datasets will be cloned in early boot by the zsysd boot-prepare call in the ZFS mount generator.

Why do we do that? Remember complex layout like the server one? As a reminder, we end up with:

$ zsysctl show --full
[…]
System Datasets:
 - rpool/ROOT/ubuntu_e2wti1
 […]
 - rpool/ROOT/ubuntu_e2wti1/var
 - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService
 - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager
 - rpool/ROOT/ubuntu_e2wti1/var/lib/apt
 - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg
User Datasets:
[…]
Persistent Datasets:
 - rpool/var/lib
 […]

Reminder: rpool/ROOT/ubuntu_e2wti1/var/lib has canmount=off.

Imagine now that on boot we cloned (for reverting) and mounted all children datasets of rpool/ROOT/ubuntu_e2wti1@…. This would mount in the initramfs:

  • / from rpool/ROOT/ubuntu_e2wti1@…
  • /var from rpool/ROOT/ubuntu_e2wti1/var@…
  • /var/lib/AccountsService from rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService@…
  • /var/lib/NetworkManager from rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager@…
  • /var/lib/apt from rpool/ROOT/ubuntu_e2wti1/var/lib/apt@…
  • /var/lib/dpkg from rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg@…

Then, the machine starts booting and when reaching the ZFS mount generator, or zfs-mount.service, it will try to mount /var/lib (from rpool/var/lib) which will fail as we already have some /var/lib/* subdirectories created and mounted.

The same idea applies when you create any persistent dataset with canmount=on (like rpool/var/lib) and some children (in terms of mountpoint hierarchy) of this persistent dataset are system datasets (like rpool/var/lib/apt).

This is why we set bootfs=no on rpool/ROOT/ubuntu_e2wti1/var (and on any direct children of rpool/ROOT/ubuntu_e2wti1), so that you can create any persistent datasets without any headache. This property avoids hardcoding “only mount the root dataset and no child” in the code and is more flexible for the advanced user.

$ zfs get com.ubuntu.zsys:bootfs rpool/ROOT/ubuntu_e2wti1/var
NAME                          PROPERTY                VALUE                   SOURCE
rpool/ROOT/ubuntu_e2wti1/var  com.ubuntu.zsys:bootfs  no                      local

ZFS user properties on system datasets for snapshot datasets

We store slightly more properties on snapshots, and the syntax is different:

$ zfs get all rpool/ROOT/ubuntu_e2wti1@autozsys_08865s
[…]
rpool/ROOT/ubuntu_e2wti1@autozsys_08865s  com.ubuntu.zsys:bootfs              yes:local                           local
rpool/ROOT/ubuntu_e2wti1@autozsys_08865s  com.ubuntu.zsys:canmount            on:                                 local
rpool/ROOT/ubuntu_e2wti1@autozsys_08865s  com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.4.0-26-generic:local      local
rpool/ROOT/ubuntu_e2wti1@autozsys_08865s  com.ubuntu.zsys:mountpoint          /:local                             local
rpool/ROOT/ubuntu_e2wti1@autozsys_08865s  com.ubuntu.zsys:last-used           1589438938                          inherited from rpool/ROOT/ubuntu_e2wti1

In addition to the previously listed user properties, you can see that we are storing canmount and mountpoint as user properties. We store them to save the values both properties had when we took the state save. For instance, if we later change the mountpoint property on rpool/ROOT/ubuntu_e2wti1, this would impact all of its snapshots, like rpool/ROOT/ubuntu_e2wti1@autozsys_08865s. It means that we wouldn’t have a way to know what the original value was when we saved that particular state, which is what really corresponds to the given state. Similarly, we “set in stone” the last booted kernel property, which would otherwise have changed over time, as it is inherited from the rpool/ROOT/ubuntu_e2wti1 last-booted-kernel value.

You can also see above that we append :local (local source) or just : (default property) to the first 4 properties. This is used to know and recreate the exact source in the inheritance chain for any property. Indeed, in ZFS:

  • rpool/ROOT/ubuntu_e2wti1@autozsys_08865s inherits from rpool/ROOT/ubuntu_e2wti1 for all its properties that can be inherited
  • rpool/ROOT/ubuntu_e2wti1/var inherits from rpool/ROOT/ubuntu_e2wti1.
  • However rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s inherits from … rpool/ROOT/ubuntu_e2wti1/var and not from rpool/ROOT/ubuntu_e2wti1@autozsys_08865s!

This inheritance chain of ZFS is logical when you reason dataset by dataset. However, we are creating machines with ZSys, and it means that some properties that should logically inherit (if not overridden) from the root snapshot are instead inheriting from the immediate parent in terms of ZFS logic.

To recreate a “machine” state inheritance logic, we thus append the property source to the stored value on snapshot datasets. If we look at rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s:

$ zfs get all rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s
[…]
rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s  com.ubuntu.zsys:mountpoint          /var:inherited                      local
rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s  com.ubuntu.zsys:last-booted-kernel  vmlinuz-5.4.0-26-generic:inherited  local
rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s  com.ubuntu.zsys:bootfs              no:local                            local
rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s  com.ubuntu.zsys:canmount            off:local                           local
rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s  com.ubuntu.zsys:last-used           1589438938                          inherited from rpool/ROOT/ubuntu_e2wti1

Here, we can see that canmount=off was stored from the value on rpool/ROOT/ubuntu_e2wti1/var when the state was saved, and that it was a local property on this one: off:local. Same thing for the bootfs property. However, we can see that both mountpoint and last-booted-kernel were inherited properties (not set explicitly): rpool/ROOT/ubuntu_e2wti1/var inherits them from rpool/ROOT/ubuntu_e2wti1, and as such, rpool/ROOT/ubuntu_e2wti1/var@autozsys_08865s inherits them from rpool/ROOT/ubuntu_e2wti1@autozsys_08865s.

When doing a revert to a state associated with snapshot datasets, we restore any local values (suffix :local) by resetting them, and ignore any inherited or default ones, as the normal inheritance of the newly cloned datasets will restore the regular ZFS inheritance scheme. This is how we can recreate a high fidelity revert, without being impacted by changes you made to the ZFS datasets after the state save was taken. This is also one way we track changes in dataset layouts while still reverting to the previously available layout.

You can see that we don’t set any last-used property on snapshot datasets and let it be inherited from the parent. It doesn’t even have any : suffix! This is a value that we simply ignore: as previously told, we use the snapshot creation time for states made of snapshot datasets (last-used only applies to states made of filesystem datasets, like clones).

ZFS user properties on user datasets

Simpler than system datasets, user datasets have a couple of user properties:

$ zfs get all rpool/USERDATA/didrocks_wbdgr3
[…]
rpool/USERDATA/didrocks_wbdgr3  com.ubuntu.zsys:bootfs-datasets  rpool/ROOT/ubuntu_e2wti1         local
rpool/USERDATA/didrocks_wbdgr3  com.ubuntu.zsys:last-used        1589438938                       local

The second one is straightforward: this is the last used time for this dataset to help the garbage collector. Similarly to system states, if the state is made of snapshot datasets, the creation time is used.

com.ubuntu.zsys:bootfs-datasets is the mechanism which associates this user dataset with the rpool/ROOT/ubuntu_e2wti1 state. This is how user state made of filesystem datasets can be linked to system states and zsysctl show --full confirms this:

$ zsysctl show --full
Name:               rpool/ROOT/ubuntu_e2wti1
[…]
System Datasets:
 - bpool/BOOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1
[…]
User Datasets:
 User: didrocks
 - rpool/USERDATA/didrocks_wbdgr3

User states made of snapshot datasets are way easier: they are linked by the snapshot name. This is why this user state is linked to this system state:

$ zsysctl show --full
History:
  - Name:               rpool/ROOT/ubuntu_e2wti1@autozsys_8cjrb0
    System Datasets:
     - bpool/BOOT/ubuntu_e2wti1@autozsys_8cjrb0
     - rpool/ROOT/ubuntu_e2wti1@autozsys_8cjrb0
     […]
    User Datasets:
      User: didrocks
        - rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0

In the previous case, @autozsys_8cjrb0 is the link. When we revert to that state and create clone filesystem datasets for a new read-write state, we thus associate the user dataset to that state by tagging the user dataset with com.ubuntu.zsys:bootfs-datasets matching the state name (this is done in the zsysd boot-prepare call from the ZFS mount generator).

Shared user states

What happens though if you revert your system state without reverting the user data? This is where things become interesting: for each user, we take the previous active user state and retag it by appending the newly created system state.

You thus have a user state with multiple values, separated by commas, like com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_e2wti1,rpool/ROOT/ubuntu_fa2wz74. This is the mechanism that allows you to revert the revert, with the exact same user data! The user state is then shared.

However, each user state can be different, for instance you may want to add rpool/USERDATA/didrocks_wbdgr3/tools to only one system state. The annotation allows that by tagging com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_fa2wz74 on this one for instance.

We would thus end up with:

$ zsysctl show --full
Name:               rpool/ROOT/ubuntu_fa2wz74
[…]
User Datasets:
 User: didrocks
 - rpool/USERDATA/didrocks_wbdgr3
 - rpool/USERDATA/didrocks_wbdgr3/tools

[…]
History:
  - Name:               rpool/ROOT/ubuntu_e2wti1
    […]
    User Datasets:
      User: didrocks
        - rpool/USERDATA/didrocks_wbdgr3

The user state associated with rpool/ROOT/ubuntu_fa2wz74 is now named rpool/USERDATA/didrocks_wbdgr3.rpool.ROOT.ubuntu-fa2wz74 and is composed of 2 filesystem datasets: rpool/USERDATA/didrocks_wbdgr3 and rpool/USERDATA/didrocks_wbdgr3/tools.

The user state associated with rpool/ROOT/ubuntu_e2wti1 is now named rpool/USERDATA/didrocks_wbdgr3.rpool.ROOT.ubuntu-e2wti1 and only has one filesystem dataset: rpool/USERDATA/didrocks_wbdgr3.

Those 2 user states sharing some common user datasets are now logically separated and can be referenced directly by the user, without any confusion. If you revert or remove a system state, we just untag the system state that needs to be detached: the user state can still be useful for other states, and only the datasets that are no longer needed anywhere (like the /tools one) will be deleted.

ZFS user properties on user datasets made of snapshot datasets

We already discussed that the snapshot name is the association between system and user states. Similarly to system state snapshots, we store canmount and mountpoint to freeze those properties in time when saving the state. We append the source after : to track parent and children association.

$ zfs get all rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0
[…]
rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0  com.ubuntu.zsys:canmount         on:local                         local
rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0  com.ubuntu.zsys:mountpoint       /home/didrocks:local             local
rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0  com.ubuntu.zsys:bootfs-datasets  rpool/ROOT/ubuntu_e2wti1         inherited from rpool/USERDATA/didrocks_wbdgr3
rpool/USERDATA/didrocks_wbdgr3@autozsys_8cjrb0  com.ubuntu.zsys:last-used        1589438938                       inherited from rpool/USERDATA/didrocks_wbdgr3

As you can see, we let the normal ZFS inheritance value on com.ubuntu.zsys:bootfs-datasets and com.ubuntu.zsys:last-used but those are ignored when restoring a state as the state association is made via snapshot name match and last used is taken from the snapshot dataset creation time.

You now have a way (until we grow a command for doing this) to manually enroll existing user datasets to a system once you understand this concept!
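
As a minimal sketch of that manual enrollment (the user dataset name below is made up; the target state name is the one used throughout this post): tag the user dataset with the system state it should belong to, then check that ZSys picks it up.

# hypothetical user dataset you created yourself, following the /USERDATA/<user>_<suffix> convention
$ zfs set com.ubuntu.zsys:bootfs-datasets=rpool/ROOT/ubuntu_e2wti1 rpool/USERDATA/newuser_abc123
# verify that the association is now visible to ZSys
$ zsysctl show --full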

Final thoughts

Phew! We hope that, now that you have a complete picture of all those concepts, you can really appreciate what we try to bring with our ubuntu ZFS on root experience, and how. ZSys provides a way to manage and monitor the system, while doing automated state saving to offer you reliable revert capabilities. We are paving the first steps to provide a strong and bullet-proof desktop system to our audience. The main idea is that most of the features are managed automatically for you, and we will provide more and more facilities in the future.

If you are interested in reading further ahead, you can check our in-progress specification and follow our github project. This is a good place, as the upstream ZSys repo is, to see how you can contribute to this effort and help shape the future of ZFS on root on ubuntu! This is in addition to all the enhancements, maintenance and fixes that are heading to Ubuntu 20.04 LTS right now.

I hope this read was pleasing to you and that understanding the internals and challenges has highlighted why we built this tool, and how much thought and care we took in crafting the whole system. Of course, there would be a lot more to say, like how we built our extensive testsuite on ZSys itself with the internal in-memory mock and the real ZFS system stack, how we have hundreds of tests on our grub menu generation, how we handled some optimizations over the go-libzfs package and much more! But we have to stop at some point, and this is a good place for it. We can always continue the conversation via the dedicated Ubuntu discourse thread.

on June 19, 2020 10:15 AM

June 17, 2020

Ubuntu 20.04 LTS has switched to using the IBus input framework for most (all?) languages, even those based on the Latin, Cyrillic or Greek scripts. Typing in English is not that demanding for your operating system; there is a one to one association between the key you press, and the result you see on your screen. But if you have to type accents, or type in some more complex script, then you need a more advanced input framework.

You wouldn’t notice a difference when typing on Ubuntu 20.04, unless your language has accents and when you type, you press special key combinations to add those accents. For example, «αηδόνι». Notice the accent on the «ο». When you type the accent and then the «ο», you get visual information on the imminent composition of «ό».

Typing in Greek on Ubuntu 20.04. Input is handled by IBus, and by default we can see visually the addition of the accents while typing.

But is it worth the effort to switch to something more complex when the old way used to work just fine? That’s a perennial question. My view is that once you switch to an input framework, you can do much more advanced and exciting things. This post is about using the ibus-typing-booster plugin to IBus that adds predictive typing to the Linux desktop. Oh, you can also type emoji easily. 👍

Installing ibus-typing-booster to Ubuntu 20.04

Gunnar Hjalmarsson maintains a package at the ibus-typing-booster PPA. Follow the instructions to install on Ubuntu 20.04 LTS, for now. Gunnar has uploaded the package to Debian’s NEW queue and it is just a matter of months for the package to get accepted.

sudo add-apt-repository ppa:gunnarhj/ibus-typing-booster
sudo apt-get update
sudo apt-get install ibus-typing-booster

Then, log out and log in into Ubuntu 20.04. A reboot is good as well.

Configuring ibus-typing-booster in Ubuntu 20.04

We are adding the layout for the ibus-typing-booster to Ubuntu 20.04.

Go to Settings and click on Region & Language. You will see the current keyboard layouts (above the red arrow). There are two here, English (US) and Greek. Click on the + sign to add a new one.
We are adding a different layout from the existing English and Greek layouts. Therefore, click on those three vertical dots to open up adding other layouts.
The three vertical dots expand, and the list becomes somewhat longer, still covering only layouts based on the two main languages of my setup, the English and Greek layouts. Still, click on Other as shown with the red arrow.
The list becomes bigger, therefore perform a search at the bottom of the window. Search for “typing”, in order to see “Other (Typing Booster)” in the list. Click to select it.
We have clicked to select it. And we can click on the green button that adds it to our system.
We have added the Typing Booster input method. In this setup, I have English (US), Greek and “Other (Typing Booster)”.
We can see Typing Booster in the keyboard layout applet. We are set to go.

We have configured the Typing Booster and are ready to start typing.

Using the Typing Booster

When you switch to the Typing Booster input method, you are boosting the last keyboard layout that you had selected. This previous sentence was the most important one in this whole post.

You can still use your existing keyboard layouts as normal, but if you want to boost one of them, then switch from that keyboard layout to the Typing Booster input method. The keyboard layout is then boosted, until you switch out of the Typing Booster.

We type using completion, i.e. predictive typing. As we type, there are candidates, which we may select instead of typing the whole thing.

Selecting a candidate can be done in several ways,

  1. Using the mouse.
  2. Using the arrow keys.
  3. Pressing the number of the candidate in the list.
  4. Pressing the corresponding function keys (F1 for 1, F2 for 2, etc).

Conclusion

The Typing Booster has many settings and can do many things. I think the important aspect is to use it and get used to using it. Initially it might be awkward when you type. But as you type more, the Typing Booster learns the words you are typing and becomes more intelligent in suggesting better candidates.

As soon as Gunnar’s package gets promoted out of the NEW queue, the package will make it into the universe repository and will be available to all. If you know someone that can look into this, point them at the URL above.

on June 17, 2020 08:26 PM

EBBR on RockPro64

Marcin Juszkiewicz

SBBR or GTFO

Me.

But the Arm world no longer ends at “SBBR compliant or complete mess”. For over a year there has been a new specification called EBBR (Embedded Base Boot Requirements).

WTH is EBBR?

In short, it is a kind of SBBR for devices which can not comply. So you still need to have some subset of UEFI Boot/Runtime Services, but it can be provided by whatever bootloader you use. So U-Boot is fine as long as its EFI implementation is enabled.

ACPI is not required but may be present. DeviceTree is perfectly fine. You may provide both or one of them.

Firmware can be stored wherever you wish. Even MBR partitioning is available if really needed.

Make it nice way

RockPro64 has 16MB of SPI flash on board. This is far more than needed for storing firmware (I remember a time when it was enough for palmtop Linux).

During the last month I sent a bunch of patches to U-Boot to make this board as comfortable to use as possible, including storing all firmware parts in the on-board SPI flash.

To have U-Boot there you need to fetch two files:

Their sha256 sums:

3985f2ec63c2d31dc14a08bd19ed2766b9421f6c04294265d484413c33c6dccc  idbloader.img
35ec30c40164f00261ac058067f0a900ce749720b5772a759e66e401be336677  u-boot.itb

Store them as files on a USB pen drive and plug it into any of the RockPro64 USB ports. Then reboot to U-Boot as you did before (stored in SPI, on SD card or on an EMMC module).

Next do this set of commands to update U-Boot:

Hit any key to stop autoboot:  0 
=> usb start

=> ls usb 0:1
   163807   idbloader.img
   867908   u-boot.itb

2 file(s), 0 dir(s)

=> sf probe
SF: Detected gd25q128 with page size 256 Bytes, erase size 4 KiB, total 16 MiB

=> load usb 0:1 ${fdt_addr_r} idbloader.img
163807 bytes read in 16 ms (9.8 MiB/s)

=> sf update ${fdt_addr_r} 0 ${filesize}
device 0 offset 0x0, size 0x27fdf
163807 bytes written, 0 bytes skipped in 2.93s, speed 80066 B/s

=> load usb 0:1 ${fdt_addr_r} u-boot.itb
867908 bytes read in 53 ms (15.6 MiB/s)

=> sf update ${fdt_addr_r} 60000 ${filesize}
device 0 offset 0x60000, size 0xd3e44
863812 bytes written, 4096 bytes skipped in 11.476s, speed 77429 B/s

And reboot board.

After this your RockPro64 will have its firmware stored in the on-board SPI flash. No need to wonder which offsets to use to store it on an SD card etc.

Booting installation media

The nicest part of it is that you no longer need to mess with installation media. Fetch a Debian/Fedora installer ISO, write it to a USB pen drive, plug it into a port and reboot the board.

It should work with any generic AArch64 installation media. Of course the kernel on the media needs to support the RockPro64 board. I played with Debian ‘testing’, Fedora 32 and rawhide, and they booted fine.

My setup

My board boots to either Fedora rawhide or Debian ‘testing’ (two separate pen drives).

on June 17, 2020 03:53 PM

June 16, 2020

ZFS focus on Ubuntu 20.04 LTS: ZSys dataset layout

After looking at the global partition layout when you select the ZFS option on ubuntu 20.04 LTS, let’s dive into the details of what exactly is inside those ZFS pools, namely bpool and rpool!

We are mainly focusing on two kinds of ZFS datasets: filesystem datasets and snapshot datasets. We already alluded to them multiple times in previous blog posts, but if you want to follow this section, I highly recommend following this couple of ZFS tutorials (setup and basics and snapshot and clones) or watching this introduction. This will help guide you through those concepts.

Current states and its datasets

If you run zsysctl show --full, you will see exactly the datasets that our current state is made of:

$ zsysctl show --full
Name:               rpool/ROOT/ubuntu_e2wti1
ZSys:               true
Last Used:          current
Last Booted Kernel: vmlinuz-5.4.0-29-generic
System Datasets:
 - bpool/BOOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1/srv
 - rpool/ROOT/ubuntu_e2wti1/usr
 - rpool/ROOT/ubuntu_e2wti1/var
 - rpool/ROOT/ubuntu_e2wti1/usr/local
 - rpool/ROOT/ubuntu_e2wti1/var/games
 - rpool/ROOT/ubuntu_e2wti1/var/lib
 - rpool/ROOT/ubuntu_e2wti1/var/log
 - rpool/ROOT/ubuntu_e2wti1/var/mail
 - rpool/ROOT/ubuntu_e2wti1/var/snap
 - rpool/ROOT/ubuntu_e2wti1/var/spool
 - rpool/ROOT/ubuntu_e2wti1/var/www
 - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService
 - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager
 - rpool/ROOT/ubuntu_e2wti1/var/lib/apt
 - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg
User Datasets:
 User: didrocks
 - rpool/USERDATA/didrocks_wbdgr3
 User: root
 - rpool/USERDATA/root_wbdgr3

Note: if you have datasets directly under rpool, you will see an additional category named “Persistent datasets”. For instance, if rpool/var/lib/docker exists, you will see:

Persistent Datasets:
 - rpool/var/lib/docker

You will see that the state name corresponds to the dataset mounted on / (rpool/ROOT/ubuntu_e2wti1). It’s listed under the “System Datasets” category. We also have 2 other categories, “User Datasets” and “Persistent Datasets”. Let’s examine them one after another.

System states and their datasets

System datasets are the datasets which form… your installed system! (Except /boot/efi and /boot/grub, which are on a separate partition as explained in the previous article about partitioning.) You can see that the kernel and initramfs are located on bpool/BOOT/ubuntu_e2wti1, matching the rpool/ROOT/ubuntu_e2wti1 base name. Any of those and their descendants are considered system datasets.

In general, every dataset (and its children) under /ROOT/ will form a single system state. If a match is found in any /BOOT/ with the same name, it will be considered as part of the same state. This is not mandatory though and if we have multiple candidates, we always prefer /ROOT//boot over /BOOT/ over /BOOT/ and over /ROOT/ /boot subdirectory on / dataset. Note that we have some optimization for our default layout with a bpool/BOOT/ when generating our grub menu and loading our system, but this isn’t mandatory.

When you save a system state, a snapshot is created on all of them in sync. If you look at the history part (if you already have one system state save) of the same command, you can see this:

$ zsysctl show --full
[…]
  - Name:               rpool/ROOT/ubuntu_e2wti1@autozsys_k8uf7o
    Created on:         2020-05-05 08:35:43
    Last Booted Kernel: vmlinuz-5.4.0-28-generic
    System Datasets:
     - bpool/BOOT/ubuntu_e2wti1@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/srv@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/usr@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/usr/local@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/games@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/lib@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/log@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/mail@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/snap@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/spool@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/www@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/lib/apt@autozsys_k8uf7o
     - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg@autozsys_k8uf7o
    User Datasets:
      User: didrocks
         - rpool/USERDATA/didrocks_wbdgr3@autozsys_k8uf7o
      User: root
         - rpool/USERDATA/root_wbdgr3@autozsys_k8uf7o

Here, all system dataset snapshot names match the generated @autozsys_k8uf7o. As a reminder from a previous blog post, a history state save can also be formed of filesystem dataset clones instead of snapshots. Clones appear, in our case, after a state revert (and thus have a different suffix after _). However, this changes nothing about the concept.

If you are a ZFS expert, you are probably wondering why we have so many datasets in our default layout. Hold on to your excitement: you have been able to wait for 8 blog posts to arrive at this point, so I just ask you to wait a couple more paragraphs! :)

Before that, let’s jump to the second type of datasets: User Datasets.

User states and their datasets

A user state is formed, by default, of one dataset per user. In our example, the “didrocks” user has one dataset, rpool/USERDATA/didrocks_wbdgr3, for the current user state. Similarly, the history system state has a linked user state, rpool/USERDATA/didrocks_wbdgr3@autozsys_k8uf7o. The same snapshot suffix provides the link there, and we will come back later (next blog post) to how we associate user states with system states for filesystem datasets.

As we have /ROOT/ (and /BOOT/) to identify system states, /USERDATA/ is where user state datasets live, prefixed by the user name. Those user datasets can have any children datasets as you need, and those will be taken into account by ZSys.

Once linked to a system, we snapshot user datasets on system state saving (to allow reverting to a system with its user data, for all users). We also save, hourly, the state of each connected user, for finer-grained reverts. This is also listed in the Users: section of the zsysctl show command:

$ zsysctl show --full
[…]
Users:
  - Name:    didrocks
    History: 
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_stb57j (2020-05-13 15:53:20): rpool/USERDATA/didrocks_wbdgr3@autozsys_stb57j
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_5ffi7s (2020-05-13 14:52:20): rpool/USERDATA/didrocks_wbdgr3@autozsys_5ffi7s
     - rpool/USERDATA/didrocks_wbdgr3@autozsys_hmolrv (2020-05-13 13:51:20): rpool/USERDATA/didrocks_wbdgr3@autozsys_hmolrv
     - rpool/USERDATA/didrocks_ptdr42 (2020-05-13 13:12:10): rpool/USERDATA/didrocks_ptdr42

Note that this is redundant above as we only have one dataset per user state associated. However, for shared user states, this makes sense:

$ zsysctl show --full
[…]
Users:
  - Name:    didrocks
    History: 
     - rpool/USERDATA/didrocks_e2jj0s-rpool.ROOT.ubuntu-idjvq9 (2020-05-01 14:43:20): rpool/USERDATA/didrocks_e2jj0s

The user state associating dataset rpool/USERDATA/didrocks_e2jj0s with the system state named rpool/ROOT/ubuntu_idjvq9 is named rpool/USERDATA/didrocks_e2jj0s-rpool.ROOT.ubuntu-idjvq9.

For both system and user states, you can add or remove children datasets at any time, and when reverting, each state will track exactly what datasets were available at that time and restore only those datasets (no more, no less). This is even true for user states shared between multiple system states, with the user states not having the same number of datasets on each system state! Basically, complex scenarios and changing system or user dataset layouts are tracked in the history.

Similarly to /BOOT/, /USERDATA/ can be on any pool. We have the same logic of preferring the current root pool over others, but if you create a /USERDATA/ on another pool, any useradd or system call will try to do the right thing and reuse what you have already set up.

What is not tracked by the history, and this is a good thing, are the well-named “Persistent Datasets” :)

Persistent datasets

As you can infer from the name, those are datasets that are shared across all states. They are never automatically snapshotted nor reverted. The idea is to have persistent disk space for data that changes a lot in terms of content. The counterpart of this persistence is that you either don’t really care about the content, or can manually recover if the data are not compatible after a revert. You may also simply want to keep some data permanently in various cases.

Datasets are considered persistent if you create them directly outside of the /ROOT/, /BOOT/ and /USERDATA/ namespaces. Of course, persistent datasets are ignored by garbage collection, as there is no history to collect, and we ignore any manual snapshots you take on them.

For instance, the docker ZFS storage driver has a tendency to create a lot of datasets automatically on each docker run invocation, and to keep them around for stopped containers. I don’t want to store any history on them. Consequently, I have created a rpool/var/lib/docker persistent dataset, which is taken into account by the system:

$ zsysctl show --full
[…]
Persistent Datasets:
 - rpool/var/lib/docker

You will note that, for consistency, we don’t repeat them in each history system state entry, but they are indeed available and shared between all of them.

More practically, to do this, we create 3 datasets (the full sequence of commands is repeated after the list):

  • rpool/var with canmount=off (rpool/ROOT/ubuntu_e2wti1/var is the real mountpoint for /var) with zfs create -o canmount=off rpool/var
  • Similarly, rpool/var/lib with canmount=off (rpool/ROOT/ubuntu_e2wti1/var/lib is the real mountpoint for /var/lib) with zfs create -o canmount=off rpool/var/lib
  • And finally our desired rpool/var/lib/docker which will be mounted over /var/lib/docker as our persistent dataset with zfs create rpool/var/lib/docker
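
Putting those three steps together, this is the full sequence as it would be typed (the same commands as in the list above, just consolidated; run as root or via sudo):

zfs create -o canmount=off rpool/var
zfs create -o canmount=off rpool/var/lib
zfs create rpool/var/lib/docker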

This is also the reason why rpool has a mountpoint of / with canmount=off: you can directly take advantage of ZFS inheritance by creating datasets under it without having to set a mountpoint on each of them. You could also directly create rpool/var-lib-docker with a mountpoint set to /var/lib/docker if you prefer, but I find the other scheme more explicit, especially if you create more datasets under the same directory.

Once done, we get:

$ zfs list -r -o name,canmount,mountpoint rpool/var
NAME                          CANMOUNT  MOUNTPOINT
rpool/var                          off  /var
rpool/var/lib                      off  /var/lib
rpool/var/lib/docker               on   /var/lib/docker

Note that we are currently fixing the docker package in Ubuntu to create this automatically for you. You can think of a similar scheme for any VM images that you don’t want to version, or for LXD containers.
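
As an illustration of the single-dataset alternative mentioned above, a persistent dataset for VM images could be created with an explicit mountpoint (the dataset name and path here are hypothetical, purely for illustration):

zfs create -o mountpoint=/var/lib/libvirt/images rpool/var-lib-libvirt-images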

By default on the desktop, we don’t have any persistent datasets, and people can handle exceptional cases like those listed above as they wish. We focus heavily on getting a bulletproof Ubuntu desktop, where reverting will reliably restore you to a good working state. However, you can foresee different needs for servers.

If you are a sysadmin, other use cases will come to mind: maybe you want a persistent database, webserver assets or any other server-related data that you want to avoid rolling back in any case. If you then downgrade to an earlier version of your system, and for instance of your database software, you will have to ensure that you can still load data created with the newer version of it. This is why we created such a layout scheme!

Why so many System datasets?

Remember the number of system datasets we have on the desktop:

$ zsysctl show --full
[…]
System Datasets:
 - bpool/BOOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1/srv
 - rpool/ROOT/ubuntu_e2wti1/usr
 - rpool/ROOT/ubuntu_e2wti1/var
 - rpool/ROOT/ubuntu_e2wti1/usr/local
 - rpool/ROOT/ubuntu_e2wti1/var/games
 - rpool/ROOT/ubuntu_e2wti1/var/lib
 - rpool/ROOT/ubuntu_e2wti1/var/log
 - rpool/ROOT/ubuntu_e2wti1/var/mail
 - rpool/ROOT/ubuntu_e2wti1/var/snap
 - rpool/ROOT/ubuntu_e2wti1/var/spool
 - rpool/ROOT/ubuntu_e2wti1/var/www
 - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService
 - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager
 - rpool/ROOT/ubuntu_e2wti1/var/lib/apt
 - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg

The whole idea of this separation is to offer bridges with the second kind of layout we want to offer because, as you may have guessed, ZSys and the ZFS on root option will be available in a future release as a server install option. We took great care to keep a compatible layout with sensible, different defaults, and we will support a way to switch from one layout to the other.

On a server, as said, you have many more reasons to want persistent datasets: databases, services, website content… You will then, of course, have to deal with forward compatibility of your data with the various pieces of software in case of a revert, but this layout is aimed at more advanced users who can handle those issues.

By default, the server layout would look something like that:

$ zsysctl show --full
[…]
System Datasets:
 - bpool/BOOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1
 - rpool/ROOT/ubuntu_e2wti1/usr
 - rpool/ROOT/ubuntu_e2wti1/var
 - rpool/ROOT/ubuntu_e2wti1/var/lib/AccountsService
 - rpool/ROOT/ubuntu_e2wti1/var/lib/NetworkManager
 - rpool/ROOT/ubuntu_e2wti1/var/lib/apt
 - rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg
User Datasets:
[…]
Persistent Datasets:
 - rpool/srv
 - rpool/usr/local
 - rpool/var/lib
 - rpool/var/games
 - rpool/var/log
 - rpool/var/mail
 - rpool/var/snap
 - rpool/var/spool
 - rpool/var/www

Looks familiar, doesn’t it? What we can note is that we now have many more persistent directories:

  • log files in /var/log and local mail (/var/mail), which is interesting for troubleshooting issues and for continuity in auditing, even after a revert
  • some server-related webserver content like /srv and /var/www
  • locally compiled software in /usr/local
  • snaps (/var/snap), which handle their revisions themselves
  • some other directories like /var/games, and printing-related spools in /var/spool
  • and more importantly, /var/lib becomes persistent, which means that database content and a lot of other program data that you want to keep on the server will automatically be persistent (grafana, influxdb, jenkins, mysql, postgresql, just to name a few).

However, there are some system-related data in /var/lib, like your network configuration and, primarily, your package manager state (apt and dpkg). Those are thus stored in and linked to System Datasets, as a system revert would be inconsistent if, for instance, the apt or dpkg databases were kept persistent. From that scheme, you can infer that canmount=off is set on rpool/ROOT/ubuntu_e2wti1/var/lib so that it acts as a dataset container for, e.g., rpool/ROOT/ubuntu_e2wti1/var/lib/dpkg (the dpkg database).
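
You can check that property directly with zfs if you are curious how a given container dataset behaves; the command below should report canmount set to off for the dataset discussed above:

$ zfs get canmount rpool/ROOT/ubuntu_e2wti1/var/lib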

This is how we combine both a bulletproof system and the flexibility for sysadmins to perform their tasks and keep all logs, service data and general local content, even if a system revert is needed.

We thus have a lot of datasets on the desktop (which has basically no cost, just more code on our side to handle them), but this brings the flexibility where a sysadmin can switch a dataset between types with a simple zfs rename.
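
As a sketch of what such a switch could look like, moving the desktop /var/log system dataset into the persistent namespace might be done like this (assuming the rpool/var container dataset exists as shown earlier; depending on your setup you may also need to check mountpoints and stop services writing to /var/log first):

zfs rename rpool/ROOT/ubuntu_e2wti1/var/log rpool/var/log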

Final thoughts

Note that ZSys has no hardcoded layout knowledge. It only segregates datasets into different types, treating /ROOT/ and /BOOT/ as system dataset containers, /USERDATA/ for user-related datasets, and the rest as persistent datasets. You can create as many children datasets as you wish and need, knowing now exactly what the impact will be in terms of history saving, disk space and such. We hope this sheds some light on the decisions we took and why we have all those default datasets created by our installer.

This being done, I think there are a few, but important, domains remaining to be covered. As you saw in the previous command, we know exactly what kernel you booted with, we also associate user datasets to system datasets despite their different names, and we keep track of the last successful boot time and last used time. How is all of that stored? That will be our next adventure in this blog post series. See you there :)

Meanwhile, join the discussion via the dedicated Ubuntu discourse thread.

on June 16, 2020 01:51 PM

June 15, 2020

I have worked with OpenDev CI for a while. My first Kolla patches were over three years ago. We (Linaro) added AArch64 nodes a few times: some nodes were taken down, some replaced, some added.

Speed or lack of it

Whenever you want to install a Python package using pip, it is downloaded from PyPI (directly or via a mirror). If there is a binary package for your platform you get it; if not, a “noarch” package is fetched.

In the worst case, a source tarball is downloaded and the whole build process starts. You need to have all the required compilers installed, development headers for Python and all required libraries, and the rest of the needed tools. And then wait. And wait, as some packages take a lot of time to build.

And then repeat it again and again, as you are not allowed to upload packages to PyPI for projects you do not own.

Argh you, protobuf

There was a new release of the protobuf package. The OpenStack bot picked it up, sent a patch for review and it got merged.

And all AArch64 CI jobs failed…

It turned out that protobuf 3.12.0 was released with x86 wheels only. No source tarball. At all.

This turned out to be a new maintainer's mistake; after 2-3 weeks it was fixed in the 3.12.2 release.

Another CI job then

So I started looking at the ‘requirements’ project and created a new CI job for it, to check whether new package versions are available for AArch64. It took some time and several side updates as well (yak shaving all the way again).

Stuff got merged and works now.

Wheels cache

While working on the above CI job I had a discussion with the OpenDev infra team about how to make it work properly. It turned out that there were old jobs doing exactly what I wanted: building wheels and caching them for later CI tasks.

It took several talks and patches from Ian Wienand, Clark Boylan, Jeremy ‘fungi’ Stanley and others. Several CI jobs got renamed, some were moved from one project to another. Servers got configuration changes etc.

Now we have wheels built for both x86-64 and AArch64 architectures, covering CentOS 7/8, Debian ‘buster’ and Ubuntu ‘xenial/bionic/focal’ releases, for OpenStack ‘master’ and a few stable branches.
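
The same idea is easy to reproduce locally if you want to avoid repeated source builds on a slow machine (this is just a generic pip illustration, not the actual OpenDev job configuration): build wheels once into a directory, then point pip at that directory for later installs.

pip wheel --wheel-dir ~/wheelhouse protobuf
pip install --find-links ~/wheelhouse protobuf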

Effect

The requirements project has a quick ‘check-uc’ job running on AArch64 to make sure that all packages are available for both architectures. All OpenStack projects profit from it.

In Kolla, the ‘openstack-base’ image build went from 23:49 to just 5:21 (minutes:seconds). The whole Debian/source build is now 57 minutes instead of 2 hours 20 minutes.

Nice result, isn’t it?

on June 15, 2020 03:53 PM

June 14, 2020

Quick and dirty

  • Install python3-virtualenvwrapper (via pip or via package manager)
  • Export a workon directory: export WORKON_HOME=/home/foursixnine/Projects/python-virtualenv
  • source virtualenvwrapper
foursixnine@deimos:~/Projects> source virtualenvwrapper    
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/premkproject
...
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/get_env_details
  • mkvirtualenv newenv
foursixnine@deimos:~/Projects> mkvirtualenv newenv
created virtual environment CPython3.8.3.final.0-64 in 115ms
  creator CPython3Posix(dest=/home/foursixnine/Projects/python-virtualenv/newenv, clear=False, global=False)
  seeder FromAppData(download=False, pip=latest, setuptools=latest, wheel=latest, via=copy, app_data_dir=/home/foursixnine/.local/share/virtualenv/seed-app-data/v1.0.1)
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/newenv/bin/predeactivate
...
virtualenvwrapper.user_scripts creating /home/foursixnine/Projects/python-virtualenv/newenv/bin/get_env_details
  • By this point, you’re already inside newenv:
(newenv) foursixnine@deimos:~/Projects> 
  • You can create multiple virtual environments and switch among them using workon $env so long as you have sourced virtualenvwrapper and your $WORKON_HOME is properly defined.
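
For example, to switch to the environment created above and leave it again (these are the standard virtualenvwrapper commands, nothing specific to this setup):

workon newenv
deactivate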

Real business

  • Now, if you want to use vscode, remember that you will need to define python.pythonPath properly for your workspace/project (I’m new to this, don’t hang me in a public square, ok?); in this case, my env is called linkedinlearningaiml:
{
    "python.pythonPath": "/home/foursixnine/Projects/python-virtualenv/linkedinlearningaiml/bin/python"
}

Now your python code will be executed within the context of your virtual environment, so you can get down to serious (or not at all serious) python development, without screwing up your host or polluting dependencies and such.

PS: Since I wanted to be able to run standalone python files, I also needed to change my launch.json a bit (maybe this is not needed?):

    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "cwd": "${fileDirname}"
        }
    ]
}

And off you go: that’s how to use a python virtualenv inside vscode.

Et voilà, ma chérie! It's alive!

on June 14, 2020 12:00 AM

June 09, 2020

Review: Chromebook Duet

Julian Andres Klode

Sporting a beautiful 10.1” 1920x1200 display, the Lenovo IdeaPad Duet Chromebook, or Duet Chromebook, is one of the latest Chromebooks released, and one of the few slate-style tablets; it’s only about 300 EUR (300 USD). I’ve had one for about 2 weeks now, and here are my thoughts.

Build & Accessories

The tablet is a fairly Pixel-style affair, in that the back has two components, one softer blue one housing the camera and a metal feeling gray one. Build quality is fairly good.

The volume and power buttons are located on the right side of the tablet, and this is one of the main issues: You end up accidentally pressing the power button when you want to turn your volume lower, despite the power button having a different texture.

Alongside the tablet, you also find a kickstand with a textile back, and a keyboard, both of which attach via magnets (and pogo pins for the keyboard). The keyboard is cramped, with punctuation keys halved in size, and it feels mushy compared to my usual experience of ThinkPads and Model Ms, but it’s on par with other Chromebooks, which is surprising, given it’s a tablet attachment.

fully assembled chromebook duet

I mostly use the Duet as a tablet, and only attach the keyboard occasionally. Typing with the keyboard on your lap is suboptimal.

My first Duet had a few clusters of dead pixels, so I returned it; I had also ordered a second one which I could not cancel. Oh dear. That one was fine!

Hardware & Connectivity

The Chromebook Duet is powered by a Mediatek Helio P60T SoC, 4GB of RAM, and a choice of 64 or 128 GB of main storage.

The tablet provides one USB-C port for charging, audio output (a 3.5mm adapter is provided in the box), USB hub, and video output; though, sadly, the latter is restricted to a maximum of 1080p30, or 1440x900 at 60 Hz. It can be charged using the included 10W charger, or use up to I believe 18W from a higher powered USB-C PD charger. I’ve successfully used the Chromebook with a USB-C monitor with attached keyboard, mouse, and DAC without any issues.

On the wireless side, the tablet provides 2x2 Wifi AC and Bluetooth 4.2. WiFi reception seemed just fine, though I have not done any speed testing, missing a sensible connection at the moment. I used Bluetooth to connect to my smartphone for instant tethering, and my Sony WH1000XM2 headphones, both of which worked without any issues.

The screen is a bright 400 nit display with excellent viewing angles, and the speakers do a decent job, meaning you can easily use this for watching a movie when you’re alone in a room and idling around. It has a resolution of 1920x1200.

The device supports styluses following the USI standard. As of right now, the only such stylus I know about is an HP one, and it costs about 70€ or so.

Cameras are provided on the front and the rear, but produce terrible images.

Software: The tablet experience

The Chromebook Duet runs Chrome OS, and comes with access to Android apps using the play store (and sideloading in dev mode) and access to full Linux environments powered by LXD inside VMs.

The screen’s 1920x1200 resolution is scaled to a ridiculous 1080x675 by default, which is good for being able to tap buttons and such, but shows next to no content. Scaling it to 1350x844 makes things more balanced.

The Linux integration is buggy. Touches register in different places than where they happened, and the screen is cut off in full screen extremetuxracer, making it hard to recommend for such uses.

Android apps generally work fine. There are some issues with the back gesture not registering, but otherwise I have not found issues I can remember.

One major drawback as a portable media consumption device is that Android apps only work in Widevine level 3, and hence do not have access to HD content, and the web apps of Netflix and co do not support downloading. Though one of the Duets actually said L1 in check apps at some point (reported in issue 1090330). It’s also worth noting that Amazon Prime Video only renders in HD, unless you change your user agent to say you are Chrome on Windows - bad Amazon!

The tablet experience also lags in some other ways, as the palm rejection is overly extreme, causing it to reject valid clicks close to the edge of the display (reported in issue 1090326).

The on-screen keyboard is terrible. It only does one language at a time, forcing me to switch between German and English all the time, and it does not behave as you’d expect when editing existing words: it does not know about them and thinks you are starting a new one. It does provide a small keyboard that you can move around, as well as a draw-your-letters keyboard, which could come in handy for stylus users, I guess. In any case, it’s miles away from gboard on Android.

Stability is a mixed bag right now. As of Chrome OS 83, sites (well only Disney+ so far…) sometimes get killed with SIGILL or SIGTRAP, and the device rebooted on its own once or twice. Android apps that use the DRM sometimes do not start, and the Netflix Android app sometimes reports it cannot connect to the servers.

Performance

Performance is decent to sluggish, with micro stuttering in a lot of places. The Mediatek CPU is comparable to Intel Atoms, and with only 4GB of RAM, and an entire Android container running, it’s starting to show how weak it is.

I found that Google Docs worked perfectly fine, as did websites such as Mastodon, Twitter, Facebook. Where the device really struggled was Reddit, where closing or opening a post, or getting a reply box could take 5 seconds or more. If you are looking for a Reddit browsing device, this is not for you. Performance in Netflix was fine, and Disney+ was fairly slow but still usable.

All in all, it’s acceptable, and given the price point and the build quality, probably the compromise you’d expect.

Summary

tl;dr:

  • good: Build quality, bright screen, low price, included accessories
  • bad: DRM issues, performance, limited USB-C video output, charging speed, on-screen keyboard, software bugs

The Chromebook Duet or IdeaPad Duet Chromebook is a decent tablet that is built well above its price point. Its lackluster performance and DRM woes make it hard to give a general recommendation, though. It’s not a good laptop.

I can see this as the perfect note taking device for students, and as a cheap tablet for couch surfing, or as your on-the-go laptop replacement, if you need it only occasionally.

I cannot see anyone using this as their main laptop, although I guess some people only have phones these days, so: what do I know?

I can see you getting this device if you want to tinker with Linux on ARM, as Chromebooks are quite nice to tinker with, and a tablet is super nice.

on June 09, 2020 07:23 PM

June 04, 2020

I’m not outside

Stuart Langridge

I’m not outside.

Right now, a mass of people are in Centenary Square in Birmingham.

They’ll currently be chanting. Then there’s music and speeches and poetry and a lie-down. I’m not there. I wish I was there.

This is part of the Black Lives Matter protests going on around the world, because again a black man was murdered by police. His name was George Floyd. That was in Minneapolis; a couple of months ago Breonna Taylor, a black woman, was shot eight times by police in Louisville. Here in the UK black and minority ethnicity people die in police custody twice as much as others.

It’s 31 years to the day since the Tiananmen Square protests in China in which a man stood in front of a tank, and then he disappeared. Nobody even knows his name, or what happened to him.

The protests in Birmingham today won’t miss one individual voice, mine. And the world doesn’t need the opinion of one more white guy on what should be done about all this, about the world crashing down around our ears; better that I listen and support. I can’t go outside, because I’m immunocompromised. The government seems to flip-flop on whether it’s OK for shielding people to go out or not, but in a world where there are more UK deaths from the virus than the rest of the EU put together, where as of today nearly forty thousand people have died in this country — not been inconvenienced, not caught the flu and recovered, died, a count over half that of UK civilian deaths in World War 2 except this happened in half a year — in that world, I’m frightened of being in a large crowd, masks and social distancing or not. But the crowd are right. The city is right. When some Birmingham council worker painted over an I Can’t Breathe emblem, causing the council to claim there was no political motive behind that (tip for you: I’m sure there’s no council policy to do it, and they’ve unreservedly apologised, but whichever worker did it sure as hell had a political motive), that emblem was back in 24 hours, and in three other places around the city too. Good one, Birmingham.

Protestors in Centenary Square today

There are apparently two thousand of you. I can hear the noise from the window, and it’s wonderful. Shout for me too. Wish I could be there with you.

on June 04, 2020 03:55 PM

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Ussuri on Ubuntu 20.04 LTS and on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Ussuri release can be found at:  https://www.openstack.org/software/ussuri

To get access to the Ubuntu Ussuri packages:

Ubuntu 20.04 LTS

OpenStack Ussuri is available by default for installation on Ubuntu 20.04.

Ubuntu 18.04 LTS

The Ubuntu Cloud Archive pocket for OpenStack Ussuri can be enabled on Ubuntu 18.04 by running the following command:

sudo add-apt-repository cloud-archive:ussuri
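
After enabling the pocket, you would typically refresh the package index and install the OpenStack components you need; for example, using a package mentioned further below:

sudo apt update
sudo apt install nova-conductor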

The Ubuntu Cloud Archive for Ussuri includes updates for:

aodh, barbican, ceilometer, ceph octopus (15.2.1), cinder, designate, designate-dashboard, dpdk (19.11.1), glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (6.0.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-fwaas-dashboard, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, openvswitch (2.13.0), ovn (20.03.0), ovn-octavia-provider, panko, placement, qemu (4.2), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, watcher-dashboard, and zaqar.

For a full list of packages and versions, please refer to:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/ussuri_versions.html

Branch package builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky
sudo add-apt-repository ppa:openstack-ubuntu-testing/stein
sudo add-apt-repository ppa:openstack-ubuntu-testing/train
sudo add-apt-repository ppa:openstack-ubuntu-testing/ussuri

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Ussuri. Enjoy and see you in Victoria!

Corey

(on behalf of the Ubuntu OpenStack Engineering team)

on June 04, 2020 02:00 PM

June 03, 2020

Ardour 6.0 Information

Ubuntu Studio

Our friends at Ardour have released Version 6.0, and we would like to offer them a huge congratulations! While the source code and their own builds were available on release day, many of you have been waiting for Ardour 6.0 to come to Ubuntu’s repositories. Today, that day came. Ardour... Continue reading
on June 03, 2020 05:28 AM

June 01, 2020

I would say that this was a crazy month, but with everything ever escalating, does that even mean anything anymore?

I lost track of tracking my activities in the second half of the month, and I’m still not very good at logging the “soft stuff”, that is, non-technical work that also takes up a lot of time, but I will continue to work on it.

Towards the end of the month I spent a huge amount of time on MiniDebConf Online. I’m glad it all worked out, and I will write a separate blog entry on that. Thank you again to everyone for making it a success!

I’m also moving DPL activities to the DPL blog, so even though it’s been a busy month in the free software world… my activity log here will look somewhat deceptively short this month…

MiniDebConf Online

2020-05-06: Help prepare initial CfP mail.

2020-05-06: Process some feedback regarding accessibility on Jitsi.

Debian Packaging

2020-05-02: Upload package gnome-shell-extension-workspaces-to-dock (53-1) to Debian unstable.

2020-05-02: Upload package tetzle (2.1.6-1) to Debian unstable.

2020-05-06: Upload package bundlewrap (3.9.0-1) to Debian unstable.

2020-05-06: Accept MR#1 for connectagram.

2020-05-06: Upload package connectagram (1.2.11-1) to Debian unstable.

2020-05-07: Upload package gnome-shell-extension-multi-monitors (20-1) to Debian unstable (Closes: #956169).

2020-05-07: Upload package tanglet (1.5.6-1) to Debian unstable.

2020-05-16: Upload package calamares (3.2.24-1) to Debian unstable.

2020-05-16: Accept MR#1 for tuxpaint-config.

2020-05-16: Accept MR#7 for debian-live.

2020-05-18: Upload package bundlewrap (3.10.0) to Debian unstable.

Debian Mentoring

2020-05-02: Sponsor package gamemode (1.5.1-3) (Games team request).

2020-05-16: Sponsor package gamemode (1.5.1-4) (Games team request).

on June 01, 2020 04:57 PM

May 30, 2020


Hello students,

I no longer have access to your proposal or emails, thus the open letter on my blog.

If you allowed commenting before the student proposal deadline, I along with other admins and mentors tried to help you improve your proposal. Some of you took the suggestions and sharpened your presentation, fleshed out your timeline and in general created a proposal you can be proud of.

If you did not allow commenting or only uploaded your proposal right before the deadline, you missed out on this mentoring opportunity, and for that I am sorry. That cut us off from a vital communication link with you.

This proposal process, along with fixing some bugs and creating some commits, means that you have real experience you can take with you into the future. I hope you also learned how to use IRC/Matrix/Telegram channels to get information, and to help others as well. Even if you do not continue your involvement with the KDE Community, we hope you will profit from these accomplishments, as we have.

We hope that your experiences with the KDE community up to now make you want to continue to work with us, and become part of the community. Many students whom we were not able to accept previously were successfully accepted later. Some of those students now are mentoring and/or part of the administration team, which is, in our eyes, the zenith of GSoC success.

Some of you we were unable to accept because we could not find suitable mentors. The GSoC team is asking us this year to have three mentors per student, because the world has become so uncertain in this pandemic time. So more developers who will mentor are a precious resource.

Almost every single proposal we got this year is work we want and need, or we wouldn't have published "Ideas" to trigger those proposals. If you are interested in doing this work and do not need the funding and deadlines that GSoC provides, we would welcome working with you outside of GSoC. In fact, each year we have Season of KDE which provides some mentoring, structure and timeline and no funding. This has been very successful for most mentees. And of course all are welcome to join our worldwide army of volunteers, who code, fix bugs, triage bug reports, write, analyze, plan, administer, create graphics, art, promo copy, events, videos, tutorials, documentation, translation, internationalization, and more! It is the KDE community who makes the software, keeps it up-to-date, plans and hosts events, and engages in events planned and hosted by others.

Please join the KDE-Community mail list and dig in! Hope to see you at KDE Akademy.


Oh hey, late update: I just learned today in #gsoc that I *do* have access to all the proposals -- and names and emails! So at the very least I will send a link to this open letter to all of our prospective students. Talk to you then. -v
on May 30, 2020 09:36 PM

Ubuntu Desktop Makeover

Rolando Blanco

I must confess that since Ubuntu started, we have experienced a lot of changes on our desktop (each time for the better). However, I have always loved changing its appearance to one more suited to my particular tastes, sometimes up to 3 changes per year. This is one of the features that I like most about GNU/Linux: the freedom to adapt everything to my liking.

This time, I wanted to make some slight changes in search of elegant minimalism.

That is how I started testing a new icon pack and a tool that works as a widget and animates my desktop; for the latter I used Conky.

The end result has been this.

Below, I describe in detail the steps taken to reach this result.

Installing Conky on Ubuntu 20.04.

sudo apt update
sudo apt install conky-all conky 

When it finished installing, I proceeded to create a hidden file in my home directory called .conkyrc:

vi ~/.conkyrc

Then I inserted the following content into the file, then saved and exited:

conky.config = {
-------------------------------------
--  Generic Settings
-------------------------------------
background=true,
update_interval=1,
double_buffer=true,
no_buffers=true,
imlib_cache_size=0,
cpu_avg_samples=2,
net_avg_samples=2,
out_to_console=false,
draw_shades=false,
draw_outline=false,
draw_borders=false,
draw_graph_borders=false,
border_inner_margin=5,
border_outer_margin=0,
xinerama_head=1,
uppercase=false,
-------------------------------------
--  Window Specifications
-------------------------------------
gap_x=0,
gap_y=0,
alignment="middle_middle",
minimum_height=400,
minimum_width=600,
own_window=true,
own_window_type="dock",
own_window_transparent=true,
own_window_colour='#000000',
own_window_hints="undecorated,below,sticky,skip_taskbar,skip_pager",
own_window_argb_visual=true,
own_window_argb_value=0,
-------------------------------------
--  Text Settings
-------------------------------------
use_xft=true,
xftalpha=1,
font="Droid Sans:size=10",
text_buffer_size=256,
override_utf8_locale=true,
-------------------------------------
--  Color Scheme
-------------------------------------
default_color='FFFFFF',
color0='FFFFFF', -- clock
color1='FFFFFF', -- date
-------------------------------------
--  Locale (e.g. "es_ES.UTF-8")
--  Leave empty for default
-------------------------------------
template9=""
}
---------------------------------------------------
---------------------------------------------------
conky.text = [[
\
\
\
\
${font Ubuntu:bold One:weight=Light:size=96}${color0}\
${alignc}${time %H:%M:%S}\
${font}${color}
\
\
\
\
${font Poiret One:weight=Light:size=28}${color1}\
${voffset 30}\
${alignc}${execi 300 LANG=${template9} LC_TIME=${template9} date +"%A, %B %d"}\
${font}${color}
\
\
\
\

${font}${voffset -4}
${font sans-serif:bold:size=10}SYSTEM ${hr 2}
${font sans-serif:normal:size=8}$sysname $kernel $alignr $machine
Host:$alignr$nodename
Uptime:$alignr$uptime
File System: $alignr${fs_type}
Processes: $alignr ${execi 1000 ps aux | wc -l}

${font sans-serif:bold:size=10}CPU ${hr 2}
${font sans-serif:normal:size=8}${execi 1000 grep model /proc/cpuinfo | cut -d : -f2 | tail -1 | sed 's/\s//'}
${font sans-serif:normal:size=8}${cpugraph cpu1}
CPU: ${cpu cpu1}% ${cpubar cpu1}

${font sans-serif:bold:size=10}MEMORY ${hr 2}
${font sans-serif:normal:size=8}RAM $alignc $mem / $memmax $alignr $memperc%
$membar
SWAP $alignc ${swap} / ${swapmax} $alignr ${swapperc}%
${swapbar}

]]

When done, I started conky from the console to test it.
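
For example (if your conky build does not pick up ~/.conkyrc automatically, you can point it at the file explicitly with -c):

conky -c ~/.conkyrc &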

To finish, I made sure that conky loads automatically when my desktop starts. For this, I added it to the list of applications that run at login: open “Startup Applications” and add it like this:

Restart the computer and everything will be working.


Now I had to change the icons. For this, and as usual, I chose https://www.gnome-look.org/s/Gnome/p/1279924 and followed the installation instructions.

Once the icons were installed, I changed them from the GNOME Tweaks app, and voilà.

on May 30, 2020 04:53 AM