October 28, 2020

The web team here at Canonical runs two-week iterations. This iteration was slightly different, as we began a new cycle. A cycle represents six months of work, so we spent the first week planning and scheduling the cycle's goals. Here are the highlights of our completed work from the previous week.

Meet the team


Photo credit: Claudio Gomboli

Hello, I am Peter.  I struggle to define myself these days, especially as the past year has brought so much change to our lives.  I will start with the easy stuff. I am an American (Wisconsin and New York City) living outside London and working from home these days. I am married and a recent empty-nester as my two boys are away at university. 

I have been working on the web since 1995, doing everything from design to code, but mostly as a product manager and editor, and now running the Canonical web team. I have worked in a few industries – financial information services, IT research, children’s and educational publishing – but the nine years at Canonical have been the most memorable.

Outside work I mostly read, jog, hike and garden.

Web squad

Our Web Squad develops and maintains most of Canonical’s promotional sites like ubuntu.com, canonical.com and more.

Ubuntu 20.10 “Groovy Gorilla” release


Ubuntu is released twice a year, in April and October. There is always a lot of work to get right and a fair bit of pressure to make it perfect, as we get a lot of visitors coming to learn about and download new versions of Ubuntu. The 20.10 ‘Groovy Gorilla’ release was no exception. We updated all the download pages, added a ‘What’s new in 20.10’ strip to the desktop and server sections, and created a new homepage takeover.

Have you downloaded 20.10 yet?

All new Raspberry Pi pages


This release, we announced Ubuntu on Raspberry Pi for desktops. To support the release, we created three new pages about Ubuntu on Raspberry Pi – an overview page, an Ubuntu Desktop page, and a Server page for Raspberry Pi with the Ubuntu CLI.

Check out the new Raspberry Pi pages

Brand

The Brand team develops our design strategy and creates the look and feel for the company across many touchpoints: web, documents, exhibitions, logos and video.

Creating assets to support the 20.10 release


View all the options on our team Instagram account.


MAAS

The MAAS squad develops the UI for the MAAS project.

QA and debugging for the release of 2.9

We’ve been primarily focused on QA for the upcoming release of MAAS 2.9. We have fixed many bugs and made a significant performance improvement to loading the machine details view on large-scale MAAS installations.

Machine details React migration

Work has begun on migrating the machine details views to React, which will allow us to iterate faster on new features, and improve UI performance.

Machine details events and logs consolidation design

In the course of migrating the machine details to React, we’ll be improving the logs experience by consolidating the logs and events tabs and fixing some long-standing confusion.


JAAS

The JAAS squad develops the UI for the JAAS store and Juju Dashboard projects.

The web CLI

Coming in Juju 2.9, the Juju Dashboard will include access to the Juju Web CLI – a UI that allows you to run a subset of the Juju model commands right from within your browser.


The beta release of this feature is available to those running the Juju 2.9 beta snap: after bootstrapping, run juju upgrade-dashboard --gui-stream=devel.
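If you want to try it, a minimal sketch of that flow might look like the following (the snap channel, cloud and controller name are our assumptions for illustration; only the upgrade-dashboard command comes from the release itself):

    sudo snap install juju --classic --channel=2.9/beta   # install the Juju 2.9 beta client
    juju bootstrap localhost web-cli-demo                 # bootstrap a controller on local LXD
    juju upgrade-dashboard --gui-stream=devel             # switch the dashboard to the devel stream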

Defining the data structure of the future dashboard

We are iterating on our recently implemented layout for the Juju Dashboard, improving the information architecture and the interaction flow of the app.

Users will be able to drill down into the model view even further, from different points of view: apps, integrations, machines, actions and networking. This work will establish the layout and the entry points for other features coming this cycle.

Vanilla

The Vanilla squad designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

In-depth discussions on fixes for hiding table cells

One of the issues that we worked on during our regular maintenance involved updating a utility for hiding elements on the page to fix the issues when hiding table cells.


What may seem to be a small bugfix required quite an in-depth discussion about various aspects of the issue. We wanted to make sure we didn’t introduce any breaking changes for existing uses of the utility in different patterns, and we discussed where within the framework these changes should be implemented, how potential new utilities should be named, and so on.
While such discussions take time and slow down the review process, we know they are important because they always lead us to the best possible solutions and allow us to view the issues from different perspectives (for example taking into account future maintenance, code responsibility and backwards compatibility).


If you are interested in having a little sneak peek into our process feel free to have a look at the discussion on the pull request.

Wrapping up the accessibility work

We’ve been finalising the accessibility work from the last couple of weeks and preparing a summary blog post on that topic that will be ready for publishing soon.

Snapcraft and Charmhub

The Snapcraft team works closely with the Store team to develop and maintain the Snap Store site and Charmhub site.

CLI Guidelines on discourse


Last cycle we worked with various engineering teams to begin defining guidelines for the design of both the input and output of CLI commands. The first set of guidelines is now available on the Ubuntu Discourse, awaiting comments, suggestions and any other feedback. We will be expanding the guidelines over the next few months by looking at more complex interactions and issues.

Actions view for Charms


We have built the actions tab on charm details pages on Charmhub, listing available actions and their parameters.

History tab on a Charm details page on Charmhub


The history tab was implemented using a new “Show more” pattern built into the “Modular Table” React component.

Updated the Juju discourse navigation


A new Canonical-customised Discourse navigation was implemented on https://discourse.juju.is/. It consists of the Canonical global navigation, a main navigation and a secondary navigation. This pattern will be rolled out across all our Discourse sites over the next few weeks.

Graylog dashboards

Graylog is a tool that we use for centralised log management; it’s built on open standards for capturing, storing, and enabling real-time analysis of logs. During this iteration, we created dashboards to track the performance and usage of our services.


User testing on the charm details pages

We ran some user testing sessions on the current live pages on charmhub.io and on some of the designs for the upcoming charm detail pages. From the feedback, we realised that the proposed “Libraries” tab is a point of confusion for many users who are not familiar with this new concept, which is introduced with operators. Because of this, we have created a new section on the page to introduce the concept and help users become familiar with libraries.


Follow the team on Instagram


Ubuntu designers on Instagram

With ♥ from the Canonical web team.

on October 28, 2020 09:01 AM

October 27, 2020

Security and performance are often mutually exclusive concepts. A great user experience is one that manages to blend the two in a way that does not compromise on robust, solid foundations of security on one hand, and a fast, responsive software interaction on the other.

Snaps are self-contained applications, with layered security, and as a result, sometimes, they may have reduced perceived performance compared to those same applications offered via traditional Linux packaging mechanisms. We are well aware of this phenomenon, and we have invested significant effort and time in resolving any speed gaps, while keeping security in mind. Last year, we talked about improved snap startup times following fontconfig cache optimization. Now, we want to tell you about another major milestone – the use of a new compression algorithm for snaps offers 2-3x improvement in application startup times!

LZO and XZ algorithms

By default, snaps are packaged as a compressed, read-only squashfs filesystem using the XZ algorithm. This results in a high level of compression but consequently requires more processing power to uncompress and expand the filesystem for use. On the desktop, users may perceive this as “slowness” – the time it takes for the application to launch. This is far more noticeable on first launch, before the application data is cached in memory. Subsequent launches are fast, and typically there’s little to no difference compared to traditionally packaged applications.

To improve startup times, we decided to test a different algorithm – LZO – which offers less compression but needs less processing power to decompress.
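As a rough illustration of what this means in practice, a snap can be repacked locally with LZO, assuming a recent snapd whose snap pack command supports the --compression option (this is our own sketch, not the method used to produce the test builds):

    unsquashfs my-app.snap                        # unpack the existing XZ-compressed snap
    snap pack squashfs-root --compression=lzo     # repack the same content using LZO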

As a test case, we chose the Chromium browser (stable build, 85.X). We believe this is a highly representative case, for several reasons. One, the browser is a ubiquitous (and popular) application, with frequent usage, so any potential slowness is likely to be noticeable. Two, Chromium is a relatively large and complex application. Three, it is not part of any specific Linux desktop environment, which makes the testing independent and accurate.

For comparison, the XZ-compressed snap weighs ~150 MB, whereas the one using the LZO compression is ~250 MB in size.

Test systems & methodology

We decided to conduct the testing on a range of systems (2015-2020 laptop models), including HDD, SSD and NVMe storage, Intel and Nvidia graphics, as well as several operating systems, including Kubuntu 18.04, Ubuntu 20.04 LTS, Ubuntu 20.10 (pre-release at the time of writing), and Fedora 32 Workstation (just before Fedora 33 release). We believe this offers a good mix of hardware and software, allowing us a broader understanding of our work.

  • System 1 with 4-core/8-thread Intel(R) i5(TM) processor, 16GB RAM, 500GB SSD, and Intel(R) UHD 620 graphics, running Kubuntu 18.04.
  • System 2 with 4-core Intel(R) i3(TM) processor, 4GB RAM, 1TB 5,400rpm mechanical hard disk, and Intel(R) HD 440 graphics, running Ubuntu 20.04 LTS.
  • System 3 with 4-core Intel(R) i3(TM) processor, 4GB RAM, 1TB 5,400rpm mechanical hard disk, and Intel(R) HD 440 graphics, running Fedora 32 Workstation.
  • System 4 with 4-core/8-thread Intel(R) i7(TM) processor, 64GB RAM, 1TB NVMe hard disk, and Nvidia GM204M (GeForce GTX 980M), running Ubuntu 20.10.
Platform       System 1            System 2           System 3           System 4
Snapd version  2.46.1+18.04        2.47               2.45.3.1-1.fc32    2.47.1+20.10
Kernel         4.15.0-118-generic  5.4.0-48-generic   5.8.13-200.fc32    5.8.0-21-generic
DE             Plasma              GNOME              GNOME              GNOME

On each of the selected systems, we examined the time it takes to launch and display the browser window for:

  • Native package (DEB or RPM) where available (Kubuntu 18.04 and Fedora 32).
  • Snap with XZ compression (all systems).
  • Snap with LZO compression (all systems).

We compared the results in the following way:

  • Cold start – There is no cached data in the memory.
  • Hot start – The browser data is cached in the memory.

Results!

We measured the startup time for the Chromium browser with a new, unused profile. Please note that these results are highly indicative, but there is always a degree of variance in interactive usage measurements, which can result from things like your overall system state, the current system load due to other, background activities, disk usage, your browser profile and add-ons, and other factors.

Chromium startup time (cold/hot, seconds):

              Native package (DEB/RPM)   Snap with XZ compression   Snap with LZO compression
System 1      1.7 / 0.6                  8.1 / 0.7                  3.1 / 0.6
System 2      NA                         18.4 / 1.2                 11.1 / 1.2
System 3      15.3 / 1.3                 34.9 / 1.1                 10.1 / 1.3
System 4      NA                         10.5 / 1.4                 2.6 / 0.9
  • The results in the table are average values over multiple runs. The standard deviation is ~0.7 seconds for the cold startups, and ~0.1 seconds for the hot startups.
  • The use of the LZO compression offers 40-74% cold startup improvements over the XZ compression (a worked example follows this list).
  • On the Kubuntu 18.04 system, which still has Chromium available as a DEB package, the LZO-compressed snap now offers near-identical startup performance!
  • On Fedora 32 Workstation, the LZO-compressed snap cold startup is faster than the RPM package by a rather respectable 33% (actual ~5.0 seconds difference).
  • Hot startups are largely independent of the packaging format selection.
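As a worked example of where those percentages come from in the table above: on System 2 the cold start drops from 18.4 s (XZ) to 11.1 s (LZO), i.e. 1 − 11.1/18.4 ≈ 40% faster, while on System 3 it drops from 34.9 s to 10.1 s, roughly a 71% improvement.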

If you’d like to test for yourself…

You may be interested in profiling the startup time of your browser – or any application for that matter. To that end, we’ve compiled a script, which you can download (link to a GitHub Gist), make the file executable, and run on your system. The script allows you to compare the startup time of any native-packaged software with snaps, and is designed to work with any package manager, so you can use this on Ubuntu, Fedora, openSUSE, Manjaro, etc.

To prevent any potential data loss, the functions are commented out in the main section of the script, so you will need to uncomment them manually before the script does anything.
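If you just want a quick feel for the cold/hot difference before running the full script, here is a heavily simplified sketch of the idea (our own approximation, not the linked script: it drops the kernel page cache for the cold case and times a headless page dump rather than time-to-window, and it assumes the chromium snap and GNU time are installed):

    # Cold start: flush the page cache so nothing from the snap is cached in memory
    sync; echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null
    /usr/bin/time -f "cold: %e s" snap run chromium --headless --disable-gpu --dump-dom about:blank > /dev/null
    # Hot start: run again immediately, with the snap data now cached
    /usr/bin/time -f "hot: %e s" snap run chromium --headless --disable-gpu --dump-dom about:blank > /dev/null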

Summary

We are happy with the improvements that the LZO compression introduces, as it allows users to have a faster, more streamlined experience with their snaps. We can now examine the optimal way to introduce and roll out similar changes with other snaps.

And this is by no means the end of the journey! Far from it. We are working on a whole range of additional improvements and optimizations. When it comes to size, you can use content snaps and stage snaps to reduce the size of your snaps, as well as utilize snapcraft extensions. We’re also working on a fresh set of font cache fixes, and there’s a rather compelling story on this topic, as well, which we will share soon. In the near future, we intend to publish a guide that helps developers trim down their snaps and reduce their overall size, all of which can help create leaner, faster applications.

If you have any comments or suggestions on this topic, we’d like to hear them. You can tell us about your own findings on snap startup performance, and point us to any glaring issues or problems you believe should be addressed, including any specific snaps you think should be profiled and optimized. We are constantly working on improving the user experience, and we take note of any feedback you may have. Meanwhile, enjoy your snappier browsing!

Photo by Ralph Blvmberg on Unsplash.

on October 27, 2020 04:20 PM

October 26, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 654 for the week of October 18 – 24, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on October 26, 2020 09:47 PM

October 24, 2020

Thanks to all the Ubuntu Members that voted in the election, I am proud to announce our new Ubuntu Community Council!

The full results of the election can be seen here but our winners are:

  • Walter Lapchynski
  • Lina Elizabeth Porras Santana
  • Thomas Ward
  • José Antonio Rey
  • Nathan Haines
  • Torsten Franz
  • Erich Eichmeyer

Congratulations to all of them! They will serve on the Council for the next two years.

Should there be any pressing business that the Council should deal with, especially given the long absence of the Council, please contact the Council mailing list at community-council@lists.ubuntu.com.

Again, thanks to everyone involved for making Ubuntu and its community better!

on October 24, 2020 07:37 PM

October 23, 2020

Thanks to all the hard work from our contributors, Lubuntu 20.10 has been released! With the codename Groovy Gorilla, Lubuntu 20.10 is the 19th release of Lubuntu and the fifth release with LXQt as the default desktop environment. Support lifespan: Lubuntu 20.10 will be supported until July 2021. Our main focus will be on […]
on October 23, 2020 01:55 AM

October 22, 2020

Ep 113 – Cirurgia

Podcast Ubuntu Portugal

Have you voted for Podcast Ubuntu Portugal on podes.pt yet? No? Then read no further and go to https://podes.pt/votar/, type Podcast Ubuntu Portugal and click VOTAR. Don’t fail the arithmetic and repeat as many times as you can.

You know the drill: listen, subscribe and share!

  • https://forum.pine64.org/showthread.php?tid=11772
  • https://forum.snapcraft.io/t/call-for-suggestions-featured-snaps-friday-9th-october-2020/20384
  • https://github.com/ubports/ubports-installer/releases
  • https://joplinapp.org/
  • https://snapcraft.io/joplin-james-carroll
  • https://snapstats.org/snaps/flameshot
  • https://twitter.com/m_wimpress/status/1314315931468914689
  • https://twitter.com/m_wimpress/status/1314497286425268224
  • https://twitter.com/stgraber/status/1314625640629448705
  • https://twitter.com/thefxtec/status/1314550781509541889
  • https://twitter.com/thepine64/status/1314911896177389570
  • https://ubuntu.com/blog/how-to-make-snaps-and-configuration-management-tools-work-together
  • https://www.meshtastic.org/
  • https://www.npmjs.com/package/android-tools-bin
  • https://www.pine64.org/2020/10/15/update-new-hacktober-gear/
  • https://www.youtube.com/channel/UCuP6xPt0WTeZu32CkQPpbvA/
  • https://podes.pt/
  • Support

    You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
    You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
    We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay however much you want.

    If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

    Attribution and licences

    This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

    This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on October 22, 2020 09:45 PM

KDE Plasma-Desktop

The Kubuntu community are delighted to announce the release of Kubuntu 20.10 Groovy Gorilla. For this release Kubuntu ships with Plasma 5.19.5 and Applications 20.08. The desktop carries the fresh new look and gorgeous wallpaper design selected by the KDE Visual Design Group.

 

Cloud Ready

With the rapid growth in cloud-native technologies, the Kubuntu community recognises that Kubuntu users need access to cloud and container technologies.
Kubuntu 20.10 also includes LXD 4.6 and MicroK8s 1.19 for resilient micro clouds – small clusters of servers providing VMs and Kubernetes.

Kubuntu 20.10 includes KDE Applications 20.08.

Dolphin, KDE’s file explorer, for example, adds previews for more types of files and improvements to the way long names are summarized, allowing you to better see what each file is or does. Dolphin also improves the way you can reach files and directories on remote machines, making working from home a much smoother experience. It also remembers the location you were viewing the last time you closed it, making it easier to pick up from where you left off.

For those of you into photography, KDE’s professional photo management application, digiKam has just released its version 7.0.0. The highlight here is the smart face recognition feature that uses deep-learning to match faces to names and even recognizes pets.

If it is the night sky you like photographing, you must try the new version of KStars. Apart from letting you explore the Universe and identify stars from your desktop and mobile phone, new features include more ways to calibrate your telescope and get the perfect shot of heavenly bodies.

And there’s much more: KDE’s terminal emulators Konsole and Yakuake; Elisa, the music player that looks great; the text editor Kate; KDE’s image viewer Gwenview; and literally dozens of other applications are all updated with new features, bugfixes and improved interfaces to help you become more productive and make the time you spend with KDE software more pleasurable and fun.

on October 22, 2020 09:15 PM

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Victoria on Ubuntu 20.10 (Groovy Gorilla) and Ubuntu 20.04 LTS (Focal Fossa) via the Ubuntu Cloud Archive. Details of the Victoria release can be found at:  https://www.openstack.org/software/victoria.

To get access to the Ubuntu Victoria packages:

Ubuntu 20.10

OpenStack Victoria is available by default for installation on Ubuntu 20.10.

Ubuntu 20.04 LTS

The Ubuntu Cloud Archive for OpenStack Victoria can be enabled on Ubuntu 20.04 by running the following command:

sudo add-apt-repository cloud-archive:victoria
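
After enabling the archive, refresh the package lists and upgrade as usual (standard apt steps, not taken verbatim from the announcement):

    sudo apt update
    sudo apt dist-upgrade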

The Ubuntu Cloud Archive for Victoria includes updates for:

aodh, barbican, ceilometer, cinder, designate, designate-dashboard, glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, magnum, manila, manila-ui, masakari, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-baremetal, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-sfc, neutron, neutron-dynamic-routing, neutron-vpnaas, nova, octavia, octavia-dashboard, openstack-trove, trove-dashboard, ovn-octavia-provider, panko, placement, sahara, sahara-dashboard, sahara-plugin-spark, sahara-plugin-vanilla, senlin, swift, vmware-nsx, watcher, watcher-dashboard, and zaqar.

For a full list of packages and versions, please refer to:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/victoria_versions.html

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

sudo ubuntu-bug nova-conductor

Thank you to everyone who contributed to OpenStack Victoria. Enjoy and see you in Wallaby!

Corey

(on behalf of the Ubuntu OpenStack Engineering team)

on October 22, 2020 08:11 PM

The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 20.10, code-named “Groovy Gorilla”. This marks Ubuntu Studio’s 28th release. This release is a regular release, and as such it is supported for nine months until July 2021.

Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list of changes and known issues.

You can download Ubuntu Studio 20.10 from our download page.

If you find Ubuntu Studio useful, please consider making a contribution.

Upgrading

Due to the change in desktop environment this release, direct upgrades to Ubuntu Studio 20.10 are not supported. We recommend a clean install for this release:

  1. Backup your home directory (/home/{username}) – see the rsync sketch after this list
  2. Install Ubuntu Studio 20.10
  3. Copy the contents of your backed-up home directory to your new home directory.
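
For steps 1 and 3, a minimal rsync sketch is shown below (assumptions: your backup drive is mounted at /media/backup and your username is unchanged on the new install):

    # Step 1: back up your home directory to the external drive
    rsync -a --info=progress2 /home/$USER/ /media/backup/$USER-home/
    # Step 3, after installing 20.10: restore the contents into your new home directory
    rsync -a --info=progress2 /media/backup/$USER-home/ /home/$USER/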

New This Release

The biggest new feature is the switch of desktop environment to KDE Plasma. We believe this will provide a more cohesive and integrated experience for many of the applications that we include by default. We have previously outlined our reasoning for this switch as part of our 20.04 LTS release announcement.

This release includes Plasma 5.19.5. If you would like a newer version, the Kubuntu Backports PPA may include a newer version of Plasma when ready.

We are excited to be a part of the KDE community with this change, and have embraced the warm welcome we have received.

You will notice that our theming and layout of Plasma looks very much like our Xfce theming. (Spoiler: it’s the same theme and layout!)

Audio

Studio Controls replaces Ubuntu Studio Controls

Ubuntu Studio Controls has been spun off into an independent project called Studio Controls. It contains much of the same functionality, but is also available to many more projects than just Ubuntu Studio. Studio Controls remains the easiest and most straightforward way to configure the JACK Audio Connection Kit and provides easy access to tools to help you use it.

Ardour 6.3

We are including the latest version of Ardour, version 6.3. This version has plenty of new features outlined at the Ardour website, but comes with one caveat:

Projects imported from Ardour 5.x are permanently changed to the new format. As such, plugins, if they are not installed, will not be detected and will result in a “stub” plugin. Additionally, Ardour 6 includes a new Digital Signal Processor, meaning projects may not sound the same. If you do not need the new functionality of Ardour 6, do not upgrade to Ubuntu Studio 20.10.

Other Notable Updates

We’ve added several new audio plugins this cycle, most notably:

  • Add64
  • Geonkick
  • Dragonfly Reverb
  • Bsequencer
  • Bslizr
  • Bchoppr

Carla has been upgraded to version 2.2. Full release announcement at kx.studio.

Video

OBS Studio

Our inclusion of OBS Studio has been praised by many. Our goal is to become the #1 choice for live streaming and recording, and we hope that including OBS Studio out of the box helps usher this in. With the game availability on Steam, which runs natively on Ubuntu Studio and is easily installed, and with Steam’s development of Proton for Windows games, we believe game streamers and other streamers on YouTube, Facebook, and Twitch would benefit from such an all-inclusive operating system that would save them both money and time.

Included this cycle is OBS Studio 26.0.2, which includes several new features and additions, too numerous to list here.

For those that would like to use the advanced audio processing power of JACK with OBS Studio, OBS Studio is JACK-aware!

Kdenlive

We have chosen Kdenlive as our default video editor for several reasons. The largest is that it is the most professional video editor included in the Ubuntu repositories, but it also integrates very well with the Plasma desktop.

This release brings version 20.08.1, which includes several new features that have been outlined at their website.

Graphics and Photography

Krita

Artists will be glad to see Krita upgraded to version 4.3. While this may not be the latest release, it does include a number of new features over the version included with Ubuntu Studio 20.04.

For a full list of new features, check out the Krita website.

Darktable

This version of the icon seemed appropriate for an October release. :)

For photographers, you’ll be glad to see Darktable 3.2.1 included by default. Additionally, Darktable has been chosen as our default RAW Image Processing Platform.

With Darktable 3.2 come some major changes, such as an overhaul of the Lighttable, a new snapshot comparison line, improved tooltips, and more! For a complete list, check out the Darktable website.

Introducing Digikam

For the first time in Ubuntu Studio, we are including the KDE application Digikam by default. Digikam is the most-advanced photo editing and cataloging tool in Open Source and includes a number of major features that integrate well into the Plasma desktop.

The version we have by default is version 6.4.0. For more information about Digikam 6.4.0, read the release announcement.

We realize that the version we include, 6.4.0, is not the most recent version, which is why we include Digikam 7.1.0 in the Ubuntu Studio Backports PPA.

For more information about Digikam 7.1.0, read the release announcement.

More Updates

There are many more updates not covered here but are mentioned in the Release Notes. We highly recommend reading those release notes so you know what has been updated and know any known issues that you may encounter.

Introducing the Ubuntu Studio Marketplace

Have you ever wanted to buy some gear to show off your love for Ubuntu Studio? Now you can! We just launched the Ubuntu Studio Marketplace. From now until October 27th, you can get our special launch discount of 15% off.

We have items like backpacks, coffee mugs, buttons, and more! Items for men, women, and children, even babies! Get your gear today!

Proceeds from commissions go toward supporting further Ubuntu Studio development.

Now Accepting Donations!

If you find Ubuntu Studio useful, we highly encourage you to donate toward its prolonged development. We would be grateful for any donations given!

Three ways to donate!

Patreon

Become a Patron!

The official launch date of our Patreon campaign is TODAY! We have many goals, including being able to pay one or more developers at least a part-time wage for their work on Ubuntu Studio. We also have some benefits we would like to offer our patrons. We are still hammering out those benefits, and we would love to hear some feedback about what they might be. Become a patron, and we can have that conversation together!

Liberapay

Liberapay is a great way to donate to Ubuntu Studio. It is built around projects, like ours, that are made of and using free and open source software. Their system is designed to provide stable crowdfunded income to creators.

PayPal

You can also donate directly via PayPal. You can establish either monthly recurring donations or make one-time donations. Whatever you decide is appreciated!

Get Involved!

Another great way to contribute is to get involved with the project directly! We’re always looking for new volunteers to help with packaging, documentation, tutorials, user support, and MORE! Check out all the ways you can contribute!

Special Thanks

Huge special thanks for this release go to:

  • Len Ovens: Studio Controls, Ubuntu Studio Installer, Coding
  • Thomas Ward: Packaging, Ubuntu Core Developer for Ubuntu Studio
  • Eylul Dogruel: Artwork, Graphics Design, Website Lead
  • Ross Gammon: Upstream Debian Developer, Guidance
  • Rik Mills: Kubuntu Council Member, help with Plasma desktop
  • Mauro Gaspari: Tutorials, promotion, and documentation
  • Krytarik Raido: IRC Moderator, Mailing List Moderator
  • Erich Eickmeyer: Project Leader, Packaging, Direction, Treasurer, KDE Plasma Transition
on October 22, 2020 06:30 PM

S13E31 – Cheers with water

Ubuntu Podcast from the UK LoCo

This week we’ve been upgrading computers and eBaying stuff. We discuss the Windows Calculator coming to Linux, the Microsoft Edge browser coming to Linux, Ubuntu Community Council elections and LibreOffice getting Yaru icons. We also round up our picks from the general tech news.

It’s Season 13 Episode 31 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on October 22, 2020 02:00 PM

The releases following an LTS are always a good time ⌚ to make changes that set the future direction 🗺️ of the distribution, with an eye on where we want to be for the next LTS release. Therefore, Ubuntu MATE 20.10 ships with the latest MATE Desktop 1.24.1, keeps pace with other developments within Ubuntu (such as Active Directory authentication) and migrates to the Ayatana Indicators project.

If you want bug fixes 🐛, kernel updates 🌽, a new web camera control 🎥, and a new indicator 👉 experience, then 20.10 is for you 🎉. Ubuntu MATE 20.10 will be supported for 9 months until July 2021. If you need Long Term Support, we recommend you use Ubuntu MATE 20.04 LTS.

Read on to learn more… :point_down:

Ubuntu MATE 20.10 (Groovy Gorilla)

What’s changed since Ubuntu MATE 20.04?

MATE Desktop

If you follow the Ubuntu MATE twitter account 🐦 you’ll know that MATE Desktop 1.24.1 was recently released. Naturally Ubuntu MATE 20.10 features that maintenance release of MATE Desktop. In addition, we have prepared updated MATE Desktop 1.24.1 packages for Ubuntu MATE 20.04 that are currently in the SRU process. Given the number of MATE packages being updated in 20.04, it might take some time ⏳ for all the updates to land, but we’re hopeful that the fixes and improvements from MATE Desktop 1.24.1 will soon be available for those of you running 20.04 LTS 👍

Active Directory

The Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. We’ve been tracking that work and the same capability is available in Ubuntu MATE too.

Active Directory enrollment: enroll your computer into an Active Directory domain

Ayatana Indicators

There is a significant under-the-hood change 🔧 in Ubuntu MATE 20.10 that you might not even notice 👀 at a surface level: we’ve replaced Ubuntu Indicators with Ayatana Indicators.

We’ll explain some of the background, why we’ve made this change, the short term impact and the long term benefits.

What are Ayatana Indicators?

In short, Ayatana Indicators is a fork of Ubuntu Indicators that aims to be cross-distro compatible and re-usable for any desktop environment 👌 Indicators were developed by Canonical some years ago, initially for the GNOME2 implementation in Ubuntu and then refined for use in the Unity desktop. Ubuntu MATE has supported the Ubuntu Indicators for some years now and we’ve contributed patches to integrate MATE support into the suite of Ubuntu Indicators. Existing indicators are compatible with Ayatana Indicators.

We have migrated Ubuntu MATE 20.10 to Ayatana Indicators and Arctica Greeter. I live streamed 📡 the development work to switch from Ubuntu Indicators to Ayatana Indicators which you can find below if you’re interested in some of the technical details 🤓

The benefits of Ayatana Indicators

Ubuntu MATE 20.10 is our first release to feature Ayatana Indicators and as such there are a couple of drawbacks; there is no messages indicator and no graphical tool to configure the display manager greeter (login window) 😞

Both will return in a future release and the greeter can be configured using dconf-editor in the meantime.

Configuring Arctica Greeter with dconf-editor
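
If you prefer the terminal over dconf-editor, a safe way to discover which greeter settings exist on your system (without assuming a particular schema name, which we have not verified here) is:

    # List all installed GSettings keys and filter for greeter-related ones
    gsettings list-recursively | grep -i greeter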

That said, there are significant benefits that result from migrating to Ayatana Indicators:

  • Debian and Ubuntu MATE are now aligned with regards to Indicator support; patches are no longer required in Ubuntu MATE which reduces the maintenance overhead.
  • MATE Tweak is now a cross-distro application, without the need for distro specific patches.
  • We’ve switched from Slick Greeter to Arctica Greeter (both forks of Unity Greeter)
    • Arctica Greeter integrates completely with Ayatana Indicators; so there is now a consistent Indicator experience in the greeter and desktop environment.
  • Multiple projects are now using Ayatana Indicators, including desktop environments, distros and even mobile phone projects such as UBports. With more developers collaborating in one place we are seeing the collection of available indicators grow 📈
  • Through UBports contributions to Ayatana Indicators we will soon have a Bluetooth indicator that can replace Blueman, providing a much simpler way to connect and manage Bluetooth devices. UBports have also been working on a network indicator and we hope to consolidate that to provide improved network management as well.
  • Other indicators that are being worked on include printers, accessibility, keyboard (long absent from Ubuntu MATE), webmail and display.

So, that is the backstory about how developers from different projects come together to collaborate on a shared interest and improve software for their users 💪

Webcamoid

We’ve replaced Cheese 🧀 with Webcamoid 🎥 as the default webcam tool for several reasons.

  • Webcamoid is a full webcam/capture configuration tool with recording, overlays and more, unlike Cheese. While there were initial concerns 😔, since Webcamoid is a Qt5 app, nearly all the requirements in the image are pulled in via YouTube-DL 🎉.
  • We’ve disabled notifications 🔔 for Webcamoid updates if installed from the universe pocket as a deb version, since this would cause errors in the user’s system and force them to download a non-deb version. This only affects users who don’t have an existing Webcamoid configuration.

Linux Kernel

Ubuntu MATE 20.10 includes the 5.8 Linux kernel. This includes numerous updates and added support since the 5.4 Linux kernel released in Ubuntu 20.04 LTS. Some notable examples include:

  • Airtime Queue limits for better WiFi connection quality
  • Btrfs RAID1 with 3 and 4 copies and more checksum alternatives
  • USB 4 (Thunderbolt 3 protocol) support added
  • X86 Enable 5-level paging support by default
  • Intel Gen11 (Ice Lake) and Gen12 (Tiger Lake) graphics support
  • Initial support for AMD Family 19h (Zen 3)
  • Thermal pressure tracking for better task placement with respect to CPU cores
  • XFS online repair
  • OverlayFS pairing with VirtIO-FS
  • General Notification Queue for key/keyring notification, mount changes, etc.
  • Active State Power Management (ASPM) for improved power savings of PCIe-to-PCI devices
  • Initial support for POWER10

Raspberry Pi images

We have been preparing Ubuntu MATE 20.04 images for the Raspberry Pi and we will be releasing final images for 20.04 and 20.10 in the coming days 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.8 are Firefox 81, LibreOffice 7.0.2, Evolution 3.38 & Celluloid 0.18.

Major Applications

See the Ubuntu 20.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 20.10

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 20.04 LTS

You can upgrade to Ubuntu MATE 20.10 from Ubuntu MATE 20.04 LTS. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open the “Software & Updates” from the Control Center.
  • Select the 3rd Tab called “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
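
If you would rather upgrade from a terminal, a roughly equivalent route (our own sketch, not part of the official instructions above) is to allow non-LTS upgrades and then run the release upgrader:

    # Allow upgrades to normal (non-LTS) releases, then start the upgrade
    sudo sed -i 's/^Prompt=.*/Prompt=normal/' /etc/update-manager/release-upgrades
    sudo do-release-upgrade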

Known Issues

Here are the known issues.

Component: Ayatana Indicators
Problem: Clock missing on panel upon upgrade to 20.10

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on October 22, 2020 12:00 AM

October 18, 2020

Full Circle Weekly News #186

Full Circle Magazine


Linux Mint 20.1 Will Arrive Mid-December
https://blog.linuxmint.com/?p=3969
on October 18, 2020 01:09 PM

October 17, 2020

If you’re a prior reader of the blog, you probably know that when I have the opportunity to take a training class, I like to write a review of the course. It’s often hard to find public feedback on trainings, which feels frustrating when you’re spending thousands of dollars on that course.

Last week, I took the “Reverse Engineering with Ghidra” course taught by Jeremy Blackthorne (0xJeremy) of the Boston Cybernetics Institute. It was ostensibly offered as part of the Infiltrate Conference, but 2020 being what it is, there was no conference and it was just an online training. Unfortunately for me, it was run on East Coast time and I’m on the West Coast, so I got to enjoy some early mornings.

I won’t bury the lede here – on the whole, the course was a high-quality experience taught by an instructor who is clearly both passionate and experienced with technical instruction. I would highly recommend this course if you have little experience in reverse engineering and want to get bootstrapped on performing reversing with Ghidra. You absolutely do need to have some understanding of how programs work – memory sections, control flow, how data and code is represented in memory, etc., but you don’t need to have any meaningful RE experience. (At least, that’s my takeaway, see the course syllabus for more details.)

I would say that about 75% of the total time was spent executing labs and the other 25% was spent with lecture. The lecture time, however, had very little prepared material to read – most of it was live demonstration of the toolset, which made for a great experience when he would answer questions by showing you exactly how to get something done in Ghidra.

Like many information security courses, they provide a virtual machine image with all of the software installed and configured. Interestingly, they seem to share this image across multiple courses, so the actual exercises are downloaded by the student during the course. They provide both VirtualBox and VMWare VMs, but both are OVAs which should be importable into either virtualization platform. Because I always need to make things harder on myself, I actually used QEMU/KVM virtualization for the course, and it worked just fine as well.

The coverage of Ghidra as a tool for reversing was excellent. The majority of the time was spent on manual analysis tasks with examples in a variety of architectures. I believe we saw X86, AMD64, MIPS, ARM, and PowerPC throughout the course. Most of the reversing tasks were a sort of “crack me” style challenge, which was a fitting way to introduce the Ghidra toolkit.

We also spent some time on two separate aspects of Ghidra programming – extending Ghidra with scripts, plugins, and tools, and headless analysis of programs using the GhidraScript API. Though Ghidra is a Java program, it has both Java APIs and Jython bindings to those APIs, and all of the headless analysis exercises were done in Python (Jython).
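As an illustration of what the headless workflow looks like (the paths, project name and script name here are hypothetical, not taken from the course materials):

    # Run Ghidra's headless analyzer, importing a binary and running a Jython post-script
    $GHIDRA_INSTALL_DIR/support/analyzeHeadless /tmp/ghidra-projects DemoProject \
        -import ./crackme.bin \
        -scriptPath ./ghidra_scripts \
        -postScript FindStrings.py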

Jeremy did a great job of explaining the material and was very clear in his teaching style. He provided support for students who were having issues without disrupting the flow for other students. One interesting approach is encouraging students to just keep going through the labs when they finish one, rather than waiting for that lab to be introduced. This ensures that nobody is sitting idle waiting for the course to move forward, and provides students the opportunity to learn and discover the tools on their own before the in-course coverage.

One key feature of Jeremy’s teaching approach is the extensive use of Jupyter notebooks for the lab exercises. This encourages students to produce a log of their work, as you can directly embed shell commands and Python scripts (along with their output) as well as Markdown that can include images or other resources. A sort of hidden gem of his approach was also an introduction to the Flameshot screenshot tool. This tool lets you add boxes, arrows, highlights, redactions, etc., to your screenshot directly in an on-screen overlay. I hadn’t seen it before, but I think it’ll be my go-to screenshot tool in the future.

Other tooling used for making this a remote course included a Zoom meeting for the main lecture and a Discord channel for class discussion. Exercises and materials were shared via a Sharepoint server. Zoom was particularly nice because Jeremy recorded his end of the call and uploaded the recordings to the Sharepoint server, so if you wanted to revisit anything, you had both the lecture notes and video. (This is important since so much of the class was done as live demo instead of slides/text.)

It’s also worth noting that it was clear Jeremy adjusted the course contents and pace to match the students’ goals and pace. At the beginning, he asked each student about their background and what they hoped to get out of the course, and he would regularly ask us to privately message him with the exercise we were currently working on (the remote version of the instructor walking around the room) to get a sense of the pace. BCI clearly has more exercises than can fit in the four-day timing of the course, so Jeremy selected the ones most relevant to the students’ goals, but then provided all the materials at the end of the course so we could go forth and learn more on our own time. This was a really nice element that helped us get the most out of the course.

The combination of the live demo lecture style, lots of lab/hands-on exercises, and customized content and pace really worked well for me. I feel like I got a lot out of the course and am at least somewhat comfortable using Ghidra now. Overall, definitely a recommendation for those newer to reverse engineering or looking to use Ghidra for the first time.

I also recently purchased The Ghidra Book so I thought I’d make a quick comparison. The Ghidra Book looks like good reference material, but not a way to learn from first principles. If you haven’t used Ghidra at all, taking a course will be a much better way to get up to speed.

on October 17, 2020 07:00 AM

October 15, 2020

Have you voted for Podcast Ubuntu Portugal on podes.pt yet? No? Then read no further and go to https://podes.pt/votar/, type Podcast Ubuntu Portugal and click VOTAR. Don’t fail the arithmetic and repeat as many times as you can.

You know the drill: listen, subscribe and share!

  • https://collaboraonline.github.io/
  • https://events.opensuse.org/conferences/oSLO
  • https://www.humblebundle.com/books/learn-to-code-the-fun-way-no-starch-press-books?partner=pup
  • https://www.jonobacon.com/webinars/content/
  • https://www.twitch.tv/videos/763496146
  • https://podes.pt/votar/

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay however much you want.

If you are interested in other bundles not listed in the notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on October 15, 2020 09:45 PM

About Website Security

Ubuntu Studio

UPDATE 2020-10-16: This is now fixed.

We are aware that, as of this writing, our website is not 100% HTTPS. Our website is hosted by Canonical. There is an open ticket to get everything changed over, but these things take time. There is nothing the Ubuntu Studio team can do to speed this along or fix it ourselves. If you explicitly type https:// into your web browser, you should get the secure SSL version of our site.

Our download links, merchandise stores, and donation links are unaffected by this as they are hosted elsewhere.

We thank you for your understanding.

on October 15, 2020 05:21 PM

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In September, 208.25 work hours were dispatched among 13 paid contributors. Their reports are available:
  • Abhijith PA did 12.0h (out of 14h assigned), thus carrying over 2h to October.
  • Adrian Bunk did 14h (out of 19.75h assigned), thus carrying over 5.75h to October.
  • Ben Hutchings did 8.25h (out of 16h assigned and 9.75h from August), but gave back 7.75h, thus carrying over 9.75h to October.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 19.75h (out of 19.75h assigned).
  • Holger Levsen did 5h coordinating/managing the LTS team.
  • Markus Koschany did 31.75h (out of 19.75h assigned and 12h from August).
  • Ola Lundqvist did 9.5h (out of 12h from August), thus carrying 2.5h to October.
  • Roberto C. Sánchez did 19.75h (out of 19.75h assigned).
  • Sylvain Beucler did 19.75h (out of 19.75h assigned).
  • Thorsten Alteholz did 19.75h (out of 19.75h assigned).
  • Utkarsh Gupta did 8.75h (out of 19.75h assigned), while he already anticipated the remaining 11h in August.

Evolution of the situation

September was a regular LTS month with an IRC meeting.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file has 48 packages needing an update.

Thanks to our sponsors

Sponsors that joined recently are in bold.


on October 15, 2020 02:07 PM

S13E30 – Whistling indoors

Ubuntu Podcast from the UK LoCo

This week we’ve been upgrading our GPUs. We discuss our experiences using IoT devices, bring you some command line love and go over all your wonderful feedback.

It’s Season 13 Episode 30 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

bpytop

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on October 15, 2020 02:00 PM

October 13, 2020

Kubuntu Focus Model 2 Launched

Kubuntu General News

The Kubuntu Focus team, announce the immediate availability of their second generation laptop, the Kubuntu Focus M2.

Customers experience power out of the box, acclaimed by both experts and new users alike. The finely tuned Focus virtually eliminates the need to configure the OS, applications, or updates. Kubuntu combines industry-standard Ubuntu 20.04 LTS with the beautiful yet familiar KDE desktop. With dozens of Guided Solutions and unparalleled support, the shortest path to Linux success is the Focus.

The M2 is available now and is smaller, lighter, and faster than the prior generation M1. The 8c/16t i7-10875H CPU is faster by 17% single-core and 58% multi-core.

Full details are available on the Kubuntu Focus website at kfocus.org

on October 13, 2020 06:31 PM

October 11, 2020

Full Circle Weekly News #185

Full Circle Magazine


Amnesia the Dark Descent, Without Assets, Now Open Source
https://frictionalgames.com/2020-09-amnesia-is-now-open-source/
on October 11, 2020 11:02 AM

October 06, 2020

Vote In It!

Bryan Quigley

I just launched Voteinit.com, which focuses on information about ballot measures. It’s a series of simple tables showing which groups support which ballot measures in California. If anyone is interested in doing something similar for their state, town, or city, contributions are welcome on GitHub! My primary goal is to make it a little less overwhelming to go through 10+ ballot measures.

Please Vote In It!

on October 06, 2020 08:00 PM

October 03, 2020

Yesterday I got a fresh new Pixel 4a to replace my dying OnePlus 6. The OnePlus had developed some faults over time: it repeatedly lost connection to the AP and the network, and it had picked up a bunch of scratches and scuffs from falling on various surfaces without any protection over the past year.

Why get a Pixel?

Camera: OnePlus focuses on stuffing as many sensors as it can into a phone rather than on a good main sensor, resulting in pictures that are mediocre, blurry messes - the dreaded oil painting effect. Pixels have some of the best cameras in the smartphone world. Sure, other hardware is far more capable, but the Pixels manage consistent results, so you need to take fewer pictures because they don’t come out blurry half the time, and the post-processing is so good that the pictures you get are just great. Other phones can shoot better pictures, sure - on a tripod.

Security updates: Pixels provide 3 years of monthly updates, with security updates being published on the 5th of each month. OnePlus only provides updates every 2 months, and then the updates they do release are almost a month out of date, not counting that they are only 1st-of-month patches, meaning vendor blob updates included in the 5th-of-month updates are even a month older. Given that all my banking runs on the phone, I don’t want it to be constantly behind.

Feature updates: Of course, Pixels also get Beta Android releases and the newest Android release faster than any other phone, which is advantageous for Android development and being nerdy.

Size and weight: OnePlus phones keep getting bigger and bigger. By today’s standards, the OnePlus 6 at 6.18" and 177g is a small and lightweight device. Their latest phone, the Nord, has a 6.44" display and weighs 184g; the OnePlus 8 comes in at 180g with a 6.55" display. This is becoming unwieldy. Eschewing glass and aluminium for plastic, the Pixel 4a comes in at 144g.

First impressions

Accessories

The Pixel 4a comes in a small box with a charger, a USB-C to USB-C cable, a USB-OTG adapter and a SIM tray ejector. Unlike what we’ve grown accustomed to from Chinese manufacturers like OnePlus or Xiaomi, no pre-installed screen protector or bumper is provided. The SIM tray ejector has a circular end instead of the standard oval one - I assume so it looks like the ‘o’ in Google?

Google sells you fabric cases for 45€. That seems a bit excessive, although I like that a lot of it is recycled.

Haptics

Coming from a 6.18" phablet, the Pixel 4a with its 5.81" display feels tiny. In fact, it’s so tiny my thumb and my index finger can touch while holding it. Cute! The bezels are a bit bigger, resulting in a slightly lower screen-to-body ratio. The bottom chin is probably impracticably small; this was already a problem on the OnePlus 6, but this one is even smaller. Oh well, form over function.

The buttons on the side are very loud and clicky. As is the vibration motor. I wonder if this Pixel thinks it’s a Model M. It just feels great.

The plastic back feels really good, it’s that sort of high quality smooth plastic you used to see on those high-end Nokia devices.

The fingerprint reader is super fast. Setup just takes a few seconds per finger, and it works reliably. Other phones (OnePlus 6, Mi A1/A2) take like half a minute or a minute to set up.

Software

The software - stock Android 11 - is fairly similar to OnePlus’ OxygenOS. It’s a clean experience, without a ton of added bloatware (even OnePlus now ships Facebook out of the box, eww). It’s cleaner than OxygenOS in some ways - there are no duplicate photo apps, for example. On the other hand, it also has quite a bunch of Google stuff I could not care less about, like YT Music. To be fair, those are minor noise once all 130 apps were transferred from the old phone.

There are various things I miss coming from OnePlus such as off-screen gestures, network transfer rate indicator in quick settings, or a circular battery icon. But the Pixel has an always on display, which is kind of nice. Most of the cool Pixel features, like call screening or live transcriptions are unfortunately not available in Germany.

The display is set to show the same amount of content as my 6.18" OnePlus 6 did, so everything is a bit tinier. This usually takes me a week or two to adjust to, and then when I look at the OnePlus again I’ll be like “Oh, the font is huge”, but right now it feels a bit small on the Pixel.

You can configure three colour profiles for the Pixel 4a: Natural, Boosted, and Adaptive. I have mine set to Adaptive. I’d love to see stock Android learn what OnePlus has here: the ability to adjust the colour temperature manually, as I prefer to keep my devices closer to 5500K than 6500K, which I feel is a bit easier on the eyes. Or well, just give me the ability to load an ICM profile (though I’d need to calibrate the screen then - work!).

Migration experience

Restoring the apps from my old phone only restored settings for a handful out of 130, which is disappointing. I had to spend an hour or two logging in to all the other apps, and I had to fiddle far too long with openScale to get it to take its data over. It’s a mystery to me why people do not allow their apps to be backed up, especially something innocent like a weight tracking app. One of my banking apps restored its logins, which I did not really like. KeePass2Android settings were restored as well, but at least the key file was not restored.

I did not opt in to restoring my device settings, as I feel that restoring device settings when changing manufacturers is bound to mess up some things. For example, I remember people migrating to OnePlus phones and getting their old DND schedule without any way to change it, because OnePlus had hidden the DND settings. I assume that’s the reason some accounts, like my work GSuite account, were not migrated (it said it would migrate accounts during setup).

I’ve set up Bitwarden as my auto-fill service, so I could log in to most of my apps and websites using the stored credentials. I found that this often did not work. Chrome, for example, does autofill fine once, but if I then want to autofill again I have to kill and restart it, otherwise I don’t get the auto-fill menu. Other apps did not allow any auto-fill at all and only gave me the option to copy and paste. Yikes - auto-fill on Android still needs a lot of work.

Performance

It hangs a bit sometimes, but this was likely due to me having set 2 million iterations on my Bitwarden KDF, using Bitwarden a lot, and then opening up all 130 apps to log into them, which overwhelmed the phone a bit. Apart from that, it does not feel worse than the OnePlus 6, which was to be expected given that the benchmarks only show a slight loss in performance.

Photos do take a few seconds to process after taking them, which is annoying, but understandable given how much Google relies on computation to provide decent pictures.

Audio

The Pixel has dual speakers, with the earpiece delivering a tiny sound and the bottom firing speaker doing most of the work. Still, it’s better than just having the bottom firing speaker, as it does provide a more immersive experience. Bass makes this thing vibrate a lot. It does not feel like a resonance sort of thing, but you can feel the bass in your hands. I’ve never had this before, and it will take some time getting used to.

Final thoughts

This is a boring phone. There’s no wow factor at all. It’s neither huge, nor does it have high-res 48 or 64 MP cameras, nor does it have a ton of sensors. But everything it does, it does well. It does not pretend to be a flagship like its competition; it doesn’t want to wow you, it just wants to be the perfect phone for you. The build is solid, the buttons make you think of a Model M, the camera is one of the best in any smartphone, and you of course get the latest updates before anyone else. It does not feel like an “only 350€” phone, and yet it is. 128GB of storage is plenty, 1080p resolution is plenty, 12.2MP is … you guessed it, plenty.

The same applies to the other two Pixel phones - the 4a 5G and the 5. Neither is a particularly exciting phone, and I personally find it hard to justify spending 620€ on the Pixel 5 when the Pixel 4a does the job for me, but the 4a 5G might appeal to users looking for larger phones. As for 5G, I wouldn’t get much use out of it, seeing as it’s not available anywhere I am, because I’m on Vodafone. If you have a Telekom contract or live outside of Germany, you might just have good 5G coverage already and it might make sense to get a 5G phone rather than sticking to the budget choice.

Outlook

The big question for me is whether I’ll be able to adjust to the smaller display. I now have a tablet, so I’m less often using the phone (which my hands thank me for), which means that a smaller phone is probably a good call.

Oh, while we’re talking about calls - I only have a data-only SIM in it, so I could not test calling. I’m transferring to a new phone contract this month, and I’ll give it a go then. This will be the first time I get VoLTE and WiFi calling, although it is Vodafone, so quality might just be worse than Telekom on 2G, who knows. A big shoutout to congstar for letting me cancel with a simple button click, and to @vodafoneservice on Twitter for quickly setting up my benefits of an additional 5GB per month and a 10€ discount for being an existing cable customer.

I’m also looking forward to playing around with the camera (especially night sight) and eSIM. And I’m getting a case from China, which was handed over to the airline on Sep 17 according to AliExpress, so I guess it should arrive in the next few weeks. Oh, and the screen protector is not here yet, so I can’t really judge the screen quality much, as I still have the factory protection film on it, and that’s just a blurry mess - but good enough for setting it up. Please Google, pre-apply a screen protector on future phones and include a simple bumper case.

I might report back in two weeks when I have spent some more time with the device.

on October 03, 2020 11:16 AM

October 01, 2020

We are pleased to announce that the beta images for Lubuntu 20.10 have been released! While we have reached the bugfix-only stage of our development cycle, these images are not meant to be used in a production system. We highly recommend joining our development group or our forum to let us know about any issues. […]
on October 01, 2020 09:55 AM

This month I started working on ways to make hosting access easier for Debian Developers. I also did some work and planning for the MiniDebConf Online Gaming Edition, which we’ll likely announce within the next 1-2 days. Just a bunch of content that needs to be fixed and a registration bug to sort out, and then I think we’ll be ready to send out the call for proposals.

In the meantime, here are my package uploads and sponsoring for September:

2020-09-07: Upload package calamares (3.2.30-1) to Debian unstable.

2020-09-07: Upload package gnome-shell-extension-dash-to-panel (39-1) to Debian unstable.

2020-09-08: Upload package gnome-shell-extension-draw-on-your-screen (6.2-1) to Debian unstable.

2020-09-08: Sponsor package sqlobject (3.8.0+dfsg-2) for Debian unstable (Python team request).

2020-09-08: Sponsor package bidict (0.21.0-1) for Debian unstable (Python team request).

2020-09-11: Upload package catimg (2.7.0-1) to Debian unstable.

2020-09-16: Sponsor package gamemode (1.6-1) for Debian unstable (Games team request).

2020-09-21: Sponsor package qosmic (1.6.0-3) for Debian unstable (Debian Mentors / e-mail request).

2020-09-22: Upload package gnome-shell-extension-draw-on-your-screen (6.4-1) to Debian unstable.

2020-09-22: Upload package bundlewrap (4.2.0-1) to Debian unstable.

2020-09-25: Upload package gnome-shell-extension-draw-on-your-screen (7-1) to Debian unstable.

2020-09-27: Sponsor package libapache2-mod-python (3.5.0-1) for Debian unstable (Python team request).

2020-09-27: Sponsor package subliminal (2.1.0-1) for Debian unstable (Python team request).

on October 01, 2020 12:15 AM

September 29, 2020

This blog post is part two of a four part series

  1. Overview, summary and motivation
  2. Porting approach with various details, examples and problems I ran into along the way
  3. Performance optimizations
  4. Building Rust code into a C library as drop-in replacement

In this part I’ll go through the actual porting process of the libebur128 C code to Rust: the approach I’ve chosen, various examples, and a few problems I ran into along the way.

It will be rather technical. I won’t explain details of how the C code works but will only focus on the aspects that are relevant for porting to Rust; otherwise this blog post would become even longer than it already is.

Porting

With the warnings out of the way, let’s get started. As a reminder, the code can be found on GitHub and you can also follow the actual chronological porting process by going through the git history there. It’s not very different from what follows here, but it’s there in case you prefer looking at diffs.

Approach

The approach I’ve taken is basically the same one that Federico took for librsvg or that Joe Neeman took for nnnoiseless:

  1. Start with the C code and safe Rust bindings around the C API
  2. Look for a function or component very low in the call graph without dependencies on (too much) other C code
  3. Rewrite that code in Rust and add an internal C API for it
  4. Call the new internal C API for that new Rust code from the C code and get rid of the C implementation of the component
  5. Make sure the tests are still passing
  6. Go to 2. and repeat

Compared to what I did when porting the FFmpeg loudness normalization filter this has the advantage that at every step there is a working version of the code and you don’t only notice at the very end that somewhere along the way you made a mistake. At each step you can validate that what you did was correct and the amount of code to debug if something went wrong is limited.

Thanks to Rust having a good FFI story for interoperability with C in either direction, writing the parts of the code that are called from C or calling into C is not that much of a headache and not worse than actually writing C.

Rust Bindings around C Library

This step could’ve been skipped if all I cared about was having a C API for the ported code later, or if I wanted to work with the tests of the C library for validation and worry about calling it from Rust at a later point. In this case I had already done safe Rust bindings around the C library before, and having a Rust API made it much easier to write tests that could be used during the porting and that could be automatically run at each step.

bindgen

As a first step for creating the Rust bindings there needs to be a way to actually call into the C code. In C there are the header files with the type definitions and function declarations, but Rust can’t directly work from those. The solution in this case was bindgen, which basically converts the C header files into something that Rust can understand. The resulting API is still completely unsafe, but it can be used in a next step to write safe Rust bindings around it.

I would recommend using bindgen for any non-trivial C API for which there is no better translation tool available, or for which there is no machine-readable description of the API that could be used instead by another tool. Parsing C headers is no fun and there is very little information available in C for generating safe bindings. For GObject-based libraries, for example, using gir would be a better idea, as it works from a rich XML description of the API that contains information about e.g. ownership transfer and allows autogenerating safe Rust bindings in many cases.

Also the dependency on clang makes it hard to run bindgen as part of every build, so instead I’ve made sure that the code generated by bindgen is platform independent and included it inside the repository. If you use bindgen, please try to do the same. Requiring clang for building your crate makes everything more complicated for your users, especially if they’re unfortunate enough to use Windows.
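For reference, such a one-off generation step can be as simple as the following; the header and output paths here are assumptions for illustration, not necessarily the crate’s actual setup.

// Hypothetical helper (e.g. a small example binary) to regenerate the
// checked-in bindings on demand instead of at every build.
fn main() {
    let bindings = bindgen::Builder::default()
        .header("src/c/ebur128.h") // assumed location of the C header
        .generate()
        .expect("failed to generate bindings");

    bindings
        .write_to_file("src/ffi.rs") // assumed location of the checked-in bindings
        .expect("failed to write bindings");
}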

But back to the topic. What bindgen generates is basically a translation of the C header into Rust: type definitions and function declarations. This looks for example as follows

#[repr(C)]
#[derive(Debug, Copy, Clone)]
pub struct ebur128_state {
    pub mode: ::std::os::raw::c_int,
    pub channels: ::std::os::raw::c_uint,
    pub samplerate: ::std::os::raw::c_ulong,
    pub d: *mut ebur128_state_internal,
}

extern "C" {
    pub fn ebur128_init(
        channels: ::std::os::raw::c_uint,
        samplerate: ::std::os::raw::c_ulong,
        mode: ::std::os::raw::c_int,
    ) -> *mut ebur128_state;

    pub fn ebur128_destroy(st: *mut *mut ebur128_state);

    pub fn ebur128_add_frames_int(
        st: *mut ebur128_state,
        src: *const ::std::os::raw::c_int,
        frames: usize,
    ) -> ::std::os::raw::c_int;
}

Based on this it is possible to call the C functions directly from unsafe Rust code and access the members of all the structs. It requires working with raw pointers and ensuring that everything is done correctly at any point to not cause memory corruption or worse. It’s just like using the API from C with a slightly different syntax.
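Just to illustrate the shape of such calls, a direct use of the raw bindings could look like this; it is only a sketch, not code from the crate, and assumes the declarations above are in scope.

unsafe {
    // `0` is only a placeholder for the mode bitflags defined in the C header.
    let mut state = ebur128_init(2, 48_000, 0);
    assert!(!state.is_null());

    // One interleaved frame: one i32 sample per channel.
    let samples = [0i32, 0];
    let _ret = ebur128_add_frames_int(state, samples.as_ptr(), 1);

    // The destructor takes a pointer to the pointer and sets it to NULL.
    ebur128_destroy(&mut state);
}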

Build System

To be able to call into the C API, its implementation somehow has to be linked into your crate. As the C code later also has to be modified to call into the already-ported Rust functions instead of the original C code, it makes most sense to build it as part of the crate instead of linking to an external version of it.

This can be done with the cc crate. It is called from cargo‘s build.rs, which configures, among other things, which C files to compile and how. Once that is done it is possible to call any exported C function from the Rust code. The build.rs is not really complicated in this case

fn main() {
    cc::Build::new()
        .file("src/c/ebur128.c")
        .compile("ebur128");
}

Safe Rust API

With all that in place a safe Rust API around the unsafe C functions can be written now. How this looks in practice differs from API to API and might require some more thought in case of a more complex API to ensure everything is still safe and sound from a Rust point of view. In this case it was fortunately rather simple.

For example the struct definition, the constructor and the destructor (Drop impl) looks as follows based on what bindgen generated above

pub struct EbuR128(ptr::NonNull<ffi::ebur128_state>);

The struct is a simple wrapper around std::ptr::NonNull, which itself is a zero-cost wrapper around raw pointers that additionally ensures that the stored pointer is never NULL and allows additional optimizations to take place based on that.

In other words: the Rust struct is just a raw pointer but with additional safety guarantees.

impl EbuR128 {
    pub fn new(channels: u32, samplerate: u32, mode: Mode) -> Result<Self, Error> {
        static ONCE: std::sync::Once = std::sync::Once::new();

        ONCE.call_once(|| unsafe { ffi::ebur128_libinit() });

        unsafe {
            let ptr = ffi::ebur128_init(channels, samplerate as _, mode.bits() as i32);
            let ptr = ptr::NonNull::new(ptr).ok_or(Error::NoMem)?;
            Ok(EbuR128(ptr))
        }
    }
}

The constructor is slightly more complicated as it also has to ensure that the one-time initialization function is called exactly once. This requires using std::sync::Once as above.

After that it calls the C constructor with the given parameters. This can return NULL in various cases when not enough memory could be allocated, as described in the documentation of the C library. This needs to be handled gracefully here, and instead of panicking an error is returned to the caller. ptr::NonNull::new() returns an Option and yields None if NULL is passed. If this happens it is transformed into an error together with an early return via the ? operator.

In the end the pointer then only has to be wrapped in the struct and be returned.

impl Drop for EbuR128 {
    fn drop(&mut self) {
        unsafe {
            let mut state = self.0.as_ptr();
            ffi::ebur128_destroy(&mut state);
        }
    }
}

The Drop trait is used for defining what should happen if a value of the struct goes out of scope and what should be done to clean up after it. In this case this means calling the destroy function of the C library. It takes a pointer to a pointer to its state, which is then set to NULL. As such it is necessary to store the raw pointer in a local variable and pass a mutable reference to it. Otherwise the ptr::NonNull would end up with a NULL pointer inside it, which would result in undefined behaviour.

The last function that I want to mention here is the one that takes a slice of audio samples for processing

    pub fn add_frames_i32(&mut self, frames: &[i32]) -> Result<(), Error> {
        unsafe {
            if frames.len() % self.0.as_ref().channels as usize != 0 {
                return Err(Error::NoMem);
            }

            let res = ffi::ebur128_add_frames_int(
                self.0.as_ptr(),
                frames.as_ptr(),
                frames.len() / self.0.as_ref().channels as usize,
            );
            Error::from_ffi(res as ffi::error, || ())
        }
    }

Apart from calling the C function it is again necessary to check various pre-conditions before doing so. The C function will cause out of bounds reads if passed a slice that doesn’t contain a sample for each channel, so this must be checked beforehand or otherwise the caller (safe Rust code!) could cause out of bounds memory accesses.

In the end, after calling the function, its return value is converted into a Result, mapping any error codes to the crate’s own Error enum.
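Error::from_ffi itself is not shown in the snippets above; conceptually it is a small helper along these lines, where the variant names and the mapping of error codes are simplified placeholders rather than the crate’s actual definition.

#[derive(Debug, PartialEq, Eq)]
pub enum Error {
    NoMem,
    InvalidMode,
    // further variants corresponding to the remaining C error codes
}

impl Error {
    // Turn a C return code into a Result, lazily producing the Ok value.
    fn from_ffi<T>(res: ffi::error, func: impl FnOnce() -> T) -> Result<T, Error> {
        match res {
            0 => Ok(func()),        // EBUR128_SUCCESS
            1 => Err(Error::NoMem), // EBUR128_ERROR_NOMEM
            _ => Err(Error::InvalidMode),
        }
    }
}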

As can be seen here, writing safe Rust bindings around the C API requires reading the documentation of the C code and keeping all of Rust’s safety guarantees in mind, to ensure that it is impossible to violate them no matter what the caller passes into the functions.

Not having to read the documentation and still being guaranteed that the code can’t cause memory corruption is one of the big advantages of Rust (assuming there even is documentation, and that it mentions such details).

Replacing first function: Resampler

Once the safe Rust API is done, it is possible to write safe Rust code that makes use of the C code. Among other things, that allows writing tests to ensure that the ported code still does the same as the previous C code. But this will be the topic of a later section: writing tests is boring, porting code is more interesting, and that’s what I will start with.

To find the first function to port, I first read the C code to get a general idea of how the different pieces fit together and what the overall structure of the code is. Based on this I selected the resampler that is used in the true peak measurement. It is one of the leaf functions of the call graph, that is, it does not call into any other C code and is relatively independent of everything else. Unlike many other parts of the code it is also already factored out into separate structs and functions.

In the C code this can be found in the interp_create(), interp_destroy() and interp_process() functions. The resulting Rust code can be found in the interp module, which provides basically the same API in Rust except for the destroy() function which Rust provides for free, and corresponding extern "C" functions that can be called from C.

The create() function is not very interesting from a porting point of view: it simply allocates some memory and initializes it. The C and Rust versions of it look basically the same. The Rust version is only missing the checks for allocation failures, as those can’t currently be handled in Rust, and handling allocation failures the way the C code does is almost useless with modern operating systems and overcommitting allocators.

The struct definition is not too interesting either; it is approximately the same as the one in C, except that pointers to arrays are replaced by Vecs and their lengths are taken directly from the Vec instead of being stored separately. In a later version the Vecs were replaced by boxed slices and SmallVec, but that shouldn’t be a concern here for now.

The main interesting part here is the processing function and how to provide all the functions to the C code.

Processing Function: Using Iterators

The processing function, interp_process(), is basically 4 nested for loops over each frame in the input data, each channel in each frame, the interpolator factors and finally the filter coefficients.

unsigned int out_stride = interp->channels * interp->factor;

for (frame = 0; frame < frames; frame++) {
  for (chan = 0; chan < interp->channels; chan++) {
    interp->z[chan][interp->zi] = *in++;
    outp = out + chan;
    for (f = 0; f < interp->factor; f++) {
      acc = 0.0;
      for (t = 0; t < interp->filter[f].count; t++) {
        int i = (int) interp->zi - (int) interp->filter[f].index[t];
        if (i < 0) {
          i += (int) interp->delay;
        }
        c = interp->filter[f].coeff[t];
        acc += (double) interp->z[chan][i] * c;
      }
      *outp = (float) acc;
      outp += interp->channels;
    }
  }
  out += out_stride;
  interp->zi++;
  if (interp->zi == interp->delay) {
    interp->zi = 0;
  }
}

This could be written the same way in Rust, but slice indexing is not very idiomatic in Rust and using iterators is preferred as it leads to more declarative code. All the indexing would also lead to suboptimal performance due to the required bounds checks. So the main task here is to translate the code to iterators as much as possible.

Looking at the C code, the outer loop iterates over chunks of channels samples from the input and chunks of channels * factor samples from the output. This is exactly what the chunks_exact iterator on slices does in Rust. Similarly, the second outer loop iterates over all samples of the input chunks and over the two-dimensional array z, which has delay items per channel. On the Rust side I represented z as a flat, one-dimensional array for simplicity, so instead of indexing, the chunks_exact() iterator is used again for iterating over it.

This leads to the following for the two outer loops

for (src, dst) in src
    .chunks_exact(self.channels)
    .zip(dst.chunks_exact_mut(self.channels * self.factor)) {
    for (channel_idx, (src, z)) in src
        .iter()
        .zip(self.z.chunks_exact_mut(delay))
        .enumerate() {
        // insert more code here
    }
}

Apart from making it more clear what the data access patterns are, this is also less error prone and gives more information to the compiler for optimizations.

Inside this loop, a ringbuffer z / zi is used to store the incoming samples for each channel, keeping the last delay samples for further processing. We’ll keep this part as it is for now and use explicit indexing. While a VecDeque, or any similar data structure from other crates, could be used here, it would complicate the code and cause more allocations. I’ll revisit this piece of code in part 3 of this blog post.

The first inner loop now iterates over all filters (of which there are factor many) and over chunks of size channels from the output, for which the same translation as before is used. The second inner loop iterates over all coefficients of the filter and over the z ringbuffer, and sums up the product of corresponding elements; that sum is then stored in the output.

So overall the body of the second outer loop above with the two inner loops would look as follows

z[self.zi] = *src;

for (filter, dst) in self
    .filter
    .iter()
    .zip(dst.chunks_exact_mut(self.channels)) {
        let mut acc = 0.0;
        for (c, index) in &filter.coeff {
            let mut i = self.zi as i32 - *index as i32;
            if i < 0 {
                i += self.delay as i32;
            }
            acc += z[i as usize] as f64 * c;
        }

        dst[channel_idx] = acc as f32;
    }
}

self.zi += 1;
if self.zi == self.delay {
    self.zi = 0;
}

The full code after porting can be found here.

My general approach for porting C code with loops is what I did above: first try to understand the data access patterns and then find ways to express these with Rust iterators, and only if there is no obvious way use explicit indexing. And if the explicit indexing turns out to be a performance problem due to bounds checks, first try to reorganize the data so that direct iteration is possible (which usually also improves performance due to cache locality) and otherwise use some well-targeted usages of unsafe code. But more about that in part 3 of this blog post in the context of optimizations.

Exporting C Functions

To be able to call the above code from C, a C-compatible function needs to be exported for it. This involves working with unsafe code and raw pointers again as that’s the only thing C understands. The unsafe Rust code needs to assert the implicit assumptions that can’t be expressed in C and that the calling C code has to follow.

For the interpolator, functions with the same API as the previous C code are exported to keep the amount of changes minimal: create(), destroy() and process() functions.

#[no_mangle]
pub unsafe extern "C" fn interp_create(taps: u32, factor: u32, channels: u32) -> *mut Interp {
    Box::into_raw(Box::new(Interp::new(
        taps,
        factor,
        channels,
    )))
}

#[no_mangle]
pub unsafe extern "C" fn interp_destroy(interp: *mut Interp) {
    drop(Box::from_raw(interp));
}

The #[no_mangle] attribute on functions makes sure that the symbols are exported as is instead of being mangled by the compiler. extern "C" makes sure that the function has the same calling convention as a corresponding C function.

For the create() function the interpolator struct is wrapped in a Box, which is the most basic mechanism to do heap allocations in Rust. This is needed because the C code shouldn’t have to know about the layout of the Interp struct and should just handle it as an opaque pointer. Box::into_raw() converts the Box into a raw pointer that can be returned directly to C. This also passes ownership of the memory to C.

The destroy() function is doing the inverse of the above and calls Box::from_raw() to get a Box back from the raw pointer. This requires that the raw pointer passed in was actually allocated via a Box and is of the correct type, which is something that can’t really be checked. The function has to trust the C code to do this correctly. The standard approach to memory safety in C: trust that everybody is writing correct code.

After getting back the Box, it only has to be dropped and Rust will take care of deallocating any memory as needed.

#[no_mangle]
pub unsafe extern "C" fn interp_process(
    interp: *mut Interp,
    frames: usize,
    src: *const f32,
    dst: *mut f32,
) -> usize {
    use std::slice;

    let interp = &mut *interp;
    let src = slice::from_raw_parts(src, interp.channels * frames);
    let dst = slice::from_raw_parts_mut(dst, interp.channels * interp.factor * frames);

    interp.process(src, dst);

    interp.factor * frames
}

The main interesting part of the processing function is the usage of slice::from_raw_parts. The function again has to trust the C side that the pointers are correct and actually point to frames audio frames. In Rust a slice knows its size, so some conversion between the two is needed: a slice of the correct size has to be created around the pointer. This does not involve any copying of memory; it only stores the length information together with the raw pointer. This is also the reason why it’s not required to pass the length separately to the Rust version of the processing function.

With this the interpolator is fully ported and the C functions can be called directly from the C code. On the C side they are declared as follows and then called as before

typedef void * interpolator;

extern interpolator* interp_create(unsigned int taps, unsigned int factor, unsigned int channels);
extern void interp_destroy(interpolator* interp);
extern size_t interp_process(interpolator* interp, size_t frames, float* in, float* out);

The commit that did all this also adds some tests to ensure that everything still works correctly. It also contains some optimizations on top of the code above and is not 100% the same code.

Writing Tests

For testing that porting the code part by part to Rust doesn’t introduce any problems, I went for the common two-layer approach: 1. integration tests that check whether the whole thing still works correctly, and 2. unit tests for the rewritten component alone.

The integration tests come in two variants inside the ebur128 module: one variant just testing via assertions that the results on a fixed input are the expected ones, and one variant comparing the C and Rust implementations. The unit tests only come in the second variant for now.

To test the C implementation in the integration tests an old version of the crate that had no ported code yet is pulled in. For comparing the C implementation of the individual components, I extracted the C code into a separate C file that exported the same API as the corresponding Rust code and called both from the tests.

Comparing Floating Point Numbers

The first variant is not very interesting apart from the complications involved when comparing floating point numbers. For this I used the float_eq crate, which provides different ways of comparing floating point numbers.

The first variant of tests use it by checking if the absolute difference between the expected and actual result is very small, which is less strict than the ULP check used for the second variant of tests. Unfortunately this was required because depending on the used CPU and toolchain the results differ from the expected static results (generated with one CPU and toolchain), while for the comparison between the C and Rust implementation on the same CPU the results are basically the same.
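For illustration, such checks with float_eq look roughly like this; the concrete values and tolerances below are made-up placeholders, not the ones used in the real tests.

use float_eq::assert_float_eq;

#[test]
fn loudness_is_as_expected() {
    // Placeholder values standing in for actual measurement results.
    let expected = -0.691_f64;
    let from_rust = -0.691_000_1_f64;
    let from_c = -0.691_000_1_f64;

    // First variant: absolute-difference check against fixed expected values.
    assert_float_eq!(from_rust, expected, abs <= 0.000_1);

    // Second variant: stricter ULP-based comparison between the two implementations.
    assert_float_eq!(from_rust, from_c, ulps <= 2);
}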

quickcheck

quickcheck is a crate that allows writing randomized property tests. This seemed like the perfect tool for writing tests that compare the two implementations: the property to check for is equality of results, and it should hold for any possible input.

Using quickcheck is simple in principle. You write a test function that takes the inputs as parameters, do the processing and then check that the property you want to test for holds by either using assertions or returning a bool or TestResult.

#[quickcheck]
fn compare_c_impl_i16(signal: Signal<i16>) {
    let mut ebu = EbuR128::new(signal.channels, signal.rate, ebur128::Mode::all()).unwrap();
    ebu.add_frames_i16(&signal.data).unwrap();

    let mut ebu_c =
        ebur128_c::EbuR128::new(signal.channels, signal.rate, ebur128_c::Mode::all()).unwrap();
    ebu_c.add_frames_i16(&signal.data).unwrap();

    compare_results(&ebu, &ebu_c, signal.channels);
}

quickcheck will then generate random values for the function parameters via the Arbitrary impl of the given types and call it many times. If one run fails, it tries to find a minimal testcase that fails based on “shrinking” the initial failure case and then prints that failing, shrunk testcase.

And this is the part of using quickcheck that involves some more effort: writing a reasonable Arbitrary impl for the inputs that can also be shrunk in a useful way on failures.

For the tests here I came up with a Signal type. Its Arbitrary implementation creates an audio signal with 1-16 channels and multiple sine waves of different amplitudes and frequencies. Shrinking first tries to reproduce the problem with a single channel, and then by halving the signal length.

This worked well in practice so far. It doesn’t cover all possible inputs but should cover anything that can fail, and the simple shrinking approach also helped to find smallish testcases if something failed. But of course it’s not a perfect solution, only a practical one.

Based on these sets of tests I could be reasonably certain that the C and Rust implementation provide exactly the same results, so I could start porting the next part of the code.

True Peak: Macros vs. Traits

C doesn’t really have any mechanism for writing code that is generic over different types (other than void *), or any more advanced means of abstraction than functions and structs. For that reason the C code uses macros via the C preprocessor in various places to write code once for the different input types (i16, i32, f32 and f64, or in C terms short, int, float and double). The C preprocessor is just a fancy mechanism for string concatenation, so this is rather unpleasant to write and read, the results might not even be valid C code, and the resulting compiler errors are often rather confusing.

In Rust, macros could also be used for this. While cleaner than C macros thanks to macro hygiene rules and operating on a typed token tree instead of just strings, this would still end up as hard-to-write and hard-to-read code with possibly confusing compiler errors. For abstracting over different types, Rust provides traits. These allow writing code that is generic over different types with a well-defined interface, and they can do much more, but I won’t cover that here.

One example of macro usage in the C code is the input processing, of which the true peak measurement is one part. In C this basically looks as follows, with some parts left out because they’re not relevant for the true peak measurement itself

static void ebur128_check_true_peak(ebur128_state* st, size_t frames) {
  size_t c, i, frames_out;

  frames_out =
      interp_process(st->d->interp, frames, st->d->resampler_buffer_input,
                     st->d->resampler_buffer_output);

  for (i = 0; i < frames_out; ++i) {
    for (c = 0; c < st->channels; ++c) {
      double val =
          (double) st->d->resampler_buffer_output[i * st->channels + c];

      if (EBUR128_MAX(val, -val) > st->d->prev_true_peak[c]) {
        st->d->prev_true_peak[c] = EBUR128_MAX(val, -val);
      }
    }
  }
}

#define EBUR128_FILTER(type, min_scale, max_scale)                             \
  static void ebur128_filter_##type(ebur128_state* st, const type* src,        \
                                    size_t frames) {                           \
    static double scaling_factor =                                             \
        EBUR128_MAX(-((double) (min_scale)), (double) (max_scale));            \
    double* audio_data = st->d->audio_data + st->d->audio_data_index;          \
                                                                               \
    // some other code                                                         \
                                                                               \
    if ((st->mode & EBUR128_MODE_TRUE_PEAK) == EBUR128_MODE_TRUE_PEAK &&       \
        st->d->interp) {                                                       \
      for (i = 0; i < frames; ++i) {                                           \
        for (c = 0; c < st->channels; ++c) {                                   \
          st->d->resampler_buffer_input[i * st->channels + c] =                \
              (float) ((double) src[i * st->channels + c] / scaling_factor);   \
        }                                                                      \
      }                                                                        \
      ebur128_check_true_peak(st, frames);                                     \
    }                                                                          \
                                                                               \
    // some other code                                                         \
}

EBUR128_FILTER(short, SHRT_MIN, SHRT_MAX)
EBUR128_FILTER(int, INT_MIN, INT_MAX)
EBUR128_FILTER(float, -1.0f, 1.0f)
EBUR128_FILTER(double, -1.0, 1.0)

What the invocations of the macro at the bottom do is take the whole macro body and replace the usages of type, min_scale and max_scale accordingly. That is, one ends up with a function ebur128_filter_short() that works on a const short *, uses 32768.0 as scaling_factor, and correspondingly for the 3 other types.

To convert this to Rust, first a trait that provides all the required operations has to be defined and then implemented on the 4 numeric types that are supported as audio input. In this case, the only required operation is to convert the input values to an f32 between -1.0 and +1.0.

pub(crate) trait AsF32: Copy {
    fn as_f32(self) -> f32;
}

impl AsF32 for i16 {
    fn as_f32(self) -> f32 {
        self as f32 / -(i16::MIN as f32)
    }
}

// And the same for i32, f32 and f64
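// A sketch of what those remaining impls could look like; the exact
// conversions in the crate may differ slightly, but they follow the same
// min_scale / max_scale idea as the C macro:
impl AsF32 for i32 {
    fn as_f32(self) -> f32 {
        self as f32 / -(i32::MIN as f32)
    }
}

impl AsF32 for f32 {
    fn as_f32(self) -> f32 {
        self
    }
}

impl AsF32 for f64 {
    fn as_f32(self) -> f32 {
        self as f32
    }
}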

Once this trait is defined and implemented on the needed types the Rust function can be written generically over the trait

pub(crate) fn check_true_peak<T: AsF32>(&mut self, src: &[T], peaks: &mut [f64]) {
    assert!(src.len() <= self.buffer_input.len());
    assert!(peaks.len() == self.channels);

    for (o, i) in self.buffer_input.iter_mut().zip(src.iter()) {
        *o = i.as_f32();
    }

    self.interp.process(
        &self.buffer_input[..(src.len())],
        &mut self.buffer_output[..(src.len() * self.interp_factor)],
    );

    for (channel_idx, peak) in peaks.iter_mut().enumerate() {
        for o in self.buffer_output[..(src.len() * self.interp_factor)]
            .chunks_exact(self.channels) {
            let v = o[channel_idx].abs() as f64;
            if v > *peak {
              *peak = v;
            }
        }
    }
}

This is not a direct translation of the C code though. As part of rewriting the C code I also factored out the true peak detection from the filter function into its own function. It is called from the filter function shown in the C code a bit further above. This way it was easy to switch only this part from a C implementation to a Rust implementation while keeping the other parts of the filter, and to also test it separately from the whole filter function.

All this can be found in this commit together with tests and benchmarks. The overall code is a bit different than what is listed above, and also the latest version in the repository looks a bit different but more on that in part 3 of this blog post.

One last thing worth mentioning here is that the AsF32 trait is not public API of the crate and neither are the functions generic over the input type. Instead, the generic functions are only used internally and the public API only provides 4 functions that are specialized to the concrete input types. This keeps the API surface smaller and makes the API easier to understand for users.
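Conceptually, the public functions are thin wrappers that simply forward to a generic internal implementation, roughly like this; the internal function name below is made up for illustration.

impl EbuR128 {
    pub fn add_frames_i16(&mut self, frames: &[i16]) -> Result<(), Error> {
        self.add_frames_internal(frames)
    }

    pub fn add_frames_i32(&mut self, frames: &[i32]) -> Result<(), Error> {
        self.add_frames_internal(frames)
    }

    pub fn add_frames_f32(&mut self, frames: &[f32]) -> Result<(), Error> {
        self.add_frames_internal(frames)
    }

    pub fn add_frames_f64(&mut self, frames: &[f64]) -> Result<(), Error> {
        self.add_frames_internal(frames)
    }

    // Not part of the public API: generic over the internal AsF32 trait.
    fn add_frames_internal<T: AsF32>(&mut self, frames: &[T]) -> Result<(), Error> {
        // shared processing code for all input types
        unimplemented!()
    }
}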

Loudness History

The next component I ported to Rust was the loudness history data structure. This is used to keep a history of previous loudness measurements for giving longer-term results and it exists in two variants: a histogram-based one and a queue-based one with a maximum length.

As part of the history data structure there are also a couple of operations to calculate values from it, but I won’t go into details of them here. They were more or less direct translations of the C code.

In C this data structure and the operations on it were distributed all over the code and part of the main state struct, so the first step to port it over to Rust was to identify all those pieces and put them behind some kind of API.

This is also something that I noticed when porting the FFmpeg loudness normalization filter: because Rust makes it much less effort than C to define new structs, functions or even modules, it seems to often lead to more modular code with clear component boundaries, instead of everything being put together in the same place. Requirements from the borrow checker often also make it more natural to split components into separate structs and functions.

Check the commit for the full details but in the end I ended up with the following functions that are called from the C code

typedef void * history;

extern history* history_create(int use_histogram, size_t max);
extern void history_add(history* hist, double energy);
extern void history_set_max_size(history* hist, size_t max);
extern double history_gated_loudness(const history* hist);
extern double history_relative_threshold(const history* hist);
extern double history_loudness_range(const history* hist);
extern void history_destroy(history *hist);

Enums

In the C code the two variants of the history were implemented by having both of them always present in the state struct but only one of them initialized and then at every usage site having code like the following

if (st->d->use_histogram) {
    // histogram code follows here
} else {
    // queue code follows here
}

Doing it like this is error-prone and easy to forget, and having fields in the struct that are unused in certain configurations seems wasteful. In Rust this situation is naturally expressed with enums

enum History {
    Histogram(Histogram),
    Queue(Queue),
}

struct Histogram { ... }
struct Queue { ... }

This allows using the same storage for both variants, and at each usage site the compiler enforces that both variants are explicitly handled

match history {
    History::Histogram(ref hist) => {
        // histogram code follows here
    }
    History::Queue(ref queue) => {
        // queue code follows here
    }
}

Splitting it up like this with an enum also leads to implementing the operations of the two variants directly on their structs, with only the common code implemented directly on the History enum. This also improves readability because it’s immediately clear what a piece of code applies to.
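To make that concrete, the shared operations end up as methods on the enum that just dispatch to the variant-specific implementations, along these lines (a simplified sketch; the real methods take more parameters and there are more of them).

impl History {
    fn add(&mut self, energy: f64) {
        match *self {
            History::Histogram(ref mut hist) => hist.add(energy),
            History::Queue(ref mut queue) => queue.add(energy),
        }
    }

    fn gated_loudness(&self) -> f64 {
        match *self {
            History::Histogram(ref hist) => hist.gated_loudness(),
            History::Queue(ref queue) => queue.gated_loudness(),
        }
    }
}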

Logarithms

As mentioned in the overview already, there are some portability concerns in the C code. One of them showed up when porting the history and comparing the results of the ported code with the C code. This resulted in the following rather ugly code

fn energy_to_loudness(energy: f64) -> f64 {
    #[cfg(feature = "internal-tests")]
    {
        10.0 * (f64::ln(energy) / f64::ln(10.0)) - 0.691
    }
    #[cfg(not(feature = "internal-tests"))]
    {
        10.0 * f64::log10(energy) - 0.691
    }
}

In the C code, ln(x) / ln(10) is used everywhere for calculating the base-10 logarithm. Mathematically that’s the same thing, but in practice it unfortunately isn’t, and the explicit log10() function is both faster and more accurate. Unfortunately it’s not available everywhere in C (it’s available since C99), so it was not used in the C code. In Rust it is always available, so I first used it unconditionally.

When running the tests later, they failed because the results of the C code were slightly different from the results of the Rust code. In the end I tracked it down to the usage of log10(), so for now the slower and less accurate version is used when comparing the two implementations.
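If you want to see the difference for yourself, a few lines are enough to print how far apart the two formulations are for some inputs; the values below are arbitrary examples.

fn main() {
    for x in [2.0_f64, 3.0, 1e-7, 123.456].iter() {
        let via_ln = f64::ln(*x) / f64::ln(10.0);
        let via_log10 = f64::log10(*x);
        // Any difference is usually only in the last bits, but that is already
        // enough to make bit-exact comparisons between implementations fail.
        println!("{}: {:e} vs {:e} (diff {:e})", x, via_ln, via_log10, via_ln - via_log10);
    }
}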

Lazy Initialization

Another topic I already mentioned in the overview is one-time initialization. For using the histogram efficiently it is necessary to calculate the values at the edges of each histogram bin as well as the center values. These values are always the same, so they could be calculated once up-front. The C code calculated them whenever a new instance was created.

In Rust, one could build something around std::sync::Once together with static mut variables for storing the data, but that would not be very convenient and would also require using some unsafe code, as static mut variables are inherently unsafe. Instead this can be simplified with the lazy_static or once_cell crates, and the API of the latter is also available now as part of the Rust standard library in the nightly versions.

Here I used lazy_static, which leads to the following code

lazy_static::lazy_static! {
    static ref HISTOGRAM_ENERGIES: [f64; 1000] = {
        let mut energies = [0.0; 1000];

        for (i, o) in energies.iter_mut().enumerate() {
            *o = f64::powf(10.0, (i as f64 / 10.0 - 69.95 + 0.691) / 10.0);
        }

        energies
    };

    static ref HISTOGRAM_ENERGY_BOUNDARIES: [f64; 1001] = {
        let mut boundaries = [0.0; 1001];

        for (i, o) in boundaries.iter_mut().enumerate() {
            *o = f64::powf(10.0, (i as f64 / 10.0 - 70.0 + 0.691) / 10.0);
        }

        boundaries
    };
}

On first access to e.g. HISTOGRAM_ENERGIES the corresponding code would be executed and from that point onwards it would be available as a read-only array with the calculated values. In practice this later turned out to cause performance problems, but more on that in part 3 of this blog post.

Another approach for calculating these constant numbers would be to calculate them at compile-time via const functions. This is almost possible with Rust now; the only part missing is a const variant of f64::powf(). It is not available as a const function in C++ either, so there is probably a deeper reason behind this. Otherwise the code would look exactly like the code above, except that the variables would be plain statics instead of static refs and all calculations would happen at compile-time.

In the latest version of the code, and until f64::powf() is available as a const function, I’ve decided to simply include a static array with the calculated values inside the code.

Data Structures

And the last topic for the history is an implementation detail of the queue-based implementation. As I also mentioned during the overview, the C code is using a linked-list-based queue, and this is exactly where it is used.

The queue stores one f64 value per entry, which means that in the end there is one heap allocation of 12 or 16 bytes per entry, depending on pointer size. That’s a lot of very small allocations; each allocation is 50-100% bigger than the actual payload, and that’s ignoring any overhead from the allocator itself. Quite some memory is wasted this way, and because each value lives in a separate allocation and a pointer has to be followed to get to the next one, any operation over all values is not going to be very cache efficient.

As C doesn’t have any data structures in its standard library and this linked-list-based queue is readily available on at least the BSDs and Linux, it probably makes sense to use it there instead of implementing a more efficient data structure inside the code. But it still seems really suboptimal for this use-case.

In Rust the standard library provides a ringbuffer-based VecDeque, which offers exactly the API that is needed here, stores all values tightly packed, and thus doesn’t waste any memory per value while at the same time providing better cache efficiency. And it is available everywhere the Rust standard library is available, unlike the BSD queue used by the C implementation.
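A minimal sketch of such a bounded queue on top of VecDeque might look like this; the real Queue type in the crate tracks more state, this only shows the push behaviour.

use std::collections::VecDeque;

struct Queue {
    queue: VecDeque<f64>,
    max: usize,
}

impl Queue {
    fn new(max: usize) -> Self {
        Queue {
            queue: VecDeque::new(),
            max,
        }
    }

    fn add(&mut self, energy: f64) {
        // Drop the oldest value once the maximum length is reached.
        if self.queue.len() == self.max {
            self.queue.pop_front();
        }
        self.queue.push_back(energy);
    }
}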

In practice, apart from the obvious savings in memory, this also caused the Rust implementation without any further optimizations to take only 50%-70% of the time that the C implementation took, depending on the operation.

Filter: Flushing Denormals to Zero

Overall porting the filter function from C was the same as everything mentioned before so I won’t go into details here. The whole commit porting it can be found here.

There is only one aspect I want to focus on here: if available on x86/x86-64, the MXCSR register temporarily gets the _MM_FLUSH_ZERO_ON bit set to flush denormal floating point numbers to zero. That is, denormals (i.e. very small numbers close to zero) resulting from any floating point operation are considered to be zero. If hardware support is not available, the values kept for the next call are manually set to zero at the end of each filter call if they contain denormals.

This is done in the C code for performance reasons. Operations on denormals are generally much slower than on normalized floating point numbers and it has a measurable impact on the performance in this case.

In Rust this had to be replicated. Not only for the performance reasons but also because otherwise the results of both implementations would be slightly different and comparing them in the tests would be harder.

On the C side, this requires some build system integration and usage of the C preprocessor to decide whether the hardware support for this can be used or not, and then some conditional code that is used in the EBUR128_FILTER macro that was shown a few sections above already. Specifically this is the code

#if defined(__SSE2_MATH__) || defined(_M_X64) || _M_IX86_FP >= 2
#include <xmmintrin.h>
#define TURN_ON_FTZ                                                            \
  unsigned int mxcsr = _mm_getcsr();                                           \
  _mm_setcsr(mxcsr | _MM_FLUSH_ZERO_ON);
#define TURN_OFF_FTZ _mm_setcsr(mxcsr);
#define FLUSH_MANUALLY
#else
#warning "manual FTZ is being used, please enable SSE2 (-msse2 -mfpmath=sse)"
#define TURN_ON_FTZ
#define TURN_OFF_FTZ
#define FLUSH_MANUALLY                                                         \
  st->d->v[c][4] = fabs(st->d->v[c][4]) < DBL_MIN ? 0.0 : st->d->v[c][4];      \
  st->d->v[c][3] = fabs(st->d->v[c][3]) < DBL_MIN ? 0.0 : st->d->v[c][3];      \
  st->d->v[c][2] = fabs(st->d->v[c][2]) < DBL_MIN ? 0.0 : st->d->v[c][2];      \
  st->d->v[c][1] = fabs(st->d->v[c][1]) < DBL_MIN ? 0.0 : st->d->v[c][1];
#endif

This is not really that bad and my only concern here would be that it’s relatively easy to forget calling TURN_OFF_FTZ once the filter is done. This would then affect all future floating point operations outside the filter and potentially cause a lot of problems. This blog post gives a nice example of an interesting bug caused by this and shows how hard it was to debug it.

When porting this to more idiomatic Rust, this problem does not exist anymore.

This is the Rust implementation I ended up with

#[cfg(all(
    any(target_arch = "x86", target_arch = "x86_64"),
    target_feature = "sse2"
))]
mod ftz {
    #[cfg(target_arch = "x86")]
    use std::arch::x86::{_mm_getcsr, _mm_setcsr, _MM_FLUSH_ZERO_ON};
    #[cfg(target_arch = "x86_64")]
    use std::arch::x86_64::{_mm_getcsr, _mm_setcsr, _MM_FLUSH_ZERO_ON};

    pub struct Ftz(u32);

    impl Ftz {
        pub fn new() -> Option<Self> {
            unsafe {
                let csr = _mm_getcsr();
                _mm_setcsr(csr | _MM_FLUSH_ZERO_ON);
                Some(Ftz(csr))
            }
        }
    }

    impl Drop for Ftz {
        fn drop(&mut self) {
            unsafe {
                _mm_setcsr(self.0);
            }
        }
    }
}

#[cfg(not(any(all(
    any(target_arch = "x86", target_arch = "x86_64"),
    target_feature = "sse2"
))))]
mod ftz {
    pub struct Ftz;

    impl Ftz {
        pub fn new() -> Option<Self> {
            None
        }
    }
}

While a bit longer, it is also mostly whitespace. The important part to notice here is that when using the hardware support, a struct with a Drop impl is returned and once this struct is leaving the scope it would reset the MXCSR register again to its previous value. This way it can’t be forgotten and would also be reset as part of stack unwinding in case of panics.

On the usage side this looks as follows

let ftz = ftz::Ftz::new();

// all the calculations

if ftz.is_none() {
    // manual flushing of denormals to zero
}

No macros are required in Rust for this and all the platform-specific code is nicely abstracted away in a separate module. In the future support for this on e.g. ARM could be added and it would require no changes anywhere else, just the addition of another implementation of the Ftz struct.
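For completeness, the manual fallback branch itself is conceptually just a comparison against the smallest normal value, something like the following sketch; the real code operates on the per-channel filter state.

fn flush_denormals(values: &mut [f64]) {
    for v in values.iter_mut() {
        // f64::MIN_POSITIVE is the smallest positive normal f64,
        // i.e. the equivalent of DBL_MIN in the C code above.
        if v.abs() < f64::MIN_POSITIVE {
            *v = 0.0;
        }
    }
}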

Making it bullet-proof

As Anthony Ramine quickly noticed and told me on Twitter, the above is not actually sufficient. For non-malicious code using the Ftz type everything is alright: on every return path, including panics, the register would be reset again.

However malicious (or simply very confused?) code could make use of e.g. mem::forget(), Box::leak() or some other function to “leak” the Ftz value and cause the Drop implementation to never actually run and reset the register’s value. It’s perfectly valid to leak memory in safe Rust, so it’s not a good idea to rely on Drop implementations too much.

The solution for this can be found in this commit but the basic idea is to never actually give out a value of the Ftz type but only pass an immutable reference to safe Rust code. This then basically looks as follows

mod ftz {
    pub fn with_ftz<F: FnOnce(Option<&Ftz>) -> T, T>(func: F) -> T {
        unsafe {
            let ftz = Ftz::new();
            func(Some(&ftz))
        }
    }
}

ftz::with_ftz(|ftz| {
    // do things or check if `ftz` is None
});

This way it is impossible for any code outside the ftz module to leak the value and prevent resetting of the register.

Input Processing: Order of Operations Matters

The other parts of the processing code were relatively straightforward to port and not really different from anything I already mentioned above. However, as part of porting that code I ran into a problem that took quite a while to debug: once ported, the results of the C and Rust implementations were slightly different again.

I went through the affected code in detail and didn’t notice anything obvious. Both the C code and the Rust code were doing the same thing, so why were the results different?

This is the relevant part of the C code

size_t i;
double channel_sum;

channel_sum = 0.0;
if (st->d->audio_data_index < frames_per_block * st->channels) {
  for (i = 0; i < st->d->audio_data_index / st->channels; ++i) {
    channel_sum += st->d->audio_data[i * st->channels + c] *
                   st->d->audio_data[i * st->channels + c];
  }
  for (i = st->d->audio_data_frames -
           (frames_per_block - st->d->audio_data_index / st->channels);
       i < st->d->audio_data_frames; ++i) {
    channel_sum += st->d->audio_data[i * st->channels + c] *
                   st->d->audio_data[i * st->channels + c];
  }
} else {
  for (i = st->d->audio_data_index / st->channels - frames_per_block;
       i < st->d->audio_data_index / st->channels; ++i) {
    channel_sum += st->d->audio_data[i * st->channels + c] *
                   st->d->audio_data[i * st->channels + c];
  }
}

and the first version of the Rust code

let mut channel_sum = 0.0;

if audio_data_index < frames_per_block * channels {
    channel_sum += audio_data[..audio_data_index]
        .chunks_exact(channels)
        .map(|f| f[c] * f[c])
        .sum();

    channel_sum += audio_data
        [(audio_data.len() - frames_per_block * channels + audio_data_index)..]
        .chunks_exact(channels)
        .map(|f| f[c] * f[c])
        .sum();
} else {
    channel_sum += audio_data
        [(audio_data_index - frames_per_block * channels)..audio_data_index]
        .chunks_exact(channels)
        .map(|f| f[c] * f[c])
        .sum();
}

The difference between the two variants is the order of the floating point operations in the if branch. The C code sums up all values into the same accumulator, while the Rust code first sums each part into a separate accumulator and only then adds them together. I changed the Rust code to do exactly the same as the C code and that caused the tests to pass again.
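
To illustrate the fix, here is a sketch (not necessarily the exact code from the commit) of how the same accumulation order can be kept in Rust by folding both slices into a single accumulator, reusing the variables from the snippet above

let mut channel_sum = 0.0;

if audio_data_index < frames_per_block * channels {
    // Fold the first part directly into channel_sum ...
    channel_sum = audio_data[..audio_data_index]
        .chunks_exact(channels)
        .fold(channel_sum, |acc, f| acc + f[c] * f[c]);

    // ... and then continue folding the second part into the same value,
    // matching the order of additions in the C code.
    channel_sum = audio_data
        [(audio_data.len() - frames_per_block * channels + audio_data_index)..]
        .chunks_exact(channels)
        .fold(channel_sum, |acc, f| acc + f[c] * f[c]);
} else {
    channel_sum = audio_data
        [(audio_data_index - frames_per_block * channels)..audio_data_index]
        .chunks_exact(channels)
        .fold(channel_sum, |acc, f| acc + f[c] * f[c]);
}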

The order in which floating point operations are done matters, unfortunately, and in the example above the difference was big enough to cause the tests to fail. And the above is a nice practical example that shows that addition on floating point numbers is actually not associative.
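
For anyone who has not run into this before, here is a minimal standalone example (with made-up numbers, unrelated to the loudness code) that demonstrates the non-associativity

fn main() {
    let a = 1e16f64;
    let b = -1e16f64;
    let c = 1.0f64;

    // Summing left to right keeps the 1.0 ...
    assert_eq!((a + b) + c, 1.0);
    // ... while grouping differently loses it: the 1.0 is absorbed by -1e16
    // because the gap between representable f64 values at that magnitude is
    // larger than 1.0.
    assert_eq!(a + (b + c), 0.0);
}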

Last C Code: Replacing API Layer

The last step was the most satisfying one: getting rid of all the C code. This can be seen in this commit. Note that the performance numbers in the commit message are wrong. At that point both versions were much closer already performance-wise and the Rust implementation was even faster in some configurations.

Up to that point all the internal code was already ported to Rust and the only remaining C code was the public C API, which did some minor tasks and then called into the Rust code. Technically this was not very interesting so I won’t get into any details here. It doesn’t add any new insights and this blog post is already long enough!

If you check the git history, all commits that followed after this one were cleanups, addition of some new features, adding more tests, performance optimizations (see the next part of this blog post) and adding a C-compatible API around the Rust implementation (see the last part of this blog post). The main part of the work was done.

Difficulty and Code Size

With the initial porting out of the way, I can now also answer the first two questions I wanted to get answered as part of this project.

Porting the C code to Rust did not cause any particular difficulties. The main challenges were

  • Understanding what the C code actually does and mapping the data access patterns to Rust iterators for more idiomatic and faster code. This also has the advantage of making the code clearer than with the C-style for loops. Porting was generally a pleasant experience and the language did not get in the way when implementing such code.
    Also, by being able to port it incrementally I could always do a little bit during breaks and didn’t have to invest longer, contiguous blocks of time into the porting.
  • Refactoring the C code to be able to replace parts of it step by step with Rust implementations. This was complicated a bit by the fact that the C code did not factor out the different logical components nicely but instead kept everything entangled.
    From having worked a lot with C for more than 15 years, I wouldn’t say that this is because the code is bad (it is not!) but simply because C encourages writing code like this. Defining new structs or new functions seems like effort, and it’s even worse if you try to move code into separate files because then you also have to worry about a header file and keeping the code and the header file in sync. Rust simplifies this noticeably and the way the language behaves encourages splitting the code more into separate components.

Now for the size of the code. This is a slightly more complicated question. Rust and the default code style of rustfmt cause code to be spread out over more lines and to have more whitespace than structurally equivalent C code in the common C code styles. In my experience, Rust code visually looks much less dense than C code for this reason.

Intuitively I would say that I have written much less Rust code for the actual implementation than there was C code, but lacking any other metrics let’s take a look at the lines of code while ignoring tests and comments. I used tokei for this.

  • 1211 lines for the Rust code
  • 1741 lines for the C code. Of this, 494 lines are headers and 367 lines of the headers are the queue implementation. That is, there are 1247 lines of non-header C code.

This makes the Rust implementation only slightly smaller if we ignore the C headers. Rust allows writing more concise code, so I would have expected the difference to be bigger. At least partially this can probably be attributed to the different code formatting that causes Rust code to be less visually dense and as a result spread out over more lines than it otherwise would be.

In any case, overall I’m happy with the results so far.

I will look at another metric of code size in the last part of this blog post for some further comparison: the size of the compiled code.

Next Part

In the next part of this blog post I will describe the performance optimizations I did to make the Rust implementation at least as fast as the C one and the problems I ran into while doing so. The previous two parts of the blog post had nothing negative to say about Rust but this will change in the third part. The Rust implementation without any optimizations was already almost as fast as the C implementation thanks to how well idiomatic Rust code can be optimized by the compiler, but the last few percent were not as painless as one would hope. In practice the performance difference probably wouldn’t have mattered.

From looking at the git history and comparing the code, you will also notice that some of the performance optimizations already happened as part of the porting. The final code is not exactly what I presented above.

on September 29, 2020 04:00 PM

September 28, 2020

After a few weeks of development and testing, we are proud to finally announce that Git protocol v2 is available at Launchpad! But what are the improvements in the protocol itself, and how can you benefit from that?

The git v2 protocol was released a while ago, in May 2018, with the intent of simplifying git over HTTP transfer protocol, allowing extensibility of git capabilities, and reducing the network usage in some operations.

For the end user, the main clear benefit is the bandwidth reduction: in the previous version of the protocol, when one does a “git pull origin master”, for example, even if you have no new commits to fetch from the remote origin, the git server would first “advertise” to the client all available refs (branches and tags). In big repositories with hundreds or thousands of refs, this simple handshake operation could consume a lot of bandwidth and time to communicate a bunch of data that the client would potentially discard afterwards.

In the v2 protocol, this waste is no longer present: the client now has the ability to filter which refs it wants to know about before the server starts advertising them.

The v2 protocol is not the default on git clients yet, but if you are using a git version higher than 2.19, you can use v2: simply run git config --global protocol.version 2, and you will be using the most recent protocol version when communicating with servers that support this version. Including Launchpad, of course.

And even if you have repositories hosted on a server that is not yet compatible with v2, don’t worry: the git client is backward compatible. If the server does not support v2, the client should fall back gracefully to the previous version and everything should continue to work as expected. We hope you enjoy the new feature. And let us know if you have any feedback!

on September 28, 2020 01:26 PM

September 25, 2020

The Linux 5.9-rc6 kernel source contains over 300,000 literal strings used in kernel messages of various sorts (errors, warnings, etc) and it is no surprise that typos and spelling mistakes slip into these messages from time to time.

To catch spelling mistakes I run a daily automated job that fetches the tip from linux-next and runs a fast spelling checker tool that finds all spelling mistakes and then diffs these against the results from the previous day.  The diff is emailed to me and I put my kernel janitor hat on, fix these up and send the fixes to the upstream developers and maintainers.

The spelling checker tool is a fast-and-dirty C parser that finds literal strings and also variable names and checks these against a US English dictionary containing over 100,000 words. As a fun weekend side project I hand-optimized the checker to be able to parse and spell check several million lines of kernel C code per second.

Every 3 or so months I collate all the fixes I've made and where appropriate I add new spelling mistake patterns to the kernel checkpatch spelling dictionary.  Kernel developers should in practice run checkpatch.pl on their patches before submitting them upstream, and hopefully the dictionary will catch a lot of the regular spelling mistakes.

Over the past couple of years I've seen fewer spelling mistakes creep into the kernel, either because folk are running checkpatch more nowadays and/or because the dictionary is now able to catch more spelling mistakes.  As it stands, this is good as it means less work to fix these up.

Spelling mistakes may be trivial fixes, but cleaning these up helps make the kernel errors appear more professional and can also help clear up some ambiguous messages.

on September 25, 2020 12:11 PM

Launchpad still requires Python 2, which in 2020 is a bit of a problem. Unlike a lot of the rest of 2020, though, there’s good reason to be optimistic about progress.

I’ve been porting Python 2 code to Python 3 on and off for a long time, from back when I was on the Ubuntu Foundations team and maintaining things like the Ubiquity installer. When I moved to Launchpad in 2015 it was certainly on my mind that this was a large body of code still stuck on Python 2. One option would have been to just accept that and leave it as it is, maybe doing more backporting work over time as support for Python 2 fades away. I’ve long been of the opinion that this would doom Launchpad to being unmaintainable in the long run, and since I genuinely love working on Launchpad - I find it an incredibly rewarding project - this wasn’t something I was willing to accept. We’re already seeing some of our important dependencies dropping support for Python 2, which is perfectly reasonable on their terms but which is starting to become a genuine obstacle to delivering important features when we need new features from newer versions of those dependencies. It also looks as though it may be difficult for us to run on Ubuntu 20.04 LTS (we’re currently on 16.04, with an upgrade to 18.04 in progress) as long as we still require Python 2, since we have some system dependencies that 20.04 no longer provides. And then there are exciting new features like type hints and async/await that we’d like to be able to use.

However, until last year there were so many blockers that even considering a port was barely conceivable. What changed in 2019 was sorting out a trifecta of core dependencies. We ported our database layer, Storm. We upgraded to modern versions of our Zope Toolkit dependencies (after contributing various fixes upstream, including some substantial changes to Zope’s test runner that we’d carried as local patches for some years). And we ported our Bazaar code hosting infrastructure to Breezy. With all that in place, a port seemed more of a realistic possibility.

Still, even with this, it was never going to be a matter of just following some standard porting advice and calling it good. Launchpad has almost a million lines of Python code in its main git tree, and around 250 dependencies of which a number are quite Launchpad-specific. In a project that size, not only is following standard porting advice an extremely time-consuming task in its own right, but just about every strange corner case is going to show up somewhere. (Did you know that StringIO.StringIO(None) and io.StringIO(None) do different things even after you account for the native string vs. Unicode text difference? How about the behaviour of .union() on a subclass of frozenset?) Launchpad’s test suite is fortunately extremely thorough, but even just starting up the test suite involves importing most of the data model code, so before you can start taking advantage of it you have to make a large fraction of the codebase be at least syntactically-correct Python 3 code and use only modules that exist in Python 3 while still working in Python 2; in a project this size that turns out to be a large effort on its own, and can be quite risky in places.

Canonical’s product engineering teams work on a six-month cycle, but it just isn’t possible to cram this sort of thing into six months unless you do literally nothing else, and “please can we put all feature development on hold while we run to stand still” is a pretty tough sell to even the most understanding management. Fortunately, we’ve been able to grow the Launchpad team in the last year or so, and so it’s been possible to put “Python 3” on our roadmap on the understanding that we aren’t going to get all the way there in one cycle, while still being able to do other substantial feature development work as well.

So, with all that preamble, what have we done this cycle? We’ve taken a two-pronged approach. From one end, we identified 147 classes that needed to be ported away from some compatibility code in our database layer that was substantially less friendly to Python 3: we’ve ported 38 of those, so there’s clearly a fair bit more to do, but we were able to distribute this work out among the team quite effectively. From the other end, it was clear that it would be very inefficient to do general porting work when any attempt to even run the test suite would run straight into the same crashes in the same order, so I set myself a target of getting the test suite to start up, and started hacking on an enormous git branch that I never expected to try to land directly: instead, I felt free to commit just about anything that looked reasonable and moved things forward even if it was very rough, and every so often went back to tidy things up and cherry-pick individual commits into a form that included some kind of explanation and passed existing tests so that I could propose them for review.

This strategy has been dramatically more successful than anything I’ve tried before at this scale. So far this cycle, considering only Launchpad’s main git tree, we’ve landed 137 Python-3-relevant merge proposals for a total of 39552 lines of git diff output, keeping our existing tests passing along the way and deploying incrementally to production. We have about 27000 more lines of patch at varying degrees of quality to tidy up and merge. Our main development branch is only perhaps 10 or 20 more patches away from the test suite being able to start up, at which point we’ll be able to get a buildbot running so that multiple developers can work on this much more easily and see the effect of their work. With the full unlanded patch stack, about 75% of the test suite passes on Python 3! This still leaves a long tail of several thousand tests to figure out and fix, but it’s a much more incrementally-tractable kind of problem than where we started.

Finally: the funniest (to me) bug I’ve encountered in this effort was the one I encountered in the test runner and fixed in zopefoundation/zope.testrunner#106: IDs of failing tests were written to a pipe, so if you have a test suite that’s large enough and broken enough then eventually that pipe would reach its capacity and your test runner would just give up and hang. Pretty annoying when it meant an overnight test run didn’t give useful results, but also eloquent commentary of sorts.

on September 25, 2020 11:01 AM

September 23, 2020

8 years of my work on AArch64

Marcin Juszkiewicz

Back in 2012 AArch64 was something new and still unknown. There was no toolchain support (so no gcc, binutils or glibc). And I got assigned to get some stuff running around it.

OpenEmbedded

As there was no hardware, cross compilation was the only way. Which meant OpenEmbedded, as we wanted to have a wide selection of software available.

I learnt how to use modern OE (with OE Core and layers) by building images for ARMv7 and checking them on some boards I had floating around my desk.

Non-public toolchain work

Some time later the first non-public patches for binutils and gcc arrived in my inbox. Then the eglibc ones. So I started building, and on 12th September 2012 I was able to build helloworld:

12:38 hrw@puchatek:aarch64-oe-linux$ ./aarch64-oe-linux-gcc ~/devel/sources/hello.c -o hello
12:38 hrw@puchatek:aarch64-oe-linux$ file hello
hello: ELF 64-bit LSB executable, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.39, not stripped
12:39 hrw@puchatek:aarch64-oe-linux$ objdump -f hello

hello:     file format elf64-littleaarch64
architecture: aarch64, flags 0x00000112: 
EXEC_P, HAS_SYMS, D_PAGED 
start address 0x00000000004003e0

Then images followed. Several people at Linaro (and outside) used those images to test misc things.

At that moment we ran ARMv8 Fast Models (a quite slow system emulator from Arm). There was a joke that Arm developers formed a queue for single-core 10 GHz x86-64 CPUs to get AArch64 running faster.

Toolchain became public

Then 1st October 2012 came. I entered the Linaro office in Cambridge for an AArch64 meeting and was greeted with the news that the glibc patches had gone to a public mailing list. So I rebased my OpenEmbedded repository, updated the patches, removed any traces of the non-public ones and published the whole work.

Building on AArch64

My work above added support for AArch64 as a target architecture. But can it be used as a host? One day I decided to check and ran OpenEmbedded on AArch64.

After one small patch it worked fine.

X11 anyone?

As I had access to the Arm Fast Model I was able to play with graphics. So one day in January 2013 I did a build and started Xorg. Over the next years I had fun whenever people wrote that they got X11 running on their AArch64 devices ;D

Two years later I had an Applied Micro Mustang at home (I still have it). Once it had working PCI Express support I added a graphics card and started X11 on real hardware.

Then I went debugging why Xorg required a configuration file, and one day, with help from Dave Airlie, Mark Salter and Matthew Garrett, I got two solutions for the problem. I do not remember whether any of them went upstream, but some time later the problem was solved.

A few years later I met Dave Airlie at Linux Plumbers. We introduced ourselves and he said “ah, you are the ‘arm64 + radeon guy’” ;D

AArch64 Desktop week

One day in September 2015 I had an idea. PCIe worked, USB too. So I did an AArch64 desktop week: I connected monitors, keyboard, mouse and speakers and used the Mustang instead of my x86-64 desktop.

It was fun.

Distributions

First we had nothing. Then I added an AArch64 target to OpenEmbedded.

The same month Arm released the Foundation model, so anyone was able to play with an AArch64 system. No screen, just storage, serial and network, but it was enough for some to even start building whole distributions like Debian, Fedora, OpenSUSE and Ubuntu.

At that moment several patches were shared by all distributions, as that was faster than waiting for upstreams. I saw multiple versions of some of them during my journey of fixing packages in some distributions.

Debian and Ubuntu

In February 2013 the Debian/Ubuntu team presented their AArch64 port. It was their first architecture bootstrapped without using external toolchains. The work was done in Ubuntu due to its different approach to development compared to Debian. All work was merged back, so some time later Debian also had an AArch64 port.

Fedora

The Fedora team started early — in October 2012, right after the toolchain became public. They used Fedora 17 packages and switched to Fedora 19 during the work.

When I joined Red Hat in September 2013 one of my duties was fixing packages in Fedora to get them built on AArch64.

OpenSUSE

In January 2014 the first versions of QEMU support arrived and people moved away from using the Foundation model. In March/April the OpenSUSE team did a massive amount of builds to get their distribution built that way.

RHEL

The Fedora bootstrap also meant a RHEL 7 bootstrap. When I joined Red Hat there were images ready to use in models. My work was testing them and fixing packages. There were multiple times when an AArch64 fix also helped the build on the ppc64le and s390x architectures.

Hardware I played with

The first Linux-capable hardware was announced in June 2013. I got access to it at Red Hat. Building and debugging was much faster than using Fast Models ;D

Applied Micro Mustang

Soon Applied Micro Mustangs were everywhere. Distributions used them to build packages etc. Even without support for half of the hardware (no PCI Express, no USB).

I got one in June 2014, running UEFI firmware out of the box. In the first months I had a feeling that the firmware was developed at Red Hat, as we often had fresh versions right after the first patches for missing hardware functionality were written. In reality it was maintained by Applied Micro, and we had access to the sources, so there were some internal changes in testing (that’s why I had firmware versions like ‘0.12-rh’).

All those graphics cards I collected to test how PCI Express works. Or testing USB before it was even merged into the mainline Linux kernel. Or using virtualization for development of armhf build fixes (8 cores, 12 gigabytes of RAM and plenty of storage beat all the ARMv7 hardware I had).

I stopped using Mustang around 2018. It is still under my desk.

For those who still use one: make sure you have the 3.06.25 firmware.

96boards

In February 2015 Linaro announced the 96boards initiative. The plan was to make small, unified SBCs with different Arm chips, both 32- and 64-bit ones.

The first ones were the ‘Consumer Edition’: small, limited to basic connectivity. Now there are tens of them. 32-bit, 64-bit, FPGA etc. Choose your poison ;D

The second ones were the ‘Enterprise Edition’. A few attempts existed; most of them did not survive the prototype phase. There was a joke that the full-length PCI Express slot and two USB ports requirements were there because I wanted to have an AArch64 desktop ;D

Too bad that nothing worth using came from the EE spec.

Servers

As a Linaro assignee I have access to several servers from Linaro members. Some are mass-market ones, some never made it to market. We had over a hundred X-Gene1 based systems (mostly as m400 cartridges in HPE Moonshot chassis) and shut them down in 2018 as they were getting more and more obsolete.

The main system I use for development is one of those ‘never went to mass-market’ ones. 46 CPU cores and 96 GB of RAM make it a nice machine for building container images and Debian packages, or for running virtual machines in OpenStack.

Desktop

For some time I was waiting for some desktop-class hardware to have a development box more up to date than the Mustang. Months turned into years. I no longer wait, as it looks like there will be no such thing.

Solidrun has made some attempts in this area, first with the Macchiatobin and later with the Honeycomb. I did not use any of them.

Cloud

When I (re)joined Linaro in 2016 I became part of a team working on getting OpenStack to work on AArch64 hardware. We used the Liberty, Mitaka and Newton releases, then changed the way we worked and started contributing more. And more: Kolla, Nova, Dib and other projects. We added aarch64 nodes to the OpenDev CI.

The effect of it was the Linaro Developer Cloud, used by hundreds of projects to speed up their aarch64 porting, tens of projects hosting their CI systems there, etc.

Two years later Amazon started offering aarch64 nodes in AWS.

Summary

I spent half of my Arm time on AArch64. I had great moments, like building helloworld as one of the first people outside of Arm Ltd. I got involved in far more projects than I ever thought I would. I met new friends and visited several places in the world I would probably never have gone to otherwise.

I also got grumpy and complained far too many times that the AArch64 market is ‘cheap but limited SBCs or fast but expensive servers and nearly nothing in between’. I wrote some posts about the missing systems targeting software developers and lost hope that such a thing will happen.

NOTE: this is about 8 years of my work on AArch64. I have worked with Arm since 2004.

on September 23, 2020 03:33 PM

September 22, 2020

Over the last few weeks I ported the libebur128 C library to Rust, both with a proper Rust API as well as a 100% compatible C API.

This blog post will be split into 4 parts that will be published over the next few weeks

  1. Overview and motivation
  2. Porting approach with various details, examples and problems I ran into along the way
  3. Performance optimizations
  4. Building Rust code into a C library as drop-in replacement

If you’re only interested in the code, that can be found on GitHub and in the ebur128 crate on crates.io.

The initial versions of the ebur128 crate were built around the libebur128 C library (and included its code for ease of building); version 0.1.2 and newer are the pure Rust implementation.

EBU R128

libebur128 implements the EBU R128 loudness standard. The Wikipedia page gives a good summary of the standard, but in short it describes how to measure loudness of an audio signal and how to use this for loudness normalization.

While this intuitively doesn’t sound very complicated, there are lots of little details (like how human ears actually work) that make this not as easy as one might expect. This results in there being many different ways of measuring loudness and is one of the reasons why this standard was introduced. Of course it is also not the only standard for this.

libebur128 is also the library that I used in the GStreamer loudness normalization plugin, about which I wrote a few weeks ago already. By porting the underlying loudness measurement code to Rust, the only remaining C dependency of that plugin is GStreamer itself.

Apart from that it is used by FFmpeg (although they include their own modified copy), as well as by many other projects that need some kind of loudness measurement and don’t use ReplayGain, another older but widely used standard for the same problem.

Why?

Before going over the details of what I did, let me first explain why I did this work at all. libebur128 is a perfectly well working library, in wide use for a long time and probably rather bug-free at this point, and it was already possible to use the C implementation from Rust just fine. That’s what the initial versions of the ebur128 crate were doing.

My main reason for doing this was simply because it seemed like a fun little project. It isn’t a lot of code that changes often, so once ported it should be more or less finished and it shouldn’t be much work to stay in sync with the C version. I had already started thinking about doing this after the initial C-based ebur128 release, but reading Joe Neeman’s blog post about porting another C audio library (RNNoise) to Rust gave me the final push to actually start porting the code and to follow through until it was done.

However, don’t go around and ask other people to rewrite their projects in Rust (don’t be rude) or think that your own rewrite is magically going to be much faster and less buggy than the existing implementation. While Rust saves you from a big class of possible bugs, it doesn’t save you from yourself and usually rewrites contain bugs that didn’t exist in the original implementation. Also getting good performance in Rust requires, like in every other language, some effort. Before rewriting any software, think about the goals of this rewrite realistically as well as the effort required to actually get it finished.

Apart from fun there were also a few technical and non-technical reasons for me to look into this. I’m going to just list two here (curiosity and portability). I will skip the usual Rust memory-safety argument as that seems less important with this code: the C code has been widely used for a long time, is not changing a lot and has easy-to-follow memory access patterns. While it definitely had a memory safety bug (see below), it was rather difficult to trigger and it was fixed in the meantime.

Curiosity

Personally and at my company Centricular we try to do any new projects where it makes sense in Rust. While this worked very well in the past and we got great results, there were some questions for future projects that I wanted to get some answers, hard data and personal experience for

  • How difficult is it to port a C codebase function by function to Rust while keeping everything working along the way?
  • How difficult is it to get the same or better performance with idiomatic Rust code for low-level media processing code?
  • How much bigger or smaller is the resulting code and do Rust’s higher-level concepts like iterators help to keep code concise?
  • How difficult is it to create a C-compatible library in Rust with the same API and ABI?

I have some answers to all these questions already but previous work on this was not well structured and the results were also not documented, which I’m trying to change here now. Both to have a reference for myself in the future as well as for convincing other people that Rust is a reasonable technology choice for such projects.

As you can see, the general pattern of these questions is introducing Rust into an existing codebase, replacing existing components with Rust and writing new components in Rust, which also relates to my work on the Rust GStreamer bindings.

Portability

C is a very old language and while there is a standard, each compiler has its own quirks and each platform has different APIs on top of the bare minimum that the C standard defines. C itself is very portable, but it is not easy to write portable C code, especially when not using a library like GLib that hides these differences and provides basic data structures and algorithms.

This seems to be something that is often forgotten when the portability of C is given as an argument against Rust, and that’s the reason why I wanted to mention this here specifically. While you can get a C compiler basically everywhere, writing C code that also runs well everywhere is another story and C doesn’t make this easy by design. Rust on the other hand makes writing portable code quite easy in my experience.

In practice there were three specific issues I had for this codebase. Most of the advantages of Rust here are because it is a new language and doesn’t have to carry a lot of historical baggage.

Mathematical Constants and Functions

Mathematical constants are not actually part of any C standard. Most compilers just define M_PI (for π), M_E (for 𝖾) and others in math.h nonetheless, as they’re defined by POSIX and UNIX98.

Microsoft’s MSVC doesn’t, but instead you have to #define _USE_MATH_DEFINES before including math.h.

While not a big problem per se, it is annoying and indeed caused the initial version of the ebur128 Rust crate to not compile with MSVC because I forgot about it.

Similarly, which mathematical functions are available depends a lot on the target platform and which version of the C standard is supported. An example of this is the log10 function to calculate the base-10 logarithm. For portability reasons, libebur128 didn’t use it but instead calculated it via the natural logarithm (ln(x) / ln(10) = log10(x)), because log10 is only available in POSIX and since C99. While C99 is from 1999, there are still many compilers out there that don’t fully support it, again most prominently MSVC until very recently.

Using log10 instead of going via the natural logarithm is faster and more precise due to floating point number reasons, which is why the Rust implementation uses it but in C it would be required to check at build-time if the function is available or not, which complicates the build process and can easily be forgotten. libebur128 decided to not bother with these complications and simply not use it. Because of that, some conditional code in the Rust implementation is necessary for ensuring that both implementations return the same results in the tests.
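
As a small standalone illustration (with an arbitrary input value, not taken from the library), this is what the two ways of computing the base-10 logarithm look like in Rust. The results usually agree exactly or differ only in the last bits, but that is already enough to make bit-exact comparisons between the two implementations fail

fn main() {
    let x = 0.5f64;

    // What the Rust implementation can use directly.
    let direct = x.log10();
    // What the C implementation does for portability reasons.
    let via_ln = x.ln() / std::f64::consts::LN_10;

    println!("direct:     {:e}", direct);
    println!("via ln:     {:e}", via_ln);
    println!("difference: {:e}", (direct - via_ln).abs());
}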

Data Structures

libebur128 uses a linked-list-based queue data structure. As the C standard library is very minimal, no collection data structures are included. However on the BSDs and also on Linux with the GNU C library there is one available in sys/queue.h.

Of course MSVC does not have this and other compilers/platforms probably won’t have it either, so libebur128 included a local copy of that queue implementation. Now when building, one has to decide whether there is a system implementation available or otherwise use the internal version. Or simply always use the internal version.

Copying implementations of basic data structures and algorithms into every single project is ugly and error-prone, so let’s maybe not do that. C not having a standardized mechanism for dependency handling doesn’t help with this, which is unfortunately why this is very common in C projects.
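
In Rust this problem simply goes away: the standard library already ships suitable collection types. As a rough sketch (with assumed names and eviction policy, not the crate’s actual implementation), a bounded history like the one the C code keeps in its sys/queue.h based list could be built on std::collections::VecDeque

use std::collections::VecDeque;

// A bounded history of per-block loudness values.
struct History {
    max_len: usize,
    values: VecDeque<f64>,
}

impl History {
    fn new(max_len: usize) -> Self {
        History {
            max_len,
            values: VecDeque::with_capacity(max_len),
        }
    }

    // Push a new value, dropping the oldest one once the history is full.
    fn push(&mut self, v: f64) {
        if self.values.len() == self.max_len {
            self.values.pop_front();
        }
        self.values.push_back(v);
    }
}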

One-time Initialization

Thread-safe one-time initialization is another thing that is not defined by the C standard, and depending on your platform there are different APIs available for it or none at all. POSIX again defines one that is widely available, but you can’t really depend on it unconditionally.

This complicates the code and the build procedure, so libebur128 simply did not do that and did its one-time initialization of some global arrays every time a new instance was created. Which is probably fine, but a bit wasteful and, strictly speaking according to the C standard, probably not actually thread-safe.

The initial version of the ebur128 Rust crate side-stepped this problem by simply doing this initialization once with the API provided by the Rust standard library. See part 2 and part 3 of this blog post for some more details about this.
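
Just for illustration, and without claiming that this is the exact mechanism used in the crate, the general pattern with the standard library looks roughly like this

use std::sync::Once;

static INIT: Once = Once::new();

fn init_tables() {
    // Expensive one-time setup of global state would go here.
}

fn new_instance() {
    // Runs init_tables() at most once, even if many instances are created
    // concurrently from different threads; later calls are essentially free.
    INIT.call_once(init_tables);

    // ... per-instance setup ...
}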

Easier to Compile and Integrate

A Rust port only requires a Rust compiler; a mixed C/Rust codebase requires at least a C compiler in addition, and some kind of build system for the C code.

libebur128 uses CMake, which would be an additional dependency, so in the initial version of the ebur128 crate I went via cargo’s build.rs build scripts and the cc crate, as building libebur128 is easy enough. This works, but build scripts are problematic for integrating the Rust code into build systems other than cargo.
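
For illustration, such a build script is short. A sketch of roughly what the C-based versions did (the file names here are assumptions) would be

// build.rs
fn main() {
    // Compile the bundled C sources and link them into the crate.
    cc::Build::new()
        .file("ebur128/ebur128.c")
        .include("ebur128")
        .compile("ebur128");
}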

The Rust port also makes use of conditional compilation in various places. Unlike in C, with its preprocessor, non-standardized and inconsistent platform #defines, and the need to integrate everything into the build system in a custom way, Rust has a principled and well-designed approach to this problem. This makes it easier to keep the code clean, easier to maintain and more portable.

In addition to build system related simplifications, by not having any C code it is also much easier to compile the code to other targets like WebAssembly, which is natively supported by Rust. It is also possible to compile C to WebAssembly but getting both toolchains to agree with each other and produce compatible code seems not very easy.

Overview

As mentioned above, the code can be found on GitHub and in the ebur128 crate on crates.io.

The current version of the code produces the exact same results as the C version. This is enforced by the quickcheck tests that run randomized inputs through both versions and check that the results are the same. The code also passes all the tests in the EBU loudness test set, so it should hopefully be standards-compliant as long as the test implementation is not wrong.

Performance-wise the Rust implementation is at least as fast as the C implementation. In some configurations it’s a few percent faster but probably not enough that it actually matters in practice. There are various benchmarks for both versions in different configurations available. The benchmarks are based on the criterion crate, which uses statistical methods to give as accurate as possible results. criterion also generates nice results with graphs for making analysis of the results more pleasant. See part 3 of this blog post for more details.

Writing tests and benchmarks for Rust is so much easier and feels more natural than doing it in C, so the Rust implementation has quite good coverage of the different code paths now. Especially, no struggling with build systems was necessary like it would have been in C, thanks to cargo and Rust having built-in support. This alone seems to have the potential to cause Rust code to have, on average, better quality than similar code written in C.

It is also possible to compile the Rust implementation into a C library with the great cargo-c tool. This easily builds the code as a static/dynamic C library and installs the library, a C header file and also a pkg-config file. With this the Rust implementation is a 100% drop-in replacement of the C libebur128. It is not even necessary to recompile existing code. See part 4 of this blog post for more details.

Dependencies

Apart from the Rust standard library the Rust implementation depends on two other, small and widely used crates. Unlike with C, depending on external dependencies is rather simple with Rust and cargo. The two crates in question are

  • smallvec for dynamically sized vectors/arrays that can be stored on the stack up to a certain size and only then fall back to heap allocations. This allows avoiding a couple of heap allocations under normal usage.
  • bitflags, which provides a macro for implementing properly typed bitflags. This is used in the constructor of the main type for selecting the features and modes that should be enabled, which directly maps to how the C API works (just with less type-safety). A short sketch of what this looks like follows below.
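
The flag names in the following sketch are only illustrative, loosely modelled on the C API’s EBUR128_MODE_* constants, and not necessarily the crate’s actual Mode type

use bitflags::bitflags;

bitflags! {
    pub struct Mode: u32 {
        const M         = 0b0000_0001;
        const S         = 0b0000_0010;
        const I         = 0b0000_0100;
        const TRUE_PEAK = 0b0000_1000;
    }
}

fn main() {
    // Select integrated loudness and true peak measurement.
    let mode = Mode::I | Mode::TRUE_PEAK;

    assert!(mode.contains(Mode::I));
    assert!(!mode.contains(Mode::S));
}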

Unsafe Code

A common question when announcing a Rust port of some C library is how much unsafe code was necessary to reach the same performance as the C code. In this case there are two uses of unsafe code outside the FFI code that calls the C implementation in the tests/benchmarks and the C API.

Resampler

The True Peak measurement uses a resampler to upsample the audio signal to a higher sample rate. As part of the innermost loop of the resampler a statically sized ringbuffer is used.

As part of that ringbuffer, explicit indexing of a slice is needed. While the indices are already manually checked to wrap around when needed, the Rust compiler and LLVM can’t figure that out, so additional bounds checks plus panic handling are present in the compiled code. Apart from slowing down the loop with the additional condition, the panic code also causes the whole loop to be optimized less well.

So to get around that, unsafe indexing into the slice is used for performance reasons. While it requires a human now to check the memory safety of the code instead of relying on the compiler, the code in question is simple and small enough that it shouldn’t be a problem in practice.
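
The pattern in question looks roughly like the following sketch; the names and the exact wrapping logic are assumptions and not the actual resampler code

// Read from a fixed-size ring buffer at a manually wrapped index. The unsafe
// get_unchecked() skips the bounds check (and the panic path) that the
// compiler cannot prove to be redundant here.
fn ring_read(buf: &[f64; 128], pos: usize, offset: usize) -> f64 {
    let idx = (pos + offset) % buf.len();
    // SAFETY: idx is always smaller than buf.len() because of the modulo above.
    unsafe { *buf.get_unchecked(idx) }
}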

More on this in part 2 and part 3 of this blog post.

Flushing Denormals to Zero

The other use of unsafe code is in the filter that is applied to the incoming audio signal. On x86/x86-64 the MXCSR register temporarily gets the _MM_FLUSH_ZERO_ON bit set to flush denormal floating point numbers to zero. That is, denormals (i.e. very small numbers close to zero) resulting from any floating point operation are considered as zero.

This happens both for performance reasons as well as correctness reasons. Operations on denormals are generally much slower than on normalized floating point numbers. This has a measurable impact on the performance in this case.

Also, the C library does the same, and not flushing denormals to zero would lead to slightly different results. While this difference doesn’t matter in practice as it’s very, very small, it would make it harder to compare the results of both implementations as they wouldn’t be as close to each other anymore.

Doing this affects every floating point operation that happens while that bit is set, but because these are only the floating point operations performed by this crate and it’s guaranteed that the bit is unset again (even in case of panics) before leaving the filter, this shouldn’t cause any problems for other code.

Additional Features

Once the C library was ported and performance was comparable to the C implementation, I briefly went through the issues reported on the C library to check if there were any useful feature requests or bug reports that I could implement / fix in the Rust implementation. There were three, one of which I also wanted for a future project.

None of the new features are available via the C API at this point for compatibility reasons.

Resetting the State

For this one there was a PR already for the C library. Previously the only way to reset all measurements was to create a new instance, which involves new memory allocations, filter initialization, etc..

It’s easy enough to provide a reset method to do only the minimal work required to reset all measurements and restart with a fresh state so I’ve added that to the Rust implementation.

Fix set_max_window() to actually work

This was a bug introduced in the C implementation a while ago in an attempt to prevent integer overflows when calculating sizes of memory allocations, which then would cause memory safety bugs because less memory was allocated than expected. Accidentally this fix restricted the allowed values for the maximum window size too much. There is a PR for fixing this in the C implementation.

On the Rust side this bug also existed because I simply ported over the checks. If I hadn’t ported over the checks, or had ported an earlier version without the checks, there fortunately wouldn’t have been any memory safety bug on the Rust side; instead, one of two situations would have happened

  1. In debug builds integer overflows cause a panic, so instead of allocating less memory than expected while setting the parameters, there would’ve been an immediate panic rather than invalid memory accesses later.
  2. In release builds integer overflows simply wrap around for performance reasons. This would’ve caused less memory than expected to be allocated, and later there would’ve been a panic when trying to access memory outside the allocated area.

While a panic is also not nice, it at least leads to no undefined behaviour and prevents worse things from happening.

The proper fix in this case was to not restrict the maximum window size statically but to instead check for overflows during the calculations. This is the same as what the PR for the C implementation does, but on the Rust side this is much easier because of built-in operations like checked_mul for doing an overflow-checking multiplication. In C this requires some rather convoluted code (check the PR for details).
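
As a small sketch of the approach (the function and parameter names are made up and the real size calculation is more involved), an overflow-checked size calculation in Rust can look like this

// Returns None if any intermediate operation would overflow, instead of
// silently wrapping around and under-allocating.
fn audio_data_len(window_ms: usize, rate: usize, channels: usize) -> Option<usize> {
    window_ms
        .checked_mul(rate)?
        .checked_div(1000)?
        .checked_mul(channels)
}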

Support for Planar Audio Input

The last additional feature that I implemented was support for planar audio input, for which a PR to the C implementation also exists already.

Most of the time audio signals have the samples of each channel interleaved with each other, so for example for stereo you have an array of samples with the first sample for the left channel, the first sample for the right channel, the second sample for the left channel, etc.. While this representation has some advantages, in other situations it is easier or faster to work with planar audio: the samples of each channel are contiguous one after another, so you have e.g. first all the samples of the left channel one after another and only then all samples of the right channel.

The PR for the C implementation does this with some code duplication of existing macro code (which can be prevented by making the macros more complicated); on the Rust side I implemented this without any code duplication by adding an internal abstraction for interleaved/planar audio and iteration over the samples, and then working with that in normal, generic Rust code. This required some minor refactoring and code reorganization but in the end was rather painless. Note that most of the change is the addition of new tests and moving some code around.

When looking at the Samples trait, the main part of this refactoring, one might wonder why I used closures instead of Rust iterators for iterating over the samples and the reason is unfortunately performance. More on this in part 3 of this blog post.
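
To give an idea of what such a closure-based abstraction can look like, here is a much simplified sketch; the names and signatures are made up and not the crate’s actual Samples trait

// Both layouts expose the same "call this closure for every sample of one
// channel" operation, and the analysis code is written once against that.
trait Samples {
    fn foreach_sample_of_channel<F: FnMut(f32)>(&self, c: usize, func: F);
}

struct Interleaved<'a> {
    data: &'a [f32],
    channels: usize,
}

struct Planar<'a> {
    planes: &'a [&'a [f32]],
}

impl<'a> Samples for Interleaved<'a> {
    fn foreach_sample_of_channel<F: FnMut(f32)>(&self, c: usize, mut func: F) {
        // Every frame holds one sample per channel, one after another.
        for frame in self.data.chunks_exact(self.channels) {
            func(frame[c]);
        }
    }
}

impl<'a> Samples for Planar<'a> {
    fn foreach_sample_of_channel<F: FnMut(f32)>(&self, c: usize, mut func: F) {
        // All samples of one channel are contiguous.
        for &sample in self.planes[c] {
            func(sample);
        }
    }
}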

Next Part

In the next part of this blog post I will describe the porting approach in detail and also give various examples for how to port C code to idiomatic Rust, and some examples of problems I was running into.

on September 22, 2020 01:00 PM

September 21, 2020

Previously: v5.6

Linux v5.7 was released at the end of May. Here’s my summary of various security things that caught my attention:

arm64 kernel pointer authentication
While the ARMv8.3 CPU “Pointer Authentication” (PAC) feature landed for userspace already, Kristina Martsenko has now landed PAC support in kernel mode. The current implementation uses PACIASP which protects the saved return address, similar to the existing CONFIG_STACKPROTECTOR feature, only faster. This also paves the way to sign and check pointers stored in the heap, as a way to defeat function pointer overwrites in those memory regions too. Since the behavior is different from the traditional stack protector, Amit Daniel Kachhap added an LKDTM test for PAC as well.

BPF LSM
The kernel’s Linux Security Module (LSM) API provide a way to write security modules that have traditionally implemented various Mandatory Access Control (MAC) systems like SELinux, AppArmor, etc. The LSM hooks are numerous and no one LSM uses them all, as some hooks are much more specialized (like those used by IMA, Yama, LoadPin, etc). There was not, however, any way to externally attach to these hooks (not even through a regular loadable kernel module) nor build fully dynamic security policy, until KP Singh landed the API for building LSM policy using BPF. With this, it is possible (for a privileged process) to write kernel LSM hooks in BPF, allowing for totally custom security policy (and reporting).

execve() deadlock refactoring
There have been a number of long-standing races in the kernel’s process launching code where ptrace could deadlock. Fixing these has been attempted several times over the last many years, but Eric W. Biederman and Bernd Edlinger decided to dive in, and successfully landed a series of refactorings, splitting up the problematic locking and refactoring its uses to remove the deadlocks. While he was at it, Eric also extended the exec_id counter to 64 bits to avoid the possibility of the counter wrapping and allowing an attacker to send arbitrary signals to processes they normally shouldn’t be able to.

slub freelist obfuscation improvements
After Silvio Cesare observed some weaknesses in the implementation of CONFIG_SLAB_FREELIST_HARDENED‘s freelist pointer content obfuscation, I improved their bit diffusion, which makes attacks require significantly more memory content exposures to defeat the obfuscation. As part of the conversation, Vitaly Nikolenko pointed out that the freelist pointer’s location made it relatively easy to target too (for either disclosures or overwrites), so I moved it away from the edge of the slab, making it harder to reach through small-sized overflows (which usually target the freelist pointer). As it turns out, there were a few assumptions in the kernel about the location of the freelist pointer, which had to also get cleaned up.

RISCV page table dumping
Following v5.6’s generic page table dumping work, Zong Li landed the RISCV page dumping code. This means it’s much easier to examine the kernel’s page table layout when running a debug kernel (built with PTDUMP_DEBUGFS), visible in /sys/kernel/debug/kernel_page_tables.

array index bounds checking
This is a pretty large area of work that touches a lot of overlapping elements (and history) in the Linux kernel. The short version is: C is bad at noticing when it uses an array index beyond the bounds of the declared array, and we need to fix that. For example, don’t do this:

int foo[5];
...
foo[8] = bar;

The long version gets complicated by the evolution of “flexible array” structure members, so we’ll pause for a moment and skim the surface of this topic. While things like CONFIG_FORTIFY_SOURCE try to catch these kinds of cases in the memcpy() and strcpy() family of functions, it doesn’t catch it in open-coded array indexing, as seen in the code above. GCC has a warning (-Warray-bounds) for these cases, but it was disabled by Linus because of all the false positives seen due to “fake” flexible array members. Before flexible arrays were standardized, GNU C supported “zero sized” array members. And before that, C code would use a 1-element array. These were all designed so that some structure could be the “header” in front of some data blob that could be addressable through the last structure member:

/* 1-element array */
struct foo {
    ...
    char contents[1];
};

/* GNU C extension: 0-element array */
struct foo {
    ...
    char contents[0];
};

/* C standard: flexible array */
struct foo {
    ...
    char contents[];
};

instance = kmalloc(sizeof(struct foo) + content_size);

Converting all the zero- and one-element array members to flexible arrays is one of Gustavo A. R. Silva’s goals, and hundreds of these changes started landing. Once fixed, -Warray-bounds can be re-enabled. Much more detail can be found in the kernel’s deprecation docs.

However, that will only catch the “visible at compile time” cases. For runtime checking, the Undefined Behavior Sanitizer has an option for adding runtime array bounds checking for catching things like this where the compiler cannot perform a static analysis of the index values:

int foo[5];
...
for (i = 0; i < some_argument; i++) {
    ...
    foo[i] = bar;
    ...
}

It was, however, not separate (via kernel Kconfig) until Elena Petrova and I split it out into CONFIG_UBSAN_BOUNDS, which is fast enough for production kernel use. With this enabled, it's now possible to instrument the kernel to catch these conditions, which seem to come up with some regularity in Wi-Fi and Bluetooth drivers for some reason. Since UBSAN (and the other Sanitizers) only WARN() by default, system owners need to set panic_on_warn=1 too if they want to defend against attacks targeting these kinds of flaws. Because of this, and to avoid bloating the kernel image with all the warning messages, I introduced CONFIG_UBSAN_TRAP which effectively turns these conditions into a BUG() without needing additional sysctl settings.

Fixing "additive" snprintf() usage
A common idiom in C for building up strings is to use sprintf()'s return value to increment a pointer into a string, and build a string with more sprintf() calls:

/* safe if strlen(foo) + 1 < sizeof(string) */
wrote  = sprintf(string, "Foo: %s\n", foo);
/* overflows if strlen(foo) + strlen(bar) > sizeof(string) */
wrote += sprintf(string + wrote, "Bar: %s\n", bar);
/* writing way beyond the end of "string" now ... */
wrote += sprintf(string + wrote, "Baz: %s\n", baz);

The risk is that if these calls eventually walk off the end of the string buffer, it will start writing into other memory and create some bad situations. Switching these to snprintf() does not, however, make anything safer, since snprintf() returns how much it would have written:

/* safe, assuming available <= sizeof(string), and for this example
 * assume strlen(foo) < sizeof(string) */
wrote  = snprintf(string, available, "Foo: %s\n", foo);
/* if (strlen(bar) > available - wrote), this is still safe since the
 * write into "string" will be truncated, but now "wrote" has been
 * incremented by how much snprintf() *would* have written, so "wrote"
 * is now larger than "available". */
wrote += snprintf(string + wrote, available - wrote, "Bar: %s\n", bar);
/* string + wrote is beyond the end of string, and available - wrote wraps
 * around to a giant positive value, making the write effectively 
 * unbounded. */
wrote += snprintf(string + wrote, available - wrote, "Baz: %s\n", baz);

So while the first overflowing call would be safe, the next one would be targeting beyond the end of the array and the size calculation will have wrapped around to a giant limit. Replacing this idiom with scnprintf() solves the issue because it only reports what was actually written. To this end, Takashi Iwai has been landing a bunch of scnprintf() fixes.

That's it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.8.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on September 21, 2020 11:32 PM

September 15, 2020

A Debian LTS logo Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.


on September 15, 2020 10:01 AM

Middle of September 2020 Notes

Stephen Michael Kellat

”A person is praised for his insight, but a warped mind leads to contempt.” – Proverbs 12:8 (Common English Bible)

It has been a while since I have written anything that might appear on Planet Ubuntu. Specifically the last time was June 26th. That’s not necessarily a good thing.

I have been busy writing. What have I been writing? I knocked out a new novelette in Visual Studio Code. The print version was typeset with LuaLaTeX using the novel document class. It is a bit of a sci-fi police procedural. It is up on Amazon for people to acquire, though I do note that Amazon’s print-on-demand costs have gone up a wee bit since the start of the planet-wide coronavirus crisis.

I also have taken time to test the Groovy Gorilla ISOs for Xubuntu. I encourage everybody out there to visit the testing tracker to test disc images for Xubuntu and other flavours as we head towards the release of 20.10 next month. Every release needs as much testing as possible.

Based upon an article from The Register, it appears that the Community Council is being brought back to life. Nominations are being sought per a post on the main Discourse instance, but readers are reminded that you need to be a current member, directly or indirectly, of the 609 Ubuntu Members shown on Launchpad. Those 609 persons are the electors for the Community Council, and the Community Council is drawn from that group. The size and composition of the Ubuntu Members group on Launchpad can change based upon published procedures and the initiative of individuals wanting to be part of such changes.

I will highlight an article at Yahoo Finance concerning financial distress among the fifty states. Here in Ohio we are seemingly in the middle of the pack. In Ashtabula County we have plenty of good opportunities in the age of coronavirus, especially with our low transmission rates and very good access to medical facilities. With some investment in broadband backhaul, an encampment could be built for coders who do not want to stick with city living. There is enough empty commercial real estate available to provide opportunities for film and television production if the wildfires and coronavirus issues out in California are not brought under control any time soon.

As a closing note, a federal trial judge ruled that the current coronavirus response actions in Pennsylvania happen to be unconstitutional. A similar lawsuit is pending before a trial judge here in Ohio about coronavirus response actions in this particular state. This year has been abnormal in so many ways and this legal news is just another facet of the abnormality.

on September 15, 2020 01:49 AM

Disk usage

So, you wake up one day and find that one of your programs starts complaining about “No space left on device”:

Next thing (obviously, duh?) is to see what happened, so you fire up df -h /tmp, right?:

$ df -h /tmp
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/zkvm1-root  6.2G  4.6G  1.3G  79% /

Well, yes, but no, ok? ok, ok!

Wait, what? There’s space there! How can it be? In all my years of experience (15+!), I’ve never seen such a thing!

The gods must be crazy!? Or is it a 2020 thing?

I disagree with you

$ touch /tmp/test
touch: cannot touch ‘/tmp/test’: No space left on device

Wait, what? Not even a small empty file? Ok...

After shamelessly googling/duckducking/searching, I ended up at https://blog.merovius.de/2013/10/20/ext4-mysterious-no-space-left-on.html but alas, that was not my problem. Although… perhaps too many files? Let’s check with df -i this time:

$ df -i /tmp
Filesystem             Inodes  IUsed IFree IUse% Mounted on
/dev/mapper/zkvm1-root 417792 417792     0  100% /

Of course!

Because I’m super smart (I’m not), I now know where my problem is: too many files! Time to start fixing this…

After a few minutes of deleting files, moving things around, and bind-mounting things, I landed on the actual root cause:

Tons of messages were waiting in /var/spool/clientmqueue to be processed. I decided to delete some; after all, I don’t care about this system’s mail, so find /var/spool/clientmqueue -type f -delete does the job and lets me have tab completion again! YAY!

However, because deleting files blindly is never a good solution, I ended up back at the link from above; the solution was quite simple:

$ systemctl enable --now sendmail

Smart idea!

After a while, root user started to receive system mail, and I could delete them afterwards :)

In the end, a very simple solution (in my case!) rather than reformatting, transferring all the data to a second drive, or playing with inode sizes and the like…

Filesystem             Inodes IUsed  IFree IUse% Mounted on
/dev/mapper/zkvm1-root 417792 92955 324837   23% /

Et voilà, ma chérie! It's alive!

This is a very long post, just to say:

ext4’s “no space left on device” can mean two things: you have no disk space left, or you have run out of inodes and cannot create any more files.
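For completeness, the same two numbers that df -h and df -i report can also be read from code: statvfs() returns both free blocks and free inodes for a filesystem. A minimal sketch (the /tmp path is just an example):

/* enospc_check.c: report free space and free inodes for a path. */
#include <stdio.h>
#include <sys/statvfs.h>

int main(void)
{
	struct statvfs vfs;

	if (statvfs("/tmp", &vfs) != 0) {
		perror("statvfs");
		return 1;
	}

	unsigned long long free_bytes =
		(unsigned long long)vfs.f_bavail * vfs.f_frsize;

	printf("free space : %llu bytes\n", free_bytes);
	printf("free inodes: %llu of %llu\n",
	       (unsigned long long)vfs.f_ffree,
	       (unsigned long long)vfs.f_files);

	/* Either number hitting zero means new writes or new files fail
	 * with "No space left on device" (ENOSPC). */
	return 0;
}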

on September 15, 2020 12:00 AM

September 13, 2020

Wootbook / Tongfang laptop

Jonathan Carter

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, supports up to 64GB of RAM and clearly runs Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that these are made by Tongfang and that its model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU and that’s nice and compact and that breaks the 16GB memory barrier, finding this one that jumped all the way to 64GB sealed the deal for me.

These are the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz octa-core CPU (4MB L2 cache, 8MB L3 cache, 7nm process)
  • 64GB RAM (2x 32GB DDR4 2666MHz modules)
  • 1TB NVMe disk
  • 14″ 1920×1080 (16:9 aspect ratio) matte display
  • Real Ethernet port (gigabit)
  • Intel Wi-Fi 6 AX200 wireless
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps; this one connects to my home network at between 800-1000Mbps. At this speed I don’t bother connecting via cable at home.

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of the questions I had about this device before buying it, which other people in the Debian community might also have if they’re interested in it. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (which just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1kg instead of the X250’s 1.38kg. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook; you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.

Keyboard/Touchpad

On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this one very comfortably in the above-average range. I’ve been typing on it a lot over the last 3 days (including this blog post), it started feeling natural very quickly, and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. This is something normal users don’t care about, but since I use mc (where Insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of the keyboard between Alt and Ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated PgUp/PgDn buttons, which I use a lot in apps to switch between tabs. At least the Fn button and the Ctrl buttons are next to each other, so pressing those together with Up and Down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor-button in the top-left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left Ctrl + keyscan code 93. The airplane mode, volume and brightness buttons work fine.

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces: your hands don’t have to move far from the keyboard for quick pointer operations, and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle-click by tapping with three fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, this keyboard and touchpad are completely adequate. Only time will tell whether the keyboard still works fine in a few years from now, but I really have no serious complaints about it.

Display

The X250 had a brightness of 172 nits. That’s not very bright; I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently. My eyes are very photo-sensitive, so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.

Ports

The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one HDMI port. Initially I was a bit concerned about the lack of DisplayPort (which my X250 has), but with an adapter it’s possible to convert the USB-C port to DisplayPort, and it seems like it’s possible to connect up to 3 external displays without using something weird like display over regular USB 3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, its biggest downfall is really its pointing device. It doesn’t have a trackpoint, and the touchpad is ok and completely usable, but not great. I use my laptop on a desk most of the time, so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed-out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double what I paid for this machine. It doesn’t seem like you can really buy the perfect laptop no matter how much money you want to spend; there’s some compromise no matter what you end up choosing. But this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this, and with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

on September 13, 2020 08:44 PM

September 10, 2020

Unav 3 is here!

Costales

The new uNav 3 is here! A simple, easy & beautiful GPS navigator for Ubuntu Touch! 100% libre. It doesn’t track you; it respects you. Powered by OpenStreetMap. Online & offline GPS navigation (offline powered by OSM Scout Server). Enjoy it on your UBPorts device!
on September 10, 2020 09:10 PM

September 07, 2020

sxmo on pinephone

Serge Hallyn

If you are looking for a new phone that either respects your privacy, leaves you in control, or just has a different form factor from the now-ubiquitous 6″ slab, there are quite a few projects in various states of readiness.

Freedom:

  • vollaphone
  • oneplus
  • pinephone
  • librem 5
  • fairphone

Different form factors:

Earlier this year I bought a pinephone, braveheart edition. I’ve tried several OSes on it. Just yesterday, I tried:

  • sailfish: looked great, but it would not recognize sim, and crashed when launching browser.
  • ubports (ubuntu touch): looked good, texting worked, but crashed when launching app store and would not ring on incoming calls.
  • mobian: nice set of default apps, but again would not ring on incoming calls.

So I’m back to running what I’ve had on it for a month or two: sxmo, the suckless mobile operating system. It’s an interesting, different take on interacting with the phone, and I quite like it. More importantly, for now it’s the most reliable as a communication device. With it, I can

  • make and receive calls and texts.
  • send texts using vi :).
  • easily send/receive mail using mbsync, mutt, and msmtp.
  • easily customize using scripts – editing existing ones, and adding new ones to the menu system.
  • use a cozy, known setup (dwm, st, tmux, sshd)
  • change call and text ringtones based on the caller – few other phones I’ve had have done that, and not one did it well.
  • have a good browsing experience.
  • use both wifi and 4G data. I’ve not hotspotted, but can see no reason why that will be a problem.

The most limiting thing about this phone is the battery. It drains very quickly, charges slowly, and if I leave the battery in while the phone is turned off, it continues to discharge until, after a day, it doesn’t want to turn back on. An external battery charger helps enormously with this. There is also an apparent hardware misfeature which prevents the modem from waking the cpu during deep sleep; this will presumably be fixed in later hardware versions (remember, mine is the braveheart edition).

on September 07, 2020 03:01 PM

September 05, 2020

Akademy Kicks off

Jonathan Riddell

Viewers on Planet Ubuntu can see the videos on my original post.

Akademy 2020 launched in style with this video starring moi and many other good-looking contributors.

We’re online now, streaming onto YouTube at room 1 and room 2 or register for the event to get involved.

I gave the first KDE talk of the conference, about the KDE is All About the Apps goal.

And after the Consistency and Wayland talks we had a panel session.

Talks are going on for the next three hours this European early evening, and start again tomorrow (Sunday).

 

on September 05, 2020 04:36 PM

September 04, 2020

In the spring of 2020, the GNOME project ran their Community Engagement Challenge in which teams proposed ideas that would “engage beginning coders with the free and open-source software community [and] connect the next generation of coders to the FOSS community and keep them involved for years to come.” I have a few thoughts on this topic, and so does Alan Pope, and so we got chatting and put together a proposal for a programming environment for making simple apps in a way that new developers could easily grasp. We were quite pleased with it as a concept, but: it didn’t get selected for further development. Oh well, never mind. But the ideas still seem good to us, so I think it’s worth publishing the proposal anyway so that someone else has the chance to be inspired by it, or decide they want it to happen. Here:

Cabin: Creating simple apps for Linux, by Stuart Langridge and Alan Pope

I’d be interested in your thoughts.

on September 04, 2020 09:30 AM

For the past few months, I’ve been running a handful of SSH Honeypots on some cloud providers, including Google Cloud, DigitalOcean, and NameCheap. As opposed to more complicated honeypots looking at attacker behavior, I decided to do something simple and was only interested in where they were coming from, what tools might be in use, and what credentials they are attempting to use to authenticate. My dataset includes 929,554 attempted logins over a period of a little more than 3 months.

If you’re looking for a big surprise, I’ll go ahead and let you down easy: my analysis hasn’t located any new botnets or clusters of attackers. But it’s been a fascinating project nonetheless.

Honeypot Design

With a mere 200ish lines of Go, I implemented a honeypot server using the golang.org/x/crypto/ssh library as the underlying implementation. I advertised a portable OpenSSH version as the server version string (sent to clients on connection). I then logged each connection to a SQLite database, including the timestamp, IP address, client version, and credentials used to (attempt to) authenticate.

Analysis of Credentials

In a surprise to absolutely nobody, root is by far the most commonly tried username for login sessions. I suspect there must be many attackers trying lists of passwords with just root as the username, as 78% of attempted logins were with username root. None of the remainder of the top 10 are particularly surprising, although usuario was not one I expected to see. (It is Spanish for user.)

Blank passwords are the most common attempted passwords, followed by other obvious choices, like 123456 and password. Just off the top 10 list was a surprising choice of password: J5cmmu=Kyf0-br8CsW. Interestingly, a Google search for this password only finds other people with experience running credential honeypots. It doesn’t appear in any of the password wordlists I have, including SecLists and others. If anyone knows what this is a password for, I’d love to know.

There were a number of other interesting passwords such as 7ujMko0admin, used for a bunch of networked DVRs, and also known to be used by malware attacking IoT devices. There are other passwords that don’t look obvious to a US-centric view of the world, like:

  • baikal – a lake in Siberia
  • prueba – Spanish for test
  • caonima – a Mandarin profanity written in Pinyin
  • meiyoumima – Mandarin for “no password”
  • woaini – Mandarin for “I love you”
  • poiuyt – the name for an optical illusion also known as the “devil’s tuning fork”. Edit: multiple redditors pointed out this is the beginning of the top row of the keyboard, typed from right to left.

There are also dozens and dozens of keyboard walks, like 1q2w3e, 1qaz@WSX, and !QAZ2wsx. There are many more that took me much longer to realize they were keyboard walks, such as 4rfv$RFV and qpwoei.

It has actually fascinated me to look at some of the less obvious passwords and discern their background. Many are inexplicable, but I assume they are from hardcoded passwords in devices or something along those lines. Or perhaps someone let their cat walk across the keyboard to generate it. I’ve certainly had that experience.

Overall, the top 10 usernames and top 10 passwords (not necessarily together) are:

Username Count Password Count
root 729108 <blank> 40556
admin 23302 123456 14542
user 8420 admin 7757
test 7547 123 7355
oracle 6211 1234 7099
ftpuser 4012 root 6999
ubuntu 3657 password 6118
guest 3606 test 5671
postgres 3455 12345 5223
usuario 2876 guest 4423

There were a total of 128,588 unique pairings of username and password attempted, though only 38,112 were attempted 5 or more times. You can download the full list of pairs with counts here, but I’ve omitted those attempted less than 5 times in case a legitimate user typo’d an IP or otherwise was mistaken. The top 25 pairings are:

username password count
root <blank> 37580
root root 4213
user user 2794
root 123456 2569
test test 2532
admin admin 2531
root admin 2185
guest guest 2143
root password 2128
oracle oracle 1869
ubuntu ubuntu 1811
root 1234 1681
root 123 1658
postgres postgres 1594
support support 1535
jenkins jenkins 1360
admin password 1241
root 12345 1177
pi raspberry 1160
root 12345678 1126
root 123456789 1069
ubnt ubnt 1069
admin 1234 1012
root 1234567890 967
ec2-user ec2-user 963

Again, no real surprises here. ubnt is a little bit higher than I would have thought (for Ubiquiti networking gear) but I suppose there’s a fair bit of their gear on the internet. It’s interesting to see the mix of “lazy admin” and “default credentials” here. It’s mildly interesting to me that all substrings of the first 10 digits (3 or longer) are included, except for 7 digits. I guess 7 digit passwords are less common?

Timing Information

Though I imagine these kinds of untargeted scans are long-term processes continually running, I decided to check and see what the timing looked like anyway. Neither the day-of-week analysis nor the hour-of-day analysis shows any significant variance.

(Charts: login attempts by day of week and by hour of day.)

Looking at the number of login requests over the time period where I’ve been running the honeypots shows the traffic to be intermittent. While I didn’t expect the number to be constant, the variance is much higher than I expected. I imagine a larger sample size and more nodes would probably make the results more even.

(Chart: login attempts per day over the study period.)

Analysis of Sources

So where are all of these requests coming from? I want to start by noting that none of my analysis is an attempt to attribute the actors making the requests – that’s just not possible with this kind of data. There are two ways to look at the source of requests: in terms of the network, and in terms of the (assumed) geography. My analysis relied on the IP-to-ASN and IP-to-country data provided by iptoasn.com.

Looking at the country-level data, networks from China lead the pack by a long shot (62% of all login attempts), followed by the US.

Countries

Country Count
CN 577789
US 87589
TW 48645
FR 39072
RU 30929
NL 29920
JP 28033
DE 15408
IN 13921
LT 6623

Again, I’m not claiming that these countries mean anything other than location of the autonomous system (AS) that originates the requests. I also did not do individual IP geolocation, so the results should be taken with a small grain of salt.

So what networks are sourcing this traffic? I have the full AS counts and data, but the top networks are:

AS Name Country ASN Count
CHINANET-BACKBONE No.31,Jin-rong Street CN 4134 202024
CHINANET-JS-AS-AP AS Number for CHINANET jiangsu province backbone CN 23650 186274
CHINA169-BACKBONE CNCGROUP China169 Backbone CN 4837 122192
HINET Data Communication Business Group TW 3462 48492
OVH FR 16276 30865
VECTANT ARTERIA Networks Corporation JP 2519 27481
DIGITALOCEAN-ASN - DigitalOcean, LLC US 14061 26965
MICROSOFT-CORP-MSN-AS-BLOCK - Microsoft Corporation US 8075 20370
RMINJINERING RU 49877 16710
AS38994 NL 38994 14482
XMGBNET Golden-Bridge Netcom communication Co.,LTD. CN 45058 12418
CNNIC-ALIBABA-CN-NET-AP Hangzhou Alibaba Advertising Co.,Ltd. CN 37963 12045
CNNIC-TENCENT-NET-AP Shenzhen Tencent Computer Systems Company Limited CN 45090 10804
CNIX-AP China Networks Inter-Exchange CN 4847 10000
PONYNET - FranTech Solutions US 53667 9317
ITTI US 44685 7960
CHINA169-BJ China Unicom Beijing Province Network CN 4808 7835
AS12876 FR 12876 7262
AS209605 LT 209605 6586
CONTABO DE 51167 6261

(Chart: login attempts by source AS.)

Chinanet is no surprise given the high ratio of China in general. OVH is a low-cost host known to have liberal AUP, so is popular for both malicious and research purposes. DigitalOcean and Microsoft, of course, are popular cloud providers. Surprisingly, AWS only sourced about 600 connections, unless they have a large number of IPs on a non-Amazon ASN.

Overall, traffic came from 27,448 unique IPv4 addresses. Of those, more than 11 thousand sent only a single request. At the other end of the spectrum, the top IP source sent 64,969 login requests.

Most hosts sent relatively few requests; the large numbers are outliers:

(Chart: distribution of request counts per source IP.)

Surely, by now a thought has crossed your mind: how many of these requests are coming from Tor? Surely the Tor network is a wretched hive of scum and villainy, and the source of much malicious traffic, right?

(Chart: share of requests from Tor exit nodes.)

Not at all. Only 219 of the unique source IPs were identified as Tor exit nodes, representing only 0.8% of the sources. On a per-request basis, an even smaller percentage of requests comes from Tor exit nodes.

Client Software

Remember – this is self-reported by the client application, and just like I can spoof the server version string, so can clients. But I still thought it would be interesting to take a brief look at those.

client count
SSH-2.0-PuTTY 309797
SSH-2.0-PUTTY 182465
SSH-2.0-libssh2_1.4.3 135502
SSH-2.0-Go 125254
SSH-2.0-libssh-0.6.3 62117
SSH-2.0-libssh2_1.7.0 23799
SSH-2.0-libssh2_1.9.0 21627
SSH-2.0-OpenSSH_7.3 9954
SSH-2.0-OpenSSH_7.4p1 8949
SSH-2.0-libssh2_1.8.0 5284
SSH-2.0-JSCH-0.1.45 3469
SSH-2.0-PuTTY_Release_0.70 2080
SSH-2.0-PuTTY_Release_0.63 1813
SSH-2.0-OpenSSH_5.3 1212
SSH-2.0-paramiko_1.8.1 1140
SSH-2.0-PuTTY_Release_0.62 1130
SSH-2.0-OpenSSH_4.3 795
SSH-2.0-PuTTY_Release_0.66 694
SSH-2.0-OpenSSH_7.9p1 Raspbian-10+deb10u2 690
SSH-2.0-libssh_0.11 660

You know, I didn’t expect that: PuTTY as the top client strings. (Also not sure what to make of the case difference.) I wonder if people are building the PuTTY SSH library into a tool for scanning, or wrapping the binary in some kind of script.

Go, paramiko, and libssh are less surprising, as they’re libraries designed for integration. It’s hard to know if the OpenSSH requests are linked into a scanning tool or just wrapped versions of the SSH client. At some point in the future, I might dive more into this and try to figure out which software uses which libraries (at least for the publicly-known tools).

Summary

I was hoping to find something earth-shattering in this research. Instead, I found things that were much as expected – common usernames and passwords, widespread scanning, large numbers of requests. One thing’s for sure though: connect it to the internet and someone’s going to pwn it.

on September 04, 2020 07:00 AM

September 03, 2020

Akademy 2020 Starts Tomorrow

Jonathan Riddell

KDE’s annual conference starts tomorrow with a day of tutorials. There are two days of talks at the weekend, and the rest of the week is filled with meetings and BoFs.

Register now.

Tomorrow European morning you can learn about QML, debugging, or speeding up dev workflows. In the evening there’s a choice of QML, multithreading and implicit bias training.

Saturday morning the talks start with a keynote at 09:00 UTC, and then I’m up talking about the All About the Apps goal. There’s an overview of the Wayland and Consistency goals too, plus we have a panel to discuss them.

Saturday early evening I’m looking forward to some talks about Qt 6 updates and “Integrating Hollywood Open Source with KDE Applications” sounds intriguing.

On Sunday European morning I’m scared but excited to learn more elite C++ from Ivan, but I hear Linux is being rewritten in Rust so that’s worth learning about next.  And it doesn’t get much more exciting than the Wag Company tails.

In the afternoon, those of us who care about licences will enjoy Open Source Compliance, and since an early win for Kubuntu was switching to System Settings, it’ll be good to get an update on what goes on behind the scenes.

On Monday, join us for some tutorials on getting your apps to the users, with talks on Snaps, Flatpak, neon and AppImage.

Monday, Tuesday and Wednesday have escape-room puzzles. You need to register for these separately in advance, so sign up now.

There’s a pub quiz on Thursday.

It’s going to be a fun week, and there’s no need to travel, so sign up now!

 

on September 03, 2020 02:20 PM