February 03, 2023

Sunset, Witch Wells, Arizona

Another busy week!

In the snap world, I have been busy trying to solve the problem of core20 snaps needing security updates now that focal is no longer supported in KDE Neon. So I have created a PPA at https://launchpad.net/~scarlettmoore/+archive/ubuntu/kf5-5.99-focal-updates/+packages

This of course presents more work, as KF5 5.99.0 requires Qt 5.15.7, so this is a WIP.

The Snapcraft kde-neon extension is moving along as I learn the Python ways of formatting and fix some issues in my tests.

In the Debian world, I am sad to report that Mycroft-AI has gone bust; however, the packaging efforts are not in vain, as the project has been forked to https://github.com/orgs/OpenVoiceOS/repositories and should be relatively easy to migrate.

I have spent some time verifying that libappimage in buster is NOT vulnerable to CVE-2020-25265, as the affected code had not been introduced yet.

Skanpage and plasma-bigscreen both have source uploads, so they can migrate to testing and hopefully make it into bookworm!

As many of you know, I am seeking employment. I am a hard worker who thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to < insert your company here >!

Meanwhile, as interview processes are much longer than I remember and the industry is exploding with layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my GoFundMe. Thank you for your consideration.

I still have a ways to go to cover my bills this month. I will continue with my work for as long as I can. I hate asking, but please consider a donation. Thank you!

GoFundMe

on February 03, 2023 06:54 PM

February 02, 2023

MLOps (short for machine learning operations) is slowly evolving into an independent approach to the machine learning lifecycle that covers all steps, from data gathering to governance and monitoring. It will become a standard as artificial intelligence moves towards becoming part of everyday business, rather than an innovative activity.

Get an intro to MLOps on the 15th of February with Canonical’s experts.

Register now

Over time, there have been different approaches used in MLOps. The most popular ones are model-driven and data-driven approaches. The split between them is defined by the main focus of the AI system: data or code. Which one should you choose? The decision challenges data scientists to choose which component will play a more important role in the development of a robust model. In this blog, we will evaluate both.


Model-centric development

Model-driven development focuses, as the name suggests, on machine learning model performance. It uses different methods of experimentation in order to improve the performance of the model, without altering the data. The main goal of this approach is to work on the code and optimise it as much as possible. It includes code, model architecture and training processes as well.


If you look deeper into this development method, the model-driven approach is all about high-quality ML models. What it means, in reality, is that developers focus on using the best set of ML algorithms and AI platforms. The approach is also the basis for great advancements in the AI space, such as the development of specialised frameworks like TensorFlow or PyTorch.

Model-centric development has been around since the early days of the discipline, so it benefits from widespread adoption across a variety of AI applications. The reason for this can be traced back to the fact that AI was initially a research-focused area. Historically, this approach was designed for challenging problems and huge datasets, which ML specialists were meant to solve by optimising AI models. It has also been driven by the wide adoption of open source, which allows free access to various GitHub repositories. Model-driven development encourages developers to experiment with the latest bits of technology and try to get the best results by fine-tuning the model. From an organisational perspective, it is suited for enterprises which have enough data to train machine-learning models.

When it comes to pitfalls, the model-centric approach requires a lot of manual work at the various stages of the ML lifecycle. For example, data scientists have to spend a lot of time on data labelling, data validation or training the model. The approach may result in slower project delivery, higher costs and little return on investment. This is the main reason why practitioners considered trying to tackle this problem from a different perspective with data-centric development.

Data-centric development

As it is often mentioned, data is the heart of any AI initiative. The data-centric approach takes this statement seriously, by systematically interacting with the datasets in order to obtain better results and increase the accuracy of machine learning applications.


When compared to the model-centric approach, in this case, the ML model is fixed, and all improvements are related to the data. These enhancements range from better data labelling to using different data samples for training or increasing the size of the data set. This approach improves data handling as well, by creating a common understanding of the datasets.

The data-centric approach has a few essential guidelines, which cover:

  • Data labelling
  • Data augmentation
  • Error analysis
  • Data versioning

Data labelling for data-centric development

Data labelling assigns labels to data. The process provides information about the datasets that are then used by algorithms to learn. It emphasises both content and structure information, so it often includes various data types, measurement units, or time periods represented in the dataset. Having correct and consistent labels can define the success of an AI project.

Data-centric development often highlights the importance of correct labelling. There are various ways to approach it; the key goal is avoiding inconsistencies and ambiguities. Below you can find an image that Andrew Ng offers as an example of data labels in practice. In this case, the labels illustrate the two pitfalls to avoid: inconsistency and ambiguity.

[Image: Andrew Ng's example of inconsistent and ambiguous data labels]

Data augmentation for data-centric development

Data augmentation is a process that consists of generating new data by various means, such as interpolation or exploration. It is not always needed, but in some instances there are models that require a larger amount of data at various stages of the ML lifecycle: training, validation, and data synthesis.

Whenever you perform this activity, checking data quality and ensuring the elimination of noise is also part of the guidelines.

Error analysis for data-centric development

Error analysis is a process performed once a model is trained. Its main goal is to identify a subset that can be used for improving the dataset. It is a task that requires diligence, as it needs to be performed repeatedly, in order to get gradual improvements in both data quality and model performance.

Data versioning for data-centric development

Data versioning tracks changes that happen within the datasets, in order to identify performance changes within the model. It enables collaboration, eases the data management process and speeds up the delivery of machine learning pipelines from experimentation to production.

When it comes to pitfalls, the data-centric method struggles mostly with data. On one hand, it can be hard to manage and control. On the other hand, it can be biased if it does not represent the actual population, leading to models that underperform in real life. Lastly, because of the data requirements, it can easily be expensive or suitable only for projects which have collected data for a longer period of time.

Model-centric and data-centric development with MLOps

In reality, both of these approaches are tightly linked to MLOps. Regardless of the option that data scientists choose, they need to follow MLOps guidelines and integrate their method within the tooling that they choose. Developers can use the same tool but have different approaches across different projects. The main difference could occur at the level of the ML lifecycle where changes are happening. It’s important to note that the approach will affect how the model is optimised for the specific initiative, so choosing it with care is important to position your project for success.

Get an intro to MLOps on the 15th of February with Canonical’s experts.

Register now

Charmed Kubeflow is end-to-end MLOps tooling, designed for scaling machine learning models to production. Because of its features and integrations, it has the ability to support both model-centric and data-centric development. It is an open-source platform which encourages contributions and represents the foundation of the growing MLOps ecosystem that Canonical is moving towards, with integrations at various levels: hardware, tooling and AI frameworks.

Learn more about MLOps

on February 02, 2023 06:26 PM

E232 Null Modem

Podcast Ubuntu Portugal

The title was Diogo Constantino's request... Meanwhile, Miguel is back; he was found alive and in good health, though his whereabouts remain uncertain. But the news worth highlighting is that, after all these episodes and all the pressure he has always been under, he is finally considering adding some intelligence to his home. And we'll say no more here... You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on February 02, 2023 12:00 AM

February 01, 2023

Multipass 1.11 is here!

This release has some particularly interesting features that we’ve been wanting to ship for a while now. We’re excited to share them with you!

For those who aren’t familiar with Multipass, it’s software that streamlines every aspect of managing and working with virtual machines. We’ve found that development, particularly for cloud applications, can often involve a huge amount of tedious work setting up development and testing environments. Multipass aims to solve that by making the process of creating and destroying VMs as simple as a single command, and by integrating the VM into your host machine and your development flow as much as possible.

That principle of integration is one of the main focuses we had for the 1.11 release. There are two major features out today that make Multipass much more integrated with your host machine – native mounts and directory mapping.


Performance boost

Performance has always been in Multipass’ DNA – we try to keep it as lightweight as we can so that nothing gets between developers and their work. With the 1.11 release, we’ve taken another big step forward.

With the new native mounts feature, Multipass is getting a major performance boost. This feature uses platform-optimized software to make filesystems shared between the host computer and the virtual machine much faster than before. In benchmarking, we’ve seen speed gains of around 10x! For people sharing data with Multipass from their host machine, this is a huge time saver.

Multipass is one of the few VM management tools available to developers on Apple silicon. Performance mounts make the M1 and M2 even faster platforms for Ubuntu. For those who don’t remember, Multipass can launch VMs on the Apple M1 and M2 in less than 20 seconds.

User experience

Multipass’ performance leveled up with this release, and the user experience did as well! Directory mapping is a new way to be more efficient than ever with Multipass. Multipass has supported command aliasing for some time now, but one drawback of aliasing alone is that it loses the context of where the command is executed in the filesystem. Commands like docker-compose, for example, are context sensitive. They may rely on certain files being present in the working directory, or give different results depending on where they are run. 

Directory mapping maintains the context of an aliased command, meaning that an aliased command sent from the host will be executed in the same context on the VM. This feature has the potential to make it feel like you are running Linux programs natively on your Mac or Windows terminal.
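To give a feel for how aliases and directory mapping fit together, here is a rough sketch. The instance name docker-vm, the mount path, and the presence of docker-compose inside the instance are assumptions for illustration; Multipass will also prompt you to add its aliases directory to your PATH the first time you define an alias.

# Share a project directory with an existing instance (here called docker-vm)
multipass mount ~/projects/myapp docker-vm:/home/ubuntu/myapp

# Make the host command "docker-compose" run inside the instance
multipass alias docker-vm:docker-compose docker-compose

# With directory mapping, running the alias from the mounted directory
# executes docker-compose in the corresponding directory inside the VM
cd ~/projects/myapp
docker-compose up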


Other upgrades

In addition to directory mapping, Blueprints now allow for alias and workspace definitions, meaning you can now spin up a new VM and start using aliased (and context-sensitive) commands in a shared filespace with no additional configuration required. Look for some examples in the near future!

Some other notable upgrades include the `transfer` command and UEFI booting. The `transfer` command now allows for recursive file transfers, which should make it much easier to transfer entire directories as opposed to individual files. Multipass now boots its instances via UEFI, which means we are able to support Ubuntu Core 20 and 22 for our IoT developers.
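A quick illustration of a recursive transfer follows; the instance name and paths are made up, and the exact flag spelling is an assumption here, so check `multipass transfer --help` on your installation.

# Copy an entire directory from the host into an instance (hypothetical paths)
multipass transfer --recursive ./build-artifacts my-vm:/home/ubuntu/build-artifacts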

To get started with Multipass, head to our install page or check out our tutorials. We always love to hear feedback from our community, so please let us know what you’re up to by posting in discourse, or dropping in for our office hours.

on February 01, 2023 03:35 PM

This is the story of the currently progressing changes to secure boot on Ubuntu and the history of how we got to where we are.

taking a step back: how does secure boot on Ubuntu work?

Booting on Ubuntu involves three components after the firmware:

  1. shim
  2. grub
  3. linux

Each of these is a PE binary signed with a key. The shim is signed by Microsoft’s 3rd party key and embeds a self-signed Canonical CA certificate, and optionally a vendor dbx (a list of revoked certificates or binaries). grub and linux (and fwupd) are then signed by a certificate issued by that CA.

In Ubuntu’s case, the CA certificate is sharded: Multiple people each have a part of the key and they need to meet to be able to combine it and sign things, such as new code signing certificates.

BootHole

When BootHole happened in 2020, travel was suspended and we hence could not rotate to a new signing certificate. So when it came to updating our shim for the CVEs, we had to revoke all previously signed kernels, grubs, shims, fwupds by their hashes.

This generated a very large vendor dbx which caused lots of issues as shim exported them to a UEFI variable, and not everyone had enough space for such large variables. Sigh.

We decided we want to rotate our signing key next time.

This was also when upstream added SBAT metadata to shim and grub. This gives a simple versioning scheme for security updates and easy revocation using a simple EFI variable that shim writes to and reads from.

Spring 2022 CVEs

We still were not ready for travel in 2021, but during BootHole we developed the SBAT mechanism, so one could revoke a grub or shim by setting a single EFI variable.

We actually missed rotating the shim this cycle as a new vulnerability was reported immediately after it, and we decided to hold on to it.

2022 key rotation and the fall CVEs

This caused some problems when the 2nd CVE round came, as we did not have a shim with the latest SBAT level, and neither did a lot of others, so we ended up deciding upstream to not bump the shim SBAT requirements just yet. Sigh.

Anyway, in October we were meeting again for the first time at a Canonical sprint, and the shardholders got together and created three new signing keys: 2022v1, 2022v2, and 2022v3. It took us until January before they were installed into the signing service and PPAs were set up to sign with them.

We also submitted a shim 15.7 with the old keys revoked which came back at around the same time.

Now we were in a hurry. The 22.04.2 point release was scheduled for around the middle of February, and we had nothing signed with the new keys yet, but our new shim, which we need for the point release (so the point release media remains bootable after the next round of CVEs), required new keys.

So how do we ensure that users have kernels, grubs, and fwupd signed with the new key before we install the new shim?

upgrade ordering

grub and fwupd are simple cases: For grub, we depend on the new version. We decided to backport grub 2.06 to all releases (which moved focal and bionic up from 2.04), and kept the versioning of the -signed packages the same across all releases, so we were able to simply bump the Depends for grub to specify the new minimum version. For fwupd-efi, we added Breaks.

(Actually, we also had a backport of the CVEs for 2.04 based grub, and we did publish that for 20.04 signed with the old keys before backporting 2.06 to it.)

Kernels are a different story: there are about 60 kernels out there. My initial idea was that we could just add Breaks for all of them. So for our meta package linux-image-generic, which depends on linux-image-$(uname -r)-generic, we’d simply add Breaks: linux-image-generic (<< 5.19.0-31) and then adjust those Breaks for each series. This would have been super annoying, but ultimately I figured this would be the safest option. This however caused concern, because it could be that apt decides to remove the kernel metapackage.

I explored checking the kernels at runtime and aborting if we don’t have a trusted kernel in preinst. This ensures that if you try to upgrade shim without having a kernel, it would fail to install. But this ultimately has a couple of issues:

  1. It aborts the entire transaction at that point, so users will be unable to run apt upgrade until they have a recent kernel.
  2. We cannot even guarantee that a kernel would be unpacked first. So even if you got a new kernel, apt/dpkg might attempt to unpack it first and then the preinst would fail because no kernel is present yet.

Ultimately we believed the danger to be too large given that no kernels had yet been released to users. If we had kernels pushed out for 1-2 months already, this would have been a viable choice.

So in the end, I ended up modifying the shim packaging to install both the latest shim and the previous one, and an update-alternatives alternative to select between the two:

In its post-installation maintainer script, shim-signed checks whether all kernels with a version greater than or equal to the running one are not revoked, and if so, it will set up the latest alternative with priority 100 and the previous one with a priority of 50. If one or more of those kernels was signed with a revoked key, it will swap the priorities around, so that the previous version is preferred.
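Roughly, the alternative setup looks like this; this is a simplified sketch rather than the actual maintainer script, and the kernel check is only a placeholder for the revocation check described above:

# Simplified sketch of the shim-signed postinst logic (not the real script)
if all_installed_kernels_signed_with_new_key; then   # placeholder check
    latest=100; previous=50      # prefer the latest shim
else
    latest=50; previous=100      # prefer the previous shim
fi

# Register both shims as alternatives for the shimx64.efi.signed link
update-alternatives --install /usr/lib/shim/shimx64.efi.signed \
    shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.latest "$latest"
update-alternatives --install /usr/lib/shim/shimx64.efi.signed \
    shimx64.efi.signed /usr/lib/shim/shimx64.efi.signed.previous "$previous"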

Now this is fairly static, and we do want you to switch to the latest shim eventually, so I also added hooks to the kernel install to trigger the shim-signed postinst script when a new kernel is being installed. It will then update the alternatives based on the current set of kernels, and if it now points to the latest shim, reinstall shim and grub to the ESP.

Ultimately this means that once you install your 2nd non-revoked kernel, or you install a non-revoked kernel and then reconfigure shim or the kernel, you will get the latest shim. When you install your first non-revoked kernel, your currently booted kernel is still revoked, so it’s not upgraded immediately. This has a benefit in that you will most likely have two kernels you can boot without disabling secure boot.

regressions

Of course, the first version I uploaded had still some remaining hardcoded “shimx64” in the scripts and so failed to install on arm64 where “shimaa64” is used. And if that were not enough, I also forgot to include support for gzip compressed kernels there. Sigh, I need better testing infrastructure to be able to easily run arm64 tests as well (I only tested the actual booting there, not the scripts).

shim-signed migrated to the release pocket in lunar fairly quickly, but this caused images to stop working, because the new shim was installed into images, but no kernel was available yet, so we had to demote it to proposed and block migration. Despite all the work done for end users, we need to be careful to roll this out for image building.

another grub update for OOM issues.

We had two grubs to release: First there was the security update for the recent set of CVEs, then there also was an OOM issue for large initrds which was blocking critical OEM work.

We fixed the OOM issue by cherry-picking all 2.12 memory management patches, as well as the Red Hat patches to the loader we take from there. This ended up being a fairly large patch set, and I was hesitant to tie the security update to it, so I ended up pushing the security update everywhere first, and then pushed the OOM fixes this week.

With the OOM patches, you should be able to boot initrds of between 400 MB and 1 GB; it also depends on the memory layout of your machine and your screen resolution and background images. So the OEM team had success testing 400 MB in real life, and I tested up to, I think, 1.2 GB in qemu; I ran out of FAT space then and stopped going higher :D

other features in this round

  • Intel TDX support in grub and shim
  • Kernels are now allocated as CODE, not DATA, as per the upstream mm changes; this might fix boot on the X13s

am I using this yet?

The new signing keys are used in:

  • shim-signed 1.54 on 22.10+, 1.51.3 on 22.04, 1.40.9 on 20.04, 1.37~18.04.13 on 18.04
  • grub2-signed 1.187.2~ or newer (binary packages grub-efi-amd64-signed or grub-efi-arm64-signed), 1.192 on 23.04.
  • fwupd-signed 1.51~ or newer
  • various linux updates. Check apt changelog linux-image-unsigned-$(uname -r) to see if Revoke & rotate to new signing key (LP: #2002812) is mentioned there; if it is, the kernel is signed with the new key.

If you were able to install shim-signed, your grub and fwupd-efi will have the correct version as that is ensured by packaging. However your shim may still point to the old one. To check which shim will be used by grub-install, you can check the status of the shimx64.efi.signed or (on arm64) shimaa64.efi.signed alternative. The best link needs to point to the file ending in latest:

$ update-alternatives --display shimx64.efi.signed
shimx64.efi.signed - auto mode
  link best version is /usr/lib/shim/shimx64.efi.signed.latest
  link currently points to /usr/lib/shim/shimx64.efi.signed.latest
  link shimx64.efi.signed is /usr/lib/shim/shimx64.efi.signed
/usr/lib/shim/shimx64.efi.signed.latest - priority 100
/usr/lib/shim/shimx64.efi.signed.previous - priority 50

If it does not, but you have installed a new kernel compatible with the new shim, you can switch immediately to the new shim after rebooting into the kernel by running dpkg-reconfigure shim-signed. You’ll see in the output if the shim was updated, or you can check the output of update-alternatives as you did above after the reconfiguration has finished.

For the out of memory issues in grub, you need grub2-signed 1.187.3~ (same binaries as above).

how do I test this (while it’s in proposed)?

  1. upgrade your kernel to proposed and reboot into that
  2. upgrade your grub-efi-amd64-signed, shim-signed, fwupd-signed to proposed.

If you already upgraded your shim before your kernel, don’t worry:

  1. upgrade your kernel and reboot
  2. run dpkg-reconfigure shim-signed

And you’ll be all good to go.

deep dive: uploading signed boot assets to Ubuntu

For each signed boot asset, we build one version in the latest stable release and the development release. We then binary copy the built binaries from the latest stable release to older stable releases. This process ensures two things: We know the next stable release is able to build the assets and we also minimize the number of signed assets.

OK, I lied. For shim, we actually do not build in the development release but copy the binaries upward from the latest stable, as each shim needs to go through external signing.

The entire workflow looks something like this:

  1. Upload the unsigned package to one of the following “build” PPAs:

  2. Upload the signed package to the same PPA

  3. For stable release uploads:

    • Copy the unsigned package back across all stable releases in the PPA
    • Upload the signed package for stable releases to the same PPA with ~<release>.1 appended to the version
  4. Submit a request to canonical-signing-jobs to sign the uploads.

    The signing job helper copies the binary -unsigned packages to the primary-2022v1 PPA where they are signed, creating a signing tarball, then it copies the source package for the -signed package to the same PPA which then downloads the signing tarball during build and places the signed assets into the -signed deb.

    Resulting binaries will be placed into the proposed PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed

  5. Review the binaries themselves

  6. Unembargo and binary copy the binaries from the proposed PPA to the proposed-public PPA: https://launchpad.net/~ubuntu-uefi-team/+archive/ubuntu/proposed-public.

    This step is not strictly necessary, but it enables tools like sru-review to work, as they cannot access the packages from the normal private “proposed” PPA.

  7. Binary copy from proposed-public to the proposed queue(s) in the primary archive

Lots of steps!

WIP

As of writing, only the grub updates have been released, other updates are still being verified in proposed. An update for fwupd in bionic will be issued at a later point, removing the EFI bits from the fwupd 1.2 packaging and using the separate fwupd-efi project instead like later release series.

on February 01, 2023 01:40 PM

January 30, 2023

Welcome to the Ubuntu Weekly Newsletter, Issue 772 for the week of January 22 – 28, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on January 30, 2023 09:31 PM

Ronin

Stuart Langridge

In 1701, Asano Naganori, a feudal lord in Japan, was summoned to the shogun’s court in Edo, the town now called Tokyo. He was a provincial chieftain, and knew little about court etiquette, and the etiquette master of the court, Kira Kozuke-no-Suke, took offence. It’s not exactly clear why; it’s suggested that Asano didn’t bribe Kira sufficiently or at all, or that Kira felt that Asano should have shown more deference. Whatever the reasoning, Kira ridiculed Asano in the shogun’s presence, and Asano defended his honour by attacking Kira with a dagger.

Baring steel in the shogun’s castle was a grievous offence, and the shogun commanded Asano to atone through suicide. Asano obeyed, faithful to his overlord. The shogun further commanded that Asano’s retainers, over 300 samurai, were to be dispossessed and made leaderless, and forbade those retainers from taking revenge on Kira so as to prevent an escalating cycle of bloodshed. The leader of those samurai offered to divide Asano’s wealth between all of them, but this was a test. Those who took him up on the offer were paid and told to leave. Forty-seven refused this offer, knowing it to be honourless, and those remaining 47 reported to the shogun that they disavowed any loyalty to their dead lord. The shogun made them rōnin, masterless samurai, and required that they disperse. Before they did, they swore a secret oath among themselves that one day they would return and avenge their master. Then each went their separate ways. These 47 rōnin immersed themselves into the population, seemingly forgoing any desire for revenge, and acting without honour to indicate that they no longer followed their code. The shogun sent spies to monitor the actions of the rōnin, to ensure that their unworthy behaviour was not a trick, but their dishonour continued for a month, two, three. For a year and a half each acted dissolutely, appallingly; drunkards and criminals all, as their swords went to rust and their reputations the same.

A year and a half later, the forty-seven rōnin gathered together again. They subdued or killed and wounded Kira’s guards, they found a secret passage hidden behind a scroll, and in the hidden courtyard they found Kira and demanded that he die by suicide to satisfy their lord’s honour. When the etiquette master refused, the rōnin cut off Kira’s head and laid it on Asano’s grave. Then they came to the shogun, surrounded by a public in awe of their actions, and confessed. The shogun considered having them executed as criminals but instead required that they too die by suicide, and the rōnin obeyed. They were buried, all except one who was not present and who lived on, in front of the tomb of their master. The tombs are a place to be visited even today, and the story of the 47 rōnin is a famous one both inside and outside Japan.

You might think: why have I been told this story? Well, there were 47 of them. 47 is a good number. It’s the atomic number of silver, which is interesting stuff; the most electrically conductive metal. (During World War II, the Manhattan Project couldn’t get enough copper for the miles of wiring they needed because it was going elsewhere for the war effort, so they took all the silver out of Fort Knox and melted it down to make wire instead.) It’s strictly non-palindromic, which means that it’s not only not a palindrome, it remains not a palindrome in any base smaller than itself. And it’s how old I am today.

Yes! It’s my birthday! Hooray!

A glowing message board reading 'BDAY BASH 47'

I have had a good birthday this year. The family and I had a delightful Greek dinner at Mythos in the Arcadian, and then yesterday a bunch of us went to the pub and had an absolute whale of an afternoon and evening, during which I became heartily intoxicated and wore a bag on my head like Lord Farrow, among other things. And I got a picture of the Solvay Conference from Bruce.

A framed picture of the Solvay Conference 1927, which is a bunch of stern-looking male physicists and Marie Curie arranged like a school photo

This year is shaping up well; I have some interesting projects coming up, including one will-be-public thing that I’ve been working on and which I’ll be revealing more about in due course, a much-delayed family thing is very near its end (finally!), and in general it’s just gotta be better than the ongoing car crash that the last few years have been. Fingers crossed; ask me again in twelve months, anyway. I’ve been writing these little posts for 21 years now (last year has more links) and there have been ups and downs, but this year I feel quite hopeful about the future for the first time in a while. This is good news. Happy birthday, me.

Me wearing a peach-coloured card gift bag on my head in the pub

on January 30, 2023 03:44 PM

January 28, 2023

Witch Wells, AZ Snow

It has been a very busy few weeks as we endured snowstorm after snowstorm!

I have made some progress on the Mycroft in Debian adventure! This will slow down as we enter the freeze for bookworm, and there is no way we will make it into bookworm as there are some significant issues to solve.

  • lingua-franca uploaded and accepted
  • pako uploaded and accepted
  • speechpy-fast uploaded
  • fitipy ready to upload

On the KDE side of things:

  • Plasma-bigscreen uploaded and accepted
  • skanpage uploaded and in NEW

In the Snap arena, I have made my first significant contribution to snapcraft upstream! It has been a great learning experience as I convert my Ruby knowledge to Python. Formatting is something I need to get used to!

https://github.com/snapcore/snapcraft/pull/4023

Snaps have been on hold due to the kde-neon extension not having core22 support, and the above pull request fixes that. Meanwhile, I have been working on getting core20 apps (22.08.3, the final KDE apps version for this base) rebuilt for security updates.

As many of you know, I am seeking employment. I am a hard worker who thrives on learning new things. I am a self-starter, a knowledge sponge, and eager to be an asset to < insert your company here >!

Meanwhile, as interview processes are much longer than I remember and the industry is exploding with layoffs, I am coming up short on living expenses as my unemployment lingers on. Please consider donating to my GoFundMe. Thank you for your consideration.

GoFundMe

on January 28, 2023 03:01 PM

January 26, 2023

Firebuild logo

TL;DR: Just prefix your build command (or any command) with firebuild:

firebuild <build command>

OK, but how does it work?

Firebuild intercepts all processes started by the command to cache their outputs. Next time when the command or any of its descendant commands is executed with the same parameters, inputs and environment, the outputs are replayed (the command is shortcut) from the cache instead of running the command again.

This is similar to how ccache and other compiler-specific caches work, but firebuild can shortcut any deterministic command, not only a specific list of compilers. Since the inputs of each command are determined at run time, firebuild does not need a complete, maintained dependency graph in the source tree like Bazel does. It can work with any build system that does not implement its own caching mechanism.

Determinism of commands is detected at run time by preloading libfirebuild.so and interposing standard library calls and syscalls. If the command and all its descendants’ inputs are available when the command starts, and all outputs can be calculated from the inputs, then the command can be shortcut; otherwise it will be executed again. The interception comes with a 5-10% overhead, but rebuilds can be 5-20 times faster, or even more, depending on the changes between the builds.
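As a minimal illustration of the idea, here gcc stands in for any deterministic command and main.c is just a hypothetical source file:

# First run: gcc actually executes and firebuild caches its outputs
firebuild gcc -O2 -c main.c -o main.o

# Second run with the same parameters, inputs and environment:
# gcc is shortcut and main.o is replayed from the cache
firebuild gcc -O2 -c main.c -o main.o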

Can I try it?

It is already available in Debian Unstable and Testing, Ubuntu’s development release and the latest stable version is back-ported to supported Ubuntu releases via a PPA.

How can I analyze my builds with firebuild?

Firebuild can generate an HTML report showing each command’s contribution to the build time. Below are the “before” and “after” reports of json4s, a Scala project. The command call graphs (lower ones) show that java (scalac) took 99% of the original build. Since the scalac invocations are shortcut (cutting the second build’s time to less than 2% of the first one) they don’t even show up in the accelerated second build’s call graph. What’s left to be executed again in the second run are env, perl, make and a few simple commands.

The upper graphs are the process trees, with expandable nodes (in blue) also showing which command invocations were shortcut (green). Clicking on a node shows details of the command and the reason if it was not shortcut.

Could I accelerate my project more?

Firebuild works best for builds with CPU-intensive processes and comes with defaults to not cache very quick commands, such as sh, grep, sed, etc., because caching those would take cache space and shortcutting them may not speed up the build that much. They can still be shortcut with their parent command. Firebuild’s strength is that it can find shortcutting points in the process tree automatically, e.g. from sh -c 'bash -c "sh -c echo Hello World!"' bash would be shortcut, but none of the sh commands would be cached. In typical builds there are many such commands from the skip_cache list. Caching those commands with firebuild -o 'processes.skip_cache = []' can improve acceleration and make the reports smaller.

Firebuild also supports several debug flags and -d proc helps finding reasons for not shortcutting some commands:

...
FIREBUILD: Command "/usr/bin/make" can't be short-cut due to: Executable set to be not shortcut, {ExecedProcess 1329.2, running, "make -f debian/rules build", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/sort" can't be short-cut due to: Process read from inherited fd , {ExecedProcess 4161.1, running, "sort", fds=[{FileFD fd=0 {FileOFD ...
FIREBUILD: Command "/usr/bin/find" can't be short-cut due to: fstatfs() family operating on fds is not supported, {ExecedProcess 1360.1, running, "find -mindepth 1 ...
...

make, ninja and other incremental build tool binaries are not shortcut because they compare the timestamp of files, but they are fast at least and every build step they perform can still be shortcut. Ideally the slower build steps that could not be shortcut can be re-implemented in ways that can be shortcut by avoiding tools performing unsupported operations.

I hope these tools help speed up your build with very little effort, but if not, and you find something to fix or improve in firebuild itself, please report it or just leave feedback!

Happy speeding, but not on public roads! 😉

on January 26, 2023 09:06 AM

E231 Agenda Cheia!

Podcast Ubuntu Portugal

It's confirmed! For the Ubuntu Community in Portugal, January is a very lively month. In this episode we spent a large part of the time reviewing the Aveiro meetup. Next week perhaps Sintra, and then Lisbon... there is no shortage of parties and activity. Looking at the beginning of February, it seems history is repeating itself, but we'll see! You know the drill: listen, subscribe and share!

Support

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal. You can get everything for 15 dollars, or different parts depending on whether you pay 1 or 8. We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you like. If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licenses

This episode was produced by Diogo Constantino, Miguel and Tiago Carrondo and edited by Senhor Podcast. The website is produced by Tiago Carrondo and its open-source code is licensed under the terms of the MIT License. The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)”, by Alpha Hydrae, licensed under the CC0 1.0 Universal License. This episode and the image used are licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on January 26, 2023 12:00 AM

January 23, 2023

Welcome to the Ubuntu Weekly Newsletter, Issue 771 for the week of January 15 – 21, 2023. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on January 23, 2023 10:00 PM

Ever wondered how your computer or your phone displays the current date and time accurately? What keeps all the devices in the world (and in space) in agreement on what time it is? What makes applications that require precise timing possible?

In this article, I will explain some of the challenges with time synchronization and explore two of the most popular protocols that devices use to keep their time in sync: the Network Time Protocol (NTP) and the Precision Time Protocol (PTP).

What is time?

It wouldn’t be a good article about time synchronization without spending a few words about time. We all have an intuitive concept of time since childhood, but stating precisely what ‘time’ is can be quite a challenge. I’m going to give you my idea of it.

Here is a simple definition to start with: time is how we measure changes. If the objects in the universe didn’t change and appeared to be fixed, without ever moving or mutating, I think we could all agree that time wouldn’t be flowing. Here by ‘change’ I mean any kind of change: from objects falling or changing shape, to light diffusing through space, or our memories building up in our mind.

This definition may be a starting point but does not capture all we know about time. Something that it does not capture is our concept of past, present, and future. From our day-to-day experience, we know in fact that an apple would fall off the tree due to gravity, under the normal flow of time. If we observed an apple rising from the ground, attaching itself to the tree (without the action of external forces), we could perhaps agree that what we’re observing is time flowing backward. And yet, both the apple falling off the tree and the apple rising from the ground are two valid changes from an initial state. This is where causality comes into place: time flows in such a way that the cause must precede the effect.

We can now refine our definition of time as an ordered sequence of changes, where each change is linked to the previous one by causality.

How do we measure time?

Now we have a more precise definition of time, but we still don’t have enough tools to define what is a second, an hour, or a day. This is where things get more complicated.

If we look at the definition of ‘second’ from the international standard, we can see that it is currently defined from the emission frequency of caesium-133 (133Cs) atoms. If you irradiate caesium-133 atoms with some light having sufficient energy, the atoms will absorb the light, get excited, and release the energy back in the form of light at a specific frequency. That frequency of emission is defined as 9192631770 Hz, and the second is defined as the duration of 9192631770 periods of that radiation. This definition is known as the caesium standard.

Here’s a problem to think about: how do we know that a caesium-133 atom, after getting excited, really emits light at a fixed frequency? The definition of second is implying that the frequency is constant and the same all over the world, but how do we know it’s really the case? This assumption is supported by quantum physics, according to which atoms can only transition between discrete (quantified) energy states. When an atom gets excited, it transitions from an energy state $E_1$ to an energy state $E_2$. Atoms like to be in the lowest energy state, so the atom will not stay in the state $E_2$ for long, and will want to go back to $E_1$. When doing that, it will release an amount of energy of exactly $E_2 - E_1$ in the form of a photon. According to the Planck formula, the photon will have frequency $f = (E_2 - E_1) / h$ where $h$ is the Planck constant. Because the energy levels are fixed, the resulting emission frequency is fixed as well.

By the way, this process of absorption and emission of photons is the same process that causes fluorescence.

Visualization of the absorption and emission process for an atom transitioning between a ground state $E_1$ and an excited state $E_2$.

Assuming that caesium-133 atoms emit light at a single, fixed frequency, we can now build extremely accurate caesium atomic clocks and measure spans of time with them. Existing caesium atomic clocks are estimated to be so precise that they may lose one second every 100 million years.

The same approach can be applied to other substances as well: atomic clocks have been constructed using rubidium (Rb), strontium (Sr), hydrogen (H), krypton (Kr), ammonia (NH3), ytterbium (Yb), each having its own emission frequency and its own accuracy. The most accurate clock ever built is a strontium clock which may lose one second every 15 billion years.

Time dilation

If we have two atomic clocks and we let them run for a while, will they show the same time? This might sound like a rhetorical question: we just established that the frequencies of emission of atoms are fixed, so why would two identical atomic clocks ever get out of sync? Well, as a matter of fact, two identical atomic clocks may get out of sync, and this problem is not with the clocks, but with time itself: it appears that time does not always flow in the same way everywhere.

Many experiments have shown this effect on our planet, the most famous one probably being the Hafele-Keating experiment. In this experiment, a set of caesium clocks was placed on an airplane flying around the world west-to-east, another set was placed on an airplane flying east-to-west, and another set remained on ground. The 3 sets of clocks, which were initially in sync before the planes took off, were showing different times once reunited after the trip. This experiment and similar ones have been repeated and refined multiple times, and they all showed consistent results.

These effects were due to time dilation, and the results were consistent with the predictions of special relativity and general relativity.

Time dilation due to special relativity

Special relativity predicts that if two clocks are moving with two different velocities, they are going to measure different spans of time.

Special relativity is based on two principles:

  • the speed of light is constant;
  • there are no privileged reference frames.

To understand how these principles affect the flow of time, it’s best to look at an example: imagine that a passenger is sitting on a train with a laser and a mirror in front of them. Another person is standing on the ground next to the railroad and observing the train passing. The passenger points the laser perpendicular to the mirror and turns it on.

What the passenger will observe is the beam of light from the laser to hit the mirror and come back in a straight line:

Portion of the beam of light in the train reference frame, emitted from the laser (bottom) and bouncing from the mirror (top). Note how it follows a vertical path.

From the observer's perspective, however, things are quite different. Because the train is moving relative to the observer, the beam looks like it’s taking a different, slightly longer path:

The same portion of the light beam as before, but this time in the observer reference frame. Note how it follows a diagonal path, longer than the vertical path in the train reference frame.

If both the passenger and the observer measure how long it took for the light beam to hit back at the source, and if the principles of special relativity hold, then the two persons will record different measurements. If the speed of light is constant, and there is no privileged reference frame, then the speed of light $c$ must be the same in both reference frames. From the passenger’s perspective, the beam has traveled a distance of $2 L$, taking a time $2 L / c$. From the observer’s perspective, the beam has traveled a longer distance $2 M$, with $M > L$, taking a longer time $2 M / c$.

Comparison of the light beams as seen from the two reference frames. In the train reference frame, the light beam is a vertical line of length $L$ (therefore traveling a path of length $2 L$ after bouncing from the mirror). In the observer reference frame, the light beam is distorted due to the velocity of the train. If the train moves at speed $v$, then the light beam travels a total length of $2 M = 2 L c / \sqrt{c^2 - v^2}$.

How can we reconcile these counterintuitive measurements? Special relativity does it by stating that time flows differently in the two reference frames. Time runs “slower” inside the train and runs “faster” for the observer. One consequence of that is that the passenger ages less than the observer.
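Writing this out with the quantities from the figures makes the dilation factor explicit: the same tick of the light clock lasts

$\Delta t_{\text{passenger}} = \frac{2L}{c} \qquad \text{and} \qquad \Delta t_{\text{observer}} = \frac{2M}{c} = \frac{2L}{\sqrt{c^2 - v^2}} = \frac{\Delta t_{\text{passenger}}}{\sqrt{1 - v^2/c^2}}$

so the observer measures a longer duration by the factor $1/\sqrt{1 - v^2/c^2}$, which grows as the train's speed $v$ approaches $c$.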

Time dilation due to special relativity is not easily detectable in our day-to-day life, but it can still cause problems with high-precision clocks. This time dilation may in fact cause clock drifts in the order of hundreds of nanoseconds per day.

Time dilation due to general relativity

Experimental data shows that clocks in a gravitational field do not follow (solely) the rules of special relativity. This does not mean that special relativity is wrong, but it’s a sign that it is incomplete. This is where general relativity comes into play. In general relativity, gravity is not seen as a force, like in classical physics, but rather as a deformation of spacetime. All objects that have mass bend spacetime, and the path of objects traveling through spacetime is affected by its curvature.

An apple falling from a tree is not going towards the ground because there’s a force “pushing” it down, but rather because that’s the shortest path in spacetime (a straight line in bent spacetime).

Apple falling according to classical physics, following a parabolic motion.
Apple falling according to general relativity, following a straight path in distorted spacetime.

The larger the mass of an object, the larger the curvature of spacetime it produces. Time flows “slower” near large masses, and “faster” away from them. Interesting facts: people on a mountain age faster than people at sea level, and it has been calculated that the core of the Earth is 2.5 years younger than the crust.

The time dilation caused by gravity on the surface of the Earth may amount to clock drifts in the order of hundreds of nanoseconds per day, just like special relativity.

Can we actually synchronize clocks?

Given what we have seen about time dilation, and that we may experience time differently, does it even make sense to talk about time synchronization? Can we agree on time if time flows differently for us?

The short answer is yes: the trick is to restrict our view to a closed system, like the surface of our planet. If we place some clocks scattered across the system, they will almost certainly experience different flows of time, due to different velocities, different altitudes, and other time dilation phenomena. We cannot make those clocks agree on how much time has passed since a specific event; what we can do is aggregate all the time measurements from the clocks and average them out. This way we end up with a value that is representative of how much time has passed on the entire system—in other words, we get an “overall time” for the system.

Very often, the system that we consider is not restricted to just the surface of our planet, but involves the Sun, and sometimes the moon as well. In fact, what we call one year is roughly the time it takes for the Earth to complete an orbit around the Sun; one day is roughly the time it takes for the Earth to spin around itself once and face the Sun in the same position again. Including the Sun (or the moon) in our time measurements is complicated: in part this complexity comes from the fact that precise measurements of the Earth’s position are difficult, and in part from the fact that the Earth’s rotation is not regular, not fully predictable, and it’s slowing down. It’s worth noting that climate and geological events affect the Earth’s rotation in a measurable way, and such events are very hard to model accurately.

What is important to understand here is that the word ‘time’ is often used to mean different things. Depending on how we measure it, we can end up with different definitions of time. To avoid ambiguity, I will classify ‘time’ into two big categories:

  • Elapsed time: this is the time measured directly by a clock, without using any extra information about the system the clock is located in or about other clocks.

    We can use elapsed time to measure durations, latencies, frequencies, as well as lengths.

  • Coordinated time: this is the time measured by using a clock, paired with information about the system where it’s located (like position, velocity, and gravity), and/or information from other clocks.

    This notion of time is mostly useful for coordinating events across the system. Some practical examples: scheduling the execution of tasks in the future, checking the expiration of certificates, real-time communication.

Time standards

Over the centuries several time standards have been introduced to measure coordinated time. Nowadays there are three major standards in use: TAI, UTC, and GNSS. Let’s take a brief look at them.

TAI

International Atomic Time (TAI) is based on the weighted average of the elapsed time measured by several atomic clocks spread across the world. The more precise a clock contributing to TAI is, the more it contributes to the weighted average. The fact that the clocks are spread across multiple locations, and the use of an average, mitigates relativistic effects and yields a value that we can think of as the overall time flow experienced by the surface of the Earth.
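Conceptually (the actual TAI algorithm is more elaborate, with weights and frequency corrections that are continuously refined), the combination looks like a weighted mean of the readings $t_i$ of the participating clocks:

$t_{\mathrm{TAI}} \approx \frac{\sum_i w_i\, t_i}{\sum_i w_i}$

where more stable clocks are assigned larger weights $w_i$.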

Note that the calculations for TAI do not include the Earth’s position with respect to the Sun.

Distribution of the laboratories that contribute to International Atomic Time (TAI) all over the world as of 2020. Map taken from the BIPM Annual Report on Time Activities.

UTC

Coordinated Universal Time (UTC) is built upon TAI. UTC, unlike TAI, is periodically adjusted to synchronize it with the Earth’s rotation around itself and the Sun. The goal is to make sure that 24 UTC hours are equivalent to a solar day (within a certain degree of precision). Because, as explained earlier, the Earth’s rotation is irregular, not fully predictable, and slowing down, periodic adjustments have to be made to UTC at irregular intervals.

The adjustments are performed by inserting leap seconds: these are extra seconds that are added to the UTC time to “slow down” the UTC time flow and keep it in sync with Earth’s rotation. On days when a leap second is inserted, UTC clocks go from 23:59:59 to 23:59:60.

A visualization of leap seconds inserted into UTC until the end of 2022. Each orange dot represents a leap second (not to scale). When UTC was started in 1972, it began with an offset of 10 seconds from TAI. As you can see, the insertion of leap seconds is very irregular: some decades have seen many leap seconds, others only a few.

It’s worth noting that the practice of inserting leap seconds is most likely going to be discontinued in the future. The main reason is that leap seconds have been the source of complexity and bugs in computer systems, and the benefit-to-pain ratio of leap seconds is not considered high enough to keep adding them. If leap seconds are discontinued, UTC will become effectively equivalent to TAI, with an offset: UTC will always differ from TAI by a few seconds, but this difference will always be constant, if no more leap seconds are inserted.

GNSS

Global Navigation Satellite System (GNSS) is based on a mix of accurate atomic clocks on the ground and less accurate atomic clocks on artificial satellites orbiting around the Earth. The clocks on the satellites, being less accurate and subject to a variety of relativistic effects, are updated about twice a day from ground stations to correct clock drifts. Nowadays there are several implementations of GNSS around the world, including:

  • GPS (United States);
  • GLONASS (Russia);
  • Galileo (European Union);
  • BeiDou (China).

When GPS was launched, it was synchronized with UTC; however, GPS, unlike UTC, is not adjusted to follow the Earth’s rotation, and because of that, GPS today differs from UTC by 18 seconds (18 leap seconds have been inserted since GPS was launched in 1980). BeiDou also does not implement leap seconds. GPS and BeiDou are therefore compatible with TAI.

Other GNSS systems like Galileo and GLONASS do implement leap seconds and are therefore compatible with UTC.
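
To make the fixed offset concrete, here is a minimal Python sketch (just the constant-offset arithmetic, not real GNSS code) that converts a GPS timestamp to UTC using the 18-second offset valid at the end of 2022; the constant would need updating if further leap seconds were ever inserted:

from datetime import datetime, timedelta

# GPS is ahead of UTC by 18 seconds as of the end of 2022 (assumed constant here;
# the offset would grow if more leap seconds were added to UTC).
GPS_MINUS_UTC = timedelta(seconds=18)

def gps_to_utc(gps_time: datetime) -> datetime:
    # GPS time ticks at the same rate as TAI/UTC but ignores leap seconds,
    # so converting to UTC is just a constant subtraction.
    return gps_time - GPS_MINUS_UTC

print(gps_to_utc(datetime(2023, 1, 23, 12, 0, 18)))  # -> 2023-01-23 12:00:00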

Time synchronization protocols

Dealing with coordinated time is not trivial. Different ways to deal with relativistic effects and Earth’s irregular rotation result in different time standards that are not always immediately compatible with each other. Nonetheless, once we agree on a well-defined time standard, we have a way to ask the question “what time is it?” and receive an accurate answer all around the world (within a certain degree of precision).

Let’s now take a look at how computers on a network can obtain an accurate value for the coordinated time given by a time standard. I will describe two popular protocols: NTP and PTP. The two use similar algorithms but offer different precision: milliseconds (NTP) and nanoseconds (PTP). Both use UDP/IP as the transport protocol.

Network Time Protocol (NTP)

The way time synchronization works with NTP is the following: a computer that wants to synchronize its time periodically queries an NTP server (or multiple servers) to get the current coordinated time. The server that provides the current coordinated time may have obtained the time from an accurate source clock connected to the server (like an atomic clock synchronized with TAI or UTC, or a GNSS receiver), or from a previous synchronization from another NTP server.

To record how “fresh” the coordinated time from an NTP server is (how distant the NTP server is from the source clock), NTP has a concept of stratum: this is a number that indicates the number of ‘hops’ from the accurate clock source:

  • stratum 0 is used to indicate an accurate clock;
  • stratum 1 is a server that is directly connected to a stratum 0 clock;
  • stratum 2 is a server that is synchronized from a stratum 1 server;
  • stratum 3 is a server that is synchronized from a stratum 2 server;
  • and so on...

The maximum stratum allowed is 15. There’s also a special stratum 16: this is not a real stratum, but a special value used by clients to indicate that time synchronization is not happening (most likely because the NTP servers are unreachable).

Examples of different NTP strata in a distributed network. A stratum n server obtains its time from stratum n - 1 servers.

The major problem with synchronizing time over a network is latency. Networks can be composed of multiple links, some of which may be slow or overloaded. Simply requesting the current time from an NTP server without taking latency into account would lead to an imprecise response. Here is how NTP deals with this problem:

  1. The NTP client sends a request via a UDP packet to an NTP server. The packet includes an originate timestamp $t_0$ that indicates the local time of the client when the packet was sent.
  2. The NTP server receives the request and records the receive timestamp $t_1$, which indicates the local time of the server when the request was received.
  3. The NTP server processes the request, prepares a response, and records the transmit timestamp $t_2$, which indicates the local time of the server when the response was sent. The timestamps $t_0$, $t_1$ and $t_2$ are all included in the response.
  4. The NTP client receives the response and records the timestamp $t_3$, which indicates the local time of the client when the response was received.
The NTP synchronization algorithm.

Our goal is now to calculate an estimate for the network latency and processing delay and use that information to calculate, in the most accurate way possible, the offset between the NTP client clock and the NTP server clock.

The difference $t_3 - t_0$ is the duration of the overall exchange. The difference $t_2 - t_1$ is the duration of the NTP server processing delay. If we subtract these two durations, we get the total network latency experienced, also known as round-trip delay:

$$\delta = (t_3 - t_0) - (t_2 - t_1)$$

If we assume that the transmit delay and the receive delay are the same, then $\delta / 2$ is the average network latency (this assumption may not be true in a general network, but that’s the assumption that NTP makes).

Under this assumption, the time $t_0 + \delta/2$ is the time on the client’s clock that corresponds to $t_1$ on the server’s clock. Similarly, $t_3 - \delta/2$ on the client’s clock corresponds to $t_2$ on the server’s clock. These correspondences let us calculate two estimates for the offset between the client’s clock and the server’s clock:

$$\begin{align*} \theta_1 & = t_1 - (t_0 + \delta/2) \\ \theta_2 & = t_2 - (t_3 - \delta/2) \end{align*}$$

We can now calculate the client-server offset $\theta$ as an average of those two estimates:

$$\begin{align*} \theta & = \frac{\theta_1 + \theta_2}2 \\ & = \frac{t_1 - (t_0 + \delta/2) + t_2 - (t_3 - \delta/2)}2 \\ & = \frac{t_1 - t_0 - \delta/2 + t_2 - t_3 + \delta/2}2 \\ & = \frac{(t_1 - t_0) + (t_2 - t_3)}2 \\ \end{align*}$$

Note that the offset $\theta$ may be a positive duration (meaning that the client clock is in the past), a negative duration (meaning that the client clock is in the future) or zero (meaning that the client clock agrees with the server clock, which is unlikely).

After calculating the offset $\theta$, the client can update its local clock by shifting it by $\theta$ and from that point the client will be in sync with the server (within a certain degree of precision).
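
As a concrete illustration, here is a tiny Python sketch of the formulas above (just the arithmetic, not an NTP implementation), applied to four made-up timestamps expressed in seconds:

def ntp_offset_and_delay(t0, t1, t2, t3):
    # Compute the client-server clock offset and round-trip delay
    # from the four NTP timestamps.
    delta = (t3 - t0) - (t2 - t1)        # round-trip network delay
    theta = ((t1 - t0) + (t2 - t3)) / 2  # client-server clock offset
    return theta, delta

# Made-up example: the client clock is about half a second behind the server.
theta, delta = ntp_offset_and_delay(t0=100.000, t1=100.520, t2=100.530, t3=100.050)
print(theta, delta)  # roughly 0.5 and 0.04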

Once the synchronization is done, it is expected that the client’s clock will start drifting away from the server’s clock. This may happen due to relativistic effects and more importantly because often clients do not use high-precision clocks. For this reason, it is important that NTP clients synchronize their time periodically. Usually NTP clients start by synchronizing time every minute or so when they are started, and then progressively slow down until they synchronize time once every half an hour or every hour.

There are some drawbacks with this synchronization method:

  • The request and response delays may not be perfectly symmetric, resulting in inaccuracies in the calculations of the offset $\theta$. Network instabilities, packet retransmissions, change of routes, queuing may all cause unpredictable and inconsistent delays.
  • The timestamps $t_1$ and $t_3$ must be set as soon as possible (as soon as the packets are received), and similarly $t_0$ and $t_2$ must be set as late as possible. Because NTP is implemented at the software level, there may be non-negligible delays in acquiring and recording these timestamps. These delays may be exacerbated if the NTP implementation is not very performant, or if the client or server are under high load.
  • Errors propagate and add up when increasing the number of strata.

For all these reasons, NTP clients do not synchronize time just from a single NTP server, but from multiple ones. NTP clients take into account the round-trip delays, stratum, and jitter (the variance in round-trip delays) to decide the best NTP server to get their time from. Under ideal network conditions, an NTP client will always prefer a server with a low stratum. However, an NTP client may prefer an NTP server with high stratum and more reliable connectivity over an NTP server with low stratum but a very unstable network connection.
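
In practice, on a typical Linux client this is just a matter of listing several time sources in the NTP daemon’s configuration. A minimal, illustrative chrony snippet could look like the following (the example.com server names are placeholders):

# /etc/chrony/chrony.conf (illustrative)
pool 2.pool.ntp.org iburst       # a pool name resolves to several NTP servers
server ntp1.example.com iburst   # plus any additional servers you trust
server ntp2.example.com iburst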

The precision offered by NTP is in the order of a few milliseconds.

Precision Time Protocol (PTP)

PTP is a time synchronization protocol for applications that require more accuracy than the one provided by NTP. The main differences between PTP and NTP are:

  • Precision: NTP offers millisecond precision, while PTP offers nanosecond precision.
  • Time standard: NTP transmits UTC time, while PTP transmits TAI time and the difference between TAI and UTC.
  • Scope: NTP is designed to be used over large networks, including the internet, while PTP is designed to be used in local area networks.
  • Implementation: NTP is mainly software based, while PTP can be implemented both via software and on specialized hardware. The use of specialized hardware considerably reduces delays and jitter introduced by software.
Time Card: an open-source hardware card with a PCIe interface that can be plugged into a computer to make it serve as a PTP master. It can optionally be connected to a GNSS receiver and contains a rubidium (Rb) clock.
  • Hierarchy: NTP can support a complex hierarchy of NTP servers, organized via strata. While PTP does not put a limitation on the number of nodes involved, the hierarchy is usually only composed of master clocks (the source of time information) and slave clocks (the receivers of time information). Sometimes boundary clocks are used to relay time information to network segments that are unreachable by the master clocks.
  • Clock selection: in NTP, clients select the best NTP server to use based on the NTP server clock quality and the network connection quality. In PTP, slaves do not select the best master clock to use. Instead, master clocks perform a selection between themselves using a method called best master clock algorithm. This algorithm takes into account the clock’s quality and input from system administrators, and does not factor network quality at all. The master clock selected by the algorithm is called grandmaster clock.
  • Algorithm: in NTP, clients poll the time information from servers periodically and calculate the clock offset using the algorithm described above (based on the timestamps $t_0$, $t_1$, $t_2$ and $t_3$). With PTP, the algorithm used by slaves to calculate the offset from the grandmaster clock is somewhat similar to the one used in NTP, but the order of operations is different:

    1. the grandmaster periodically broadcasts its time information $T_0$ over the network;
    2. each slave records the time $T_1$ when the broadcasted time was received;
    3. each slave sends a packet to the grandmaster at time $T_2$;
    4. the grandmaster receives the packet at time $T_3$ and sends that value back to the slave.

    The average network delay can be calculated as $\delta = ((T_3 - T_0) - (T_2 - T_1)) / 2$. The clock offset can be calculated as $\theta = ((T_1 - T_0) + (T_2 - T_3)) / 2$.

The PTP time synchronization algorithm.

Summary

  • Synchronizing time across a computer network is not an easy task, and first of all requires agreeing on a definition of ‘time’ and on a time standard.
  • Relativistic effects make it so that time may not flow at the same speed all over the globe, and this means that time has to be measured and aggregated across the planet in order to get a suitable value that can be agreed on.
  • Atomic clocks and GNSS are the clock sources used for most applications nowadays.
  • NTP is a time synchronization protocol that can be used on large and distributed networks like the internet and provides millisecond precision.
  • PTP is a time synchronization protocol for local area networks and provides nanosecond precision.
on January 23, 2023 07:15 PM

January 21, 2023


Are you using Kubuntu 22.10 Kinetic Kudu, our current stable release? Or are you already running our development builds of the upcoming 23.04 (Lunar Lobster)?

We currently have Plasma 5.25.90 (Plasma 5.27 Beta) available in our Beta PPA for Kubuntu 22.10 and for the 23.04 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:



DISCLAIMER: This release contains untested and unstable software. It is highly recommended you do not use this version in a production environment and do not use it as your daily work environment. You risk crashes and loss of data.



5.27 Beta packages and required dependencies are available in our Beta PPA. The PPA should work whether you are currently using our backports PPA or not. If you are prepared to test via the PPA, then add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.
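
For example, removing the beta PPA added above and downgrading its packages would look roughly like this (assuming ppa-purge is not already installed):

sudo apt install ppa-purge && sudo ppa-purge ppa:kubuntu-ppa/beta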

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.26?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel IRC channel on libera.chat if you need clarification of any of the steps to follow.

[1] – #kubuntu-devel on libera.chat
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel


on January 21, 2023 11:24 AM

January 07, 2023

“What are the most important improvements that Debian need to make?” – I decided to see what all the fuss is about and asked ChatGPT that exact question.

Its response:

There are many areas where the Debian project could make improvements, and the most important ones will depend on the priorities and needs of the project and its community. Some potential areas for improvement could include:

  1. Improving the user experience: Debian is known for its stability and security, but it is not always the easiest distribution to use, especially for new users. Improving the user experience and making Debian more accessible to a wider audience could be an important goal.
  2. Keeping up with the rapid pace of change in the software ecosystem: With over 50,000 packages, Debian is a large distribution, and it can be difficult to keep up with the constant stream of updates and new software being released. Developing tools and processes to help manage this complexity and ensure that new packages are tested and integrated smoothly could be an important focus.
  3. Maintaining the stability and security of the distribution: Ensuring that Debian remains a stable and secure platform is a top priority for the project. This includes testing new packages thoroughly before they are included in the distribution, responding quickly to security vulnerabilities, and investing in infrastructure and processes to support these efforts.
  4. Ensuring the long-term sustainability of the project: Debian is a volunteer-driven project, and it can be challenging to attract and retain contributors over the long term. Developing strategies to support and encourage participation and collaboration within the community could be an important focus.
  5. Enhancing the tools and infrastructure that support the project: The Debian project relies on a wide range of tools and infrastructure to support its development and distribution efforts. Improving and modernizing these resources could help to make the project more efficient and effective.

I find that to be a great response, and I’m curious where it came from, so I asked “How did you come to that conclusion?”, it replied:

The list of potential improvements for Debian that I provided is based on my understanding of the challenges and opportunities that software projects like Debian typically face, as well as my understanding of the goals and values of the Debian project. It is not based on any specific information about the current priorities or needs of the Debian project or community.

Well, one thing is for sure, the IRC chat bots of the future are going to be much more complex than the ones I grew up with :-)

on January 07, 2023 12:19 PM

January 05, 2023

This morning I attempted to start work on my desktop PC and couldn’t. The screen is black, it doesn’t want to wake up the displays. I used the old REISUB trick to restart, and it boots, but there’s no output on the display. I did some investigation and this post is mainly to capture my notes and so others can see the problem and perhaps debug and fix it. The setup is an Intel Skull Canyon NUC connected to an external GPU enclosure which contains an NVIDIA GeForce RTX 2060.
on January 05, 2023 09:00 AM

January 02, 2023

What to do about hotlinking

Stuart Langridge

Hotlinking, in the context I want to discuss here, is the act of using a resource on your website by linking to it on someone else’s website. This might be any resource: a script, an image, anything that is referenced by URL.

It’s a bit of an anti-social practice, to be honest. Essentially, you’re offloading the responsibility for the bandwidth of serving that resource to someone else, but it’s your site and your users who get the benefit of that. That’s not all that nice.

Now, if the “other person’s website” is a CDN — that is, a site deliberately set up in order to serve resources to someone else — then that’s different. There are many CDNs, and using resources served from them is not a bad thing. That’s not what I’m talking about. But if you’re including something direct from someone else’s not-a-CDN site, then… what, if anything, should the owner of that site do about it?

I’ve got a fairly popular, small, piece of JavaScript: sorttable.js, which makes an HTML table be sortable by clicking on the headers. It’s existed for a long time now (the very first version was written twenty years ago!) and I get an email about it once a week or so from people looking to customise how it works or ask questions about how to do a thing they want. It’s open source, and I encourage people to use it; it’s deliberately designed to be simple1, because the target audience is really people who aren’t hugely experienced with web development and who can add sortability to their HTML tables with a couple of lines of code.

The instructions for sorttable are pretty clear: download the library, then put it in your web space and include it. However, some sites skip that first step, and instead just link directly to the copy on my website with a <script> element. Having looked at my bandwidth usage recently, this happens quite a lot2, and on some quite high-profile sites. I’m not going to name and shame anyone3, but I’d quite like to encourage people to not do that, if there’s a way to do it. So I’ve been thinking about ways that I might discourage hotlinking the script directly, while doing so in a reasonable and humane fashion. I’m also interested in suggestions: hit me up on Mastodon at @sil@mastodon.social or Twitter4 as @sil.

Move the script to a different URL

This is the obvious thing to do: I move the script and update my page to link to the new location, so anyone coming to my page to get the script will be wholly unaffected and unaware I did it. I do not want to do this, for two big reasons: it’s kicking the can down the road, and it’s unfriendly.

It’s can-kicking because it doesn’t actually solve the problem: if I do nothing else to discourage the practice of hotlinking, then a few years from now I’ll have people hotlinking to the new location and I’ll have to do it again. OK, that’s not exactly a lot of work, but it’s still not a great answer.

But more importantly, it’s unfriendly. If I do that, I’ll be deliberately breaking everyone who’s hotlinking the script. You might think that they deserve it, but it’s not actually them who feel the effect; it’s their users. And their users didn’t do it. One of the big motives behind the web’s general underlying principle of “don’t break the web” is that it’s not reasonable to punish a site’s users for the bad actions of the site’s creators. This applies to browsers, to libraries, to websites, the whole lot. I would like to find a less harsh method than this.

Move the script to a different dynamic URL

That is: do the above, but link to a URL which changes automatically every month or every minute or something. The reason that I don’t want to do this (apart from the unfriendly one from above, which still applies even though this fixes the can-kicking) is that this requires server collusion; I’d need to make my main page be dynamic in some way, so that links to the script also update along with the script name change. This involves faffery with cron jobs, or turning the existing static HTML page into a server-generated page, both of which are annoying. I know how to do this, but it feels like an inelegant solution; this isn’t really a technical problem, it’s a social one, where developers are doing an anti-social thing. Attempting to solve social problems with technical measures is pretty much always a bad idea, and so it is in this case.

Contact the highest-profile site developers about it

I’m leaning in this direction. I’m OK with smaller sites hotlinking (well, I’m not really, but I’m prepared to handwave it; I made the script and made it easy to use exactly to help people, and if a small part of that general donation to the universe includes me providing bandwidth for it, then I can live with that). The issue here is that it’s not always easy to tell who those heavy-bandwidth-consuming sites are. It relies on the referrer being provided, which it isn’t always. It’s also a bit more work on my part, because I would want to send an email saying “hey, Site X developers, you’re hotlinking my script as you can see on page sitex.example.com/sometable.html and it would be nice if you didn’t do that”, but I have no good way of identifying those pages; the document referrer isn’t always that specific. If I send an email saying “you’re hotlinking my script somewhere, who knows where, please don’t do that” then the site developers are quite likely to put this request at the very bottom of their list, and I don’t blame them.

Move the script and maliciously break the old one

This is: I move the script somewhere else and update my links, and then I change the previous URL to be the same script but it does something like barf a complaint into the console log, or (in extreme cases based on suggestions I’ve had) pops up an alert box or does something equally obnoxious. Obviously, I don’t wanna do this.

Legal-ish things

That is: contact the highest profile users, but instead of being conciliatory, be threatening. “You’re hotlinking this, stop doing it, or pay the Hotlink Licence Fee which is one cent per user per day” or similar. I think the people who suggest this sort of thing (and the previous malicious approach) must have had another website do something terrible to them in a previous life or something and now are out for revenge. I liked John Wick as much as the next poorly-socialised revenge-fantasy tech nerd, but he’s not a good model for collaborative software development, y’know?

Put the page (or whole site) behind a CDN

I could put the site behind Cloudflare (or perhaps a better, less troubling CDN) and then not worry about it; it’s not my bandwidth then, it’s theirs, and they’re fine with it. This used to be the case, but recently I moved web hosts5 and stepped away from Cloudflare in so doing. While this would work… it feels like giving up, a bit. I’m not actually solving the problem, I’m just giving it to someone else who is OK with it.

Live with it

This isn’t overrunning my bandwidth allocation or anything. I’m not actually affected by this. My complaint isn’t important; it’s more a sort of distaste for the process. I’d like to make this better, rather than ignoring it, even if ignoring it doesn’t mean much, as long as I’m not put to more inconvenience by fixing it. We want things to be better, after all, not simply tolerable.

So… what do you think, gentle reader? What would you do about it? Answers on a postcard.

  1. and will stay simple; I’d rather sorttable were simple and relatively bulletproof than comprehensive and complicated. This also explains why it’s not written in very “modern” JS style; the best assurance I have that it works in old browsers that are hard to test in now is that it DID work in them and I haven’t changed it much
  2. in the last two weeks I’ve had about 200,000 hits on sorttable.js from sites that hotlink it, which ain’t nothin’
  3. yet, at least, so don’t ask
  4. if you must
  5. to the excellent Mythic Beasts, who are way better than the previous hosts
on January 02, 2023 06:10 PM

From time to time, I hear people saying that Elliptic Curve Cryptography (ECC) cannot be used to directly encrypt data, and you can only do key agreement and digital signatures with it. This is a common misconception, but it’s not actually true: you can indeed use elliptic curve keys to encrypt arbitrary data. And I’m not talking about hybrid-encryption schemes (like ECIES or HPKE): I’m talking about pure elliptic curve encryption, and I’m going to show an example of it in this article. It’s true however that pure elliptic curve encryption is not widely used or standardized because, as I will explain at the end of the article, key agreement is more convenient for most applications.

Quick recap on Elliptic Curve Cryptography

I wrote an in-depth article about elliptic curve cryptography in the past on this blog, and here is a quick recap: points on an elliptic curve form an interesting algebraic structure: a cyclic group. This group lets us do some algebra with the points of the elliptic curve: if we have two points $A$ and $B$, we can add them ($A + B$) or subtract them ($A - B$). We can also multiply a point by an integer, which is the same as doing repeated addition ($n A = A + A + \cdots + A$, $n$ times).

We know some efficient algorithms for doing multiplication, but the reverse of multiplication is believed to be a “hard” problem for certain elliptic curves, in the sense that we know efficient methods for computing $B = n A$ given $n$ and $A$, but we do not know very efficient methods to figure out $n$ given $A$ and $B$. This problem of reversing a multiplication is known as Elliptic Curve Discrete Logarithm Problem (ECDLP).

Elliptic Curve Cryptography is based on multiplication of elliptic curve points by integers and its security is given mainly by the difficulty of solving the ECDLP.

In order to use Elliptic Curve Cryptography, we first have to generate a private-public key pair:

  • the private key is a random integer $s$;
  • the public key is the result of multiplying the integer $s$ with the generator $G$ of the elliptic curve group: $P = s G$.

Let’s now see a method to use Elliptic Curve Cryptography to encrypt arbitrary data, so that we can demystify the common belief that elliptic curves cannot be used to encrypt.

Elliptic Curve ElGamal

One method to encrypt data with elliptic curve keys is ElGamal. This is not the only method, of course, but it’s the one that I chose because it’s well known and simple enough. ElGamal is a cryptosystem that takes the name from its author and works on any cyclic group, not just elliptic curve groups.

If we want to encrypt a message using the public key $P$ via ElGamal, we can do the following:

  1. map the message to a point $M$ on the elliptic curve
  2. generate a random integer $t$
  3. compute $C_1 = t G$
  4. compute $C_2 = t P + M$
  5. return the tuple $(C_1, C_2)$

To decrypt an encrypted tuple $(C_1, C_2)$ using the private key $s$, we can do the following:

  1. compute $M = C_2 - s C_1$
  2. map the point $M$ back to a message

The scheme works because: $$\begin{align*} s C_1 & = s (t G) \\ & = t (s G) \\ & = t P \end{align*}$$ therefore: $$\begin{align*} C_2 - s C_1 & = (t P + M) - (t P) \\ & = M \end{align*}$$

There’s however a big problem with this scheme: how do we map a message to a point, and vice versa? How can we perform step 1 of the encryption algorithm, or step 2 of the decryption algorithm?

Mapping a message to a point

A message can be an arbitrary byte string. An elliptic curve point is, generally speaking, a pair of integers $(x, y)$ belonging to the elliptic curve field. How can we transform a byte string into a pair of field integers?

Well, as far as computers are concerned, both byte strings and integers have the same nature: they are just sequences of bits, so there’s a natural map between the two. We could take the message, split it into two parts, and interpret the first part as an integer $x$ and the second part as an integer $y$. This would work for obtaining two arbitrary integers, but there’s a problem: the coordinates $x$ and $y$ of an elliptic curve point are related by a mathematical equation (the curve equation), so we cannot choose two arbitrary $x$ and $y$ and expect them to identify a valid point on the curve. In fact, for curves in Weierstrass form, given $x$ there are at most two possible choices for $y$, so it’s very unlikely that this splitting method will yield a valid point.

Let’s change our strategy a little bit: instead of transforming the message to a pair $(x, y)$, we transform it to $x$ and then we compute a valid $y$ from the curve equation. This is a much better method, but there’s still a problem: generally speaking, not every $x$ will have a corresponding $y$. Not every $x$ can satisfy the curve equation.

Luckily, most of the popular elliptic curves used in cryptography have an interesting property: about half of the possible field integers are valid $x$-coordinates. To see this, let’s take a look at an example: the curve secp384r1. This is a Weierstrass curve that has the following order:

0xffffffffffffffffffffffffffffffffffffffffffffffffc7634d81f4372ddf581a0db248b0a77aecec196accc52973

I remind you that the order is the number of valid points that belong to the elliptic curve group. Because this is a Weierstrass curve, for each $x$ there are 2 possible points, so the number of valid $x$-coordinates is order / 2. Given an arbitrary 384-bit integer, what are the chances that this is a valid $x$-coordinate? The answer is (order / 2) / (2 ** 384) which is approximately 0.5 or 50%.

OK, but how does this help with our goal: mapping an arbitrary message to a valid $x$-coordinate? It’s simple: we can append a random byte (or multiple bytes) to the message. We call this extra byte (or bytes): padding. If the resulting padded message does not translate to a valid $x$-coordinate, we choose another random padding and try again, until we find one that works. Given that there’s 50% chance of finding a valid $x$ coordinate, this method will find a valid $x$-coordinate very quickly: on average, this will happen on the first or the second try.

Example of how to use padding to obtain a valid elliptic curve point from an arbitrary message.

This operation can be easily reversed: if you have a point $(x, y)$, in order to recover the message that generated it, just take the $x$ coordinate and remove the padding. That’s it!

It’s worth noting that there are some standard curves where all the possible byte strings (of the proper size) can be translated to elliptic curve points, without any random padding needed. For example, with Curve25519, every 32-byte string is a valid elliptic curve point. Another curve like that is Curve448.

It’s also important to note that the padding does not need to be truly random. In the image above I show a padding that is simply a constantly increasing sequence of numbers: 1, 2, 3, ... That’s enough to find a valid point.

Putting everything together

We have seen how to map a message to a point and how ElGamal works, so now we have all the elements to write some working code. I’m choosing Python and the ECPy package to work with elliptic curves, which you can install with pip install ecpy.

import random
from ecpy.curves import Curve, Point


def message_to_point(curve: Curve, message: bytes) -> Point:
    # Number of bytes to represent a coordinate of a point
    coordinate_size = curve.size // 8
    # Minimum number of bytes for the padding. We need at least 1 byte so that
    # we can try different values and find a valid point. We also add an extra
    # byte as a delimiter between the message and the padding (see below)
    min_padding_size = 2
    # Maximum number of bytes that we can encode
    max_message_size = coordinate_size - min_padding_size

    if len(message) > max_message_size:
        raise ValueError('Message too long')

    # Add a padding long enough to ensure that the resulting padded message has
    # the same size as a point coordinate. Initially the padding is all 0
    padding_size = coordinate_size - len(message)
    padded_message = bytearray(message) + b'\0' * padding_size

    # Put a delimiter between the message and the padding, so that we can
    # properly remove the padding at decrypt time
    padded_message[len(message)] = 0xff

    while True:
        # Convert the padded message to an integer, which may or may not be a
        # valid x-coordinate
        x = int.from_bytes(padded_message, 'little')
        # Calculate the corresponding y-coordinate (if it exists)
        y = curve.y_recover(x)
        if y is None:
            # x was not a valid coordinate; increment the padding and try again
            padded_message[-1] += 1
        else:
            # x was a valid coordinate; return the point (x, y)
            return Point(x, y, curve)


def encrypt(public_key: Point, message: bytes) -> bytes:
    curve = public_key.curve
    # Map the message to an elliptic curve point
    message_point = message_to_point(curve, message)
    # Generate a random number
    seed = random.randrange(0, curve.field)
    # Calculate c1 and c2 according to the ElGamal algorithm
    c1 = seed * curve.generator
    c2 = seed * public_key + message_point
    # Encode c1 and c2 and return them
    return bytes(curve.encode_point(c1) + curve.encode_point(c2))


def point_to_message(point: Point) -> bytes:
    # Number of bytes to represent a coordinate of a point
    coordinate_size = point.curve.size // 8
    # Convert the x-coordinate of the point to a byte string
    padded_message = point.x.to_bytes(coordinate_size, 'little')
    # Find the padding delimiter
    message_size = padded_message.rfind(0xff)
    # Remove the padding and return the resulting message
    message = padded_message[:message_size]
    return message


def decrypt(curve: Curve, secret_key: int, ciphertext: bytes) -> bytes:
    # Decode c1 and c2 and convert them to elliptic curve points
    c1_bytes = ciphertext[:len(ciphertext) // 2]
    c2_bytes = ciphertext[len(ciphertext) // 2:]
    c1 = curve.decode_point(c1_bytes)
    c2 = curve.decode_point(c2_bytes)

    # Calculate the message point according to the ElGamal algorithm
    message_point = c2 - secret_key * c1
    # Convert the message point to a message and return it
    return point_to_message(message_point)

And here is a usage example:

curve = Curve.get_curve('secp384r1')

secret_key = 0x123456789abcdef
public_key = secret_key * curve.generator

message = 'hello'
print('  Message:', message)

encrypted = encrypt(public_key, message.encode('utf-8'))
print('Encrypted:', encrypted.hex())

decrypted = decrypt(curve, secret_key, encrypted).decode('utf-8')
print('Decrypted:', decrypted)

Which produces the following output:

  Message: hello
Encrypted: 04fa333c6a03994c5bce4627de4447c5cdd358415f8db2745b67836932a0d5e81f19...
Decrypted: hello

Some considerations on padding and security

It’s important to note that padding is a very delicate problem in cryptography. There exist many padding schemes, and not all of them are secure. The padding scheme that I wrote in this article was just for demonstration purposes and may not be the most secure, so don’t use it in production systems. Take a look at OAEP if you’re looking for a modern and secure padding scheme.

Another thing to note is that the decryption method that I wrote does not check if the decryption was successful. If you try to decrypt an invalid ciphertext, or use the wrong key, you won’t get an error but instead a random result, which is not desirable. A good padding scheme like OAEP will instead throw an error if decryption was unsuccessful.

Cost of elliptic curve encryption

With Elliptic Curve ElGamal, if we are using an n-bit elliptic curve, we can encrypt messages that are at most n-bit long (actually less than that, especially if we’re using padding), and the output is at least 2n-bit long (if the resulting points $C_1$ and $C_2$ are encoded using point compression). This means that encryption using Elliptic Curve ElGamal doubles the size of the data that we want to encrypt. It also requires a fair amount of compute resources, because it involves a random number generation and 2 point multiplications.

In short, Elliptic Curve ElGamal is expensive both in terms of space and in terms of time and compute power, and this makes it unattractive in applications like TLS or general purpose encryption.

So what can we use Elliptic Curve ElGamal for? We can use it to encrypt symmetric keys, such as AES keys or ChaCha20 keys, and then use these symmetric keys to encrypt our arbitrary data. Symmetric keys are relatively short (ranging from 128 to 256 bits nowadays), so they can be encrypted with one round of Elliptic Curve ElGamal with most curves. It’s worth noting that this is the same approach that we use with RSA encryption: we don’t use RSA to encrypt data directly (not anymore), but rather we use RSA to encrypt symmetric keys which are later used for encrypting data.

These are the reasons why schemes like Elliptic Curve ElGamal, or other methods of encryption with elliptic curves, are not used in practice:

  • elliptic curve encryption is more expensive than hybrid encryption;
  • hybrid encryption scales better and is more performant;
  • elliptic curve key exchange is simpler and has fewer pitfalls than encryption.

In conclusion, no practical use case benefits from elliptic curve encryption, and that’s why we don’t use it and prefer elliptic curve key exchange instead. However, the idea that elliptic curves cannot be used for encryption is a myth, and I hope this article will help clarify that confusion.

on January 02, 2023 06:30 AM

December 30, 2022

Full Circle Magazine #188

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Blender and Latex
* Graphics : Inkscape
* Everyday Ubuntu
* Micro This Micro That
* Review : Kubuntu 22.10
* Review : Ubuntu Cinnamon 22.04
* Ubports Touch : OTA-24
* Tabletop Ubuntu
* Ubuntu Games : Dwarf Fortress (Steam Edition)
plus: News, My Story, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://legacy.fullcirclemagazine.org/issue-188/

on December 30, 2022 05:33 PM

Here’s my (thirty-ninth) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 48th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here are the things I did this month:

  • Some DebConf work.
  • Sponsoring stuff for non-DDs.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 23rd month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my thirty-ninth month as a Debian LTS and thirtieth month as a Debian ELTS paid contributor.
I worked for 51.50 hours for LTS and 22.50 hours for ELTS.

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 752-1, fixing CVE-2021-41182, CVE-2021-41183, CVE-2021-41184, and CVE-2022-31160, for jqueryui.
    For Debian 9 stretch, these problems have been fixed in version 1.12.1+dfsg-4+deb9u1.
  • Helped facilitate Erlang’s and RabbitMQ’s update; cf: ELA 754-1.
  • Looked through python3.4’s FTBFS on armhf. Even diff’d with Ubuntu. No luck. Inspected the traces, doesn’t give a lot of hint either. Will continue to look later next month or so but it’s a rabbit hole. (:
  • Inspected joblib’s security update upon Helmut’s investigation and see what went wrong there.
  • Started to look at other set of packages: dropbear, tiff, et al.

Other (E)LTS Work:


Until next time.
:wq for today.

on December 30, 2022 05:41 AM

December 15, 2022

My blog at popey.com/blog has gone through a number of iterations since I started it back in the 1990’s. First it was created using Microsoft FrontPage, and hosted on some free web space at CiX, and has morphed into a self-hosted WordPress site, Nikola static site, and now a Hugo static site. At various times I’ve had comment systems available underneath posts. Over the years I’ve used Microsoft FrontPage extensions (which, amusingly used to crash the entire Web Host at CiX back in the day), WordPress comments, and Disqus to facilitate visitor interaction.
on December 15, 2022 04:00 PM

December 14, 2022

At KDE we have multiple levels of quality assurance, ranging from various degrees of humans testing features to fully automated testing. Indeed automated testing is incredibly important for the continued quality of our software. A big cornerstone of our testing strategy are so-called unit tests: they test a specific piece of our software for its behavior in isolation. But for many aspects of our software we need a much higher-level view: testing pieces of Plasma’s application launcher in isolation is all well and good, but that won’t tell us if the entire UI can be easily navigated using the keyboard. For this type of test we require a different testing approach altogether. A couple of months ago I set out to create a testing framework for this use case and I’m glad to say that it has matured enough to be used for writing tests. I’d like to walk you through the technical building blocks and a simple example.

Let us start off by looking at the architecture at large. So… there’s Selenium, which is an incredibly popular, albeit web-oriented, testing framework. Its main advantages for us are its popularity and that it sports a server-client split. This means we can leverage the existing client tooling available for Selenium without having to write anything ourselves; we only need to grow a server. The server component, called a WebDriver, implements the actual interaction with UI elements and is generic enough to also apply to desktop applications. Indeed so thought others as well: there already exists Appium – it extends Selenium with more app-specific features and behaviors. Something for us to build upon. The clients meanwhile are completely separate and talk to the WebDriver over a well-defined JSON REST protocol, meaning we can reuse the existing clients without having to write anything ourselves. They are available in a multitude of programming languages, and who knows, maybe we’ll eventually get one for writing Selenium tests in QML 😉

That of course doesn’t explain how GUI testing can work with this on Linux. Enter: AT-SPI. AT-SPI is an accessibility API and pretty much the standard accessibility system for use on Linux. Obviously its primary use is assistive technologies, like the screen reader Orca, but to do its job it essentially offers a toolkit-independent way of introspecting and interacting with GUI applications. This then gives us a way to implement a WebDriver without caring about the toolkit or app specifics. As long as the app supports AT-SPI, which all Qt apps do implicitly, we can test it.

Since all the client tooling is independent of the server all we needed to get GUI testing going was a WebDriver that talks to AT-SPI.

That is what I set out to write and I’m happy to announce that we now have an AT-SPI based WebDriver, and the first tests are popping into existence already. There is also lovely documentation to hold onto.

So, without further ado. Let us write a simple test. Since the documentation already writes one in Python I’ll use Ruby this time around so we have some examples of different languages. A simple candidate is KInfoCenter. We can test its search functionality with a couple of lines of code.

First we need to install selenium-webdriver-at-spi: clone it, cmake build it, and cmake install it. You’ll also need to install the relevant client libraries. For ruby that’s simply running gem install appium_lib.

Then we can start with writing our test. We will need some boilerplate setup logic. This is more or less the same for every test. For more details on the driver setup you may also check the wiki page.

  def setup
    @appium_driver = Appium::Driver.new(
      {
        'caps' => { app: 'org.kde.kinfocenter.desktop' },
        'appium_lib' => {
          server_url: 'http://127.0.0.1:4723',
          wait_timeout: 10,
          wait_interval: 0.5
        }
      }, true
    )
    @driver = @appium_driver.start_driver
  end

The driver will take care of starting the correct application and make sure that it is actually running correctly. Next we’ll write the actual test. Let’s test the search. The first order of business is using a tool called Accerciser to inspect the AT-SPI presentation of the application. For more information on how to use this tool please refer to the wiki. Using Accerciser I’ve located the search field and learned that it is called ‘Search’. So, let’s locate it and activate it, search for the CPU module:

  def test_search
    search = driver.find_element(:name, 'Search')
    search.click
    search.send_keys('cpu')

Next let us find the CPU list item and activate it:

    cpu = driver.find_element(:class_name, '[list item | CPU]')
    assert(cpu.displayed?)
    cpu.click

And finally let’s assert that the page was actually activated:

    cpu_tab = driver.find_element(:class_name, '[page tab | CPU]')
    assert(cpu_tab.displayed?)

To run the complete test we can use the run wrapper: selenium-webdriver-at-spi-run ./kinfocentertest.rb (mind that it needs to be +x). If all has gone well we should get a successful test.

Finished in 1.345276s, 0.7433 runs/s, 1.4867 assertions/s.

1 runs, 2 assertions, 0 failures, 0 errors, 0 skips
I, [2022-12-14T13:13:53.508516 #154338]  INFO -- : tests done
I, [2022-12-14T13:13:53.508583 #154338]  INFO -- : run.rb exiting true

This should get you started with writing a test for your application! I’ll gladly help and review your forthcoming tests.
For more detailed documentation check out the writing-tests wiki page as well as the appium command reference.

Of course the work is not done. selenium-webdriver-at-spi is very much still a work in progress and I’d be glad for others to help add features as they become needed. The gitlab project is the place for that. ❤

The complete code of the example above:

#!/usr/bin/env ruby
# frozen_string_literal: true

# SPDX-License-Identifier: GPL-2.0-only OR GPL-3.0-only OR LicenseRef-KDE-Accepted-GPL
# SPDX-FileCopyrightText: 2022 Harald Sitter <sitter@kde.org>

require 'appium_lib'
require 'minitest/autorun'

class TestKInfoCenter < Minitest::Test
  attr_reader :driver

  def setup
    @appium_driver = Appium::Driver.new(
      {
        'caps' => { app: 'org.kde.kinfocenter.desktop' },
        'appium_lib' => {
          server_url: 'http://127.0.0.1:4723',
          wait_timeout: 10,
          wait_interval: 0.5
        }
      }, true
    )
    @driver = @appium_driver.start_driver
  end

  def teardown
    driver.quit
  end

  def test_search
    search = driver.find_element(:name, 'Search')
    search.click
    search.send_keys('cpu')

    cpu = driver.find_element(:class_name, '[list item | CPU]')
    assert(cpu.displayed?)
    cpu.click

    cpu_tab = driver.find_element(:class_name, '[page tab | CPU]')
    assert(cpu_tab.displayed?)
  end
end
on December 14, 2022 03:59 PM

December 12, 2022

Plasma Analyzer

Harald Sitter

It’s a Plasma widget that visualizes what’s going on on your system, music-wise that is. I’ve started this project years ago but only recently found the motivation to get it to a somewhat acceptable state. It’s pretty amazing to have bars flying across the screen to Daft Punk’s `Touch`.

https://store.kde.org/p/1953779

on December 12, 2022 03:26 PM

https://www.mixcloud.com/dholbach/cargo-cult-crew-berlin-six-beat-under/

After many years of silence, Cargo Cult Crew came back together for the occasion in everybody’s favourite basement in Berlin. It was such a fun evening from beginning to the end - thanks to everyone who made it happen and thanks Olivia FX for the great live show!

Anyway, here goes my set from 2am. I hadn’t played a Drum’n’Bass set in ages, but since I had lovers of broken beats in front of me, there was no holding back. I had the pleasure of being supported by MC Massiv La Gaza, which unfortunately wasn’t captured on this recording.

  1. DJ Fresh - Gold Dust (Bou & Used Remix)
  2. TC - Do You Rock
  3. London Elektricity - U Gotta B Crazy (Enei Remix)
  4. Mandidextrous & BiSH - Techno On My Mind
  5. Major Lazer - Get Free (Andy C Remix)
  6. Wilkinson - Scream It
  7. Jammez & 4K - Falling Down (Phibes Remix)
  8. Sub Focus, Chase & Status - Flashing Lights (S.P.Y Remix)
  9. Ed Solo & Deekline - No No No (Phibes Remix)
  10. Mutated Forms - Wastegash (The Upbeats Remix)
  11. Noisia - Diplodocus (Kill The Noise Remix)
  12. Andy C - Workout
  13. Phibes - Sensi
  14. The Prototypes - Pale Blue Dot
  15. Zinc featuring Ms Dynamite - Wile Out (DJ Marky Remix)
  16. DJ Marky & XRS - Butterfly (Craggz & Parallel Forces Remix)
  17. Deekline/Ed Solo - I Need A Dollar (Kalum remix)
  18. Camo & Krooked - All Night
on December 12, 2022 06:00 AM

December 10, 2022

(quoted from my other blog, since a new OS might be interesting for many and this is published in separate planets)

ALP - The Adaptable Linux Platform – is a new operating system from SUSE to run containerized and virtualized workloads. It is in early prototype phase, but the development is done completely openly so it’s easy to jump in to try it.

For this trying out, I used the latest encrypted build – as of the writing, 22.1 – from ALP images. I imported it in virt-manager as a Generic Linux 2022 image, using UEFI instead of BIOS, added a TPM device (which I’m interested in otherwise) and referring to an Ignition JSON file in the XML config in virt-manager.

The Ignition part is pretty much fully thanks to Paolo Stivanin who studied the secrets of it before me. But here it goes - and this is required for password login in Cockpit to work in addition to SSH key based login to the VM from host - first, create config.ign file:

{
  "ignition": { "version": "3.3.0" },
  "passwd": {
    "users": [
      {
        "name": "root",
        "passwordHash": "YOURHASH",
        "sshAuthorizedKeys": [
          "ssh-... YOURKEY"
        ]
      }
    ]
  },
  "systemd": {
    "units": [{
      "name": "sshd.service",
      "enabled": true
    }]
  },
  "storage": {
    "files": [
      {
        "overwrite": true,
        "path": "/etc/ssh/sshd_config.d/20-enable-passwords.conf",
        "contents": {
          "source": "data:,PasswordAuthentication%20yes%0APermitRootLogin%20yes%0A"
        },
        "mode": 420
      }
    ]
  }
}

…where password SHA512 hash can be obtained using openssl passwd -6 and the ssh key is your public ssh key.

That file is put to eg /tmp and referred in the virt-manager’s XML like follows:

<sysinfo type="fwcfg">
  <entry name="opt/com.coreos/config" file="/tmp/config.ign"/>
</sysinfo>

Now we can boot up the VM and ssh in - or you could log in directly too but it’s easier to copy-paste commands when using ssh.

Inside the VM, we can follow the ALP documentation to install and start Cockpit:

podman container runlabel install registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
podman container runlabel --name cockpit-ws run registry.opensuse.org/suse/alp/workloads/tumbleweed_containerfiles/suse/alp/workloads/cockpit-ws:latest
systemctl enable --now cockpit.service

Check the VM’s IP address with ip a, and open IP:9090 in your host’s browser:

Cockpit login screen

Login with root / your password and you shall get the front page:

Cockpit front page

…and many other pages where you can manage your ALP deployment via browser:

Cockpit podman page

All in all, ALP is in early phases but I’m really happy there’s up-to-date documentation provided and people can start experimenting with it whenever they want. The images from the linked directory should be fairly good, and test automation with openQA has been started as well.

You can try out the other example workloads that are available just as well.

on December 10, 2022 01:07 PM

December 03, 2022

Xubuntu Development Update December 2022

November was on track to be a quiet month in terms of Xubuntu development. Once daily builds became available toward the end of the month, we were finally able to start publishing some changes. With a handful of updates and the introduction of PipeWire and Flatpak, November became a much more exciting month.

23.04 November New Additions

PipeWire replaces PulseAudio

During the 22.10 release cycle, Ubuntu and several flavors replaced PulseAudio with the new PipeWire multimedia system. PipeWire has been reported to resolve many of the issues users had with PulseAudio, including high CPU usage and Bluetooth connection issues. Xubuntu 23.04 adds PipeWire, WirePlumber, and the Bluetooth connection libraries.

PipeWire arrives in Xubuntu. Most users won't notice a difference since it's a fully compatible, drop-in replacement. Screenshot from Lofi Girl on YouTube.
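If you want to check which sound server your own installation is using, one quick way – a hedged example, assuming the pulseaudio-utils client tools are installed; the exact version string will vary – is to ask the PulseAudio-compatible tooling, which reports PipeWire when it is in use:

pactl info | grep "Server Name"
# on a PipeWire system this prints something like: Server Name: PulseAudio (on PipeWire 0.3.x)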

Flatpak makes it easier for users to install apps

With the addition of the flatpak and gnome-software-plugin-flatpak packages, Xubuntu now supports the popular Flatpak packaging format. You can now easily install applications from Flathub with just a couple of clicks. In fact, any .flatpakref or .flatpakrepo file is natively supported thanks to GNOME Software.
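If you prefer the command line, the same Flathub applications can also be installed with the flatpak tool. A minimal sketch – the application ID org.gnome.Calculator is only an example, and the Flathub remote may already be configured for you:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub org.gnome.Calculator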

23.04 November Package Updates

In November, Xubuntu kicked off development with updates to our defaults and the latest development point releases from the Xfce 4.17 development series.

xubuntu-default-settings (23.04.0)

The latest xubuntu-default-settings package includes a handful of accessibility and usability improvements. The tooltip for the Whisker Menu now supports translations instead of always showing the English "Applications Menu". .deb packages will now correctly open in GNOME Software instead of the Archive Manager. System tray icons will now use automatic sizing, making them consistent with the other panel plugins. And finally, the Xfce Terminal font size has been increased for better consistency and contrast.

With an increased terminal font size, consistently-sized system tray icons, and translation support for the Whisker Menu, Xubuntu 23.04 becomes more accessible to all.

exo (4.17.3) - Xfce Application Library

Exo 4.17.3 (from 4.17.2) fixes horizontal smooth scrolling in Thunar's compact view (#86) and makes text copied from the desktop item editor available after the dialog is closed (#93).

garcon (4.17.2) - Xfce Menu Library

Garcon 4.17.2 (from 4.17.1) fixes menu icon blurriness when using UI scaling (#33) and properly escapes % characters in URL desktop files (#31).

libxfce4ui (4.17.8) - Xfce UI Library

libxfce4ui 4.17.8 (from 4.17.6) expands the About Xfce dialog to include the kernel and GTK versions on the system tab (!76). It also improves the display of both client-side (CSD) and server-side (SSD) decorations (#26, !72, !78).

The About Xfce dialog is now more useful with the addition of the GTK and Kernel versions. When reporting issues, you can include this information for a better bug report.

thunar (4.17.11) - Xfce File Manager

Thunar 4.17.11 (from 4.17.9) adds undo and redo functionality for several file operations (#819). It adds a button to the toolbar to toggle Split View (#889, not yet enabled on Xubuntu). It also introduces a new Image preview sidepane (#357).

Thunar 4.17.11. The Split View button has been enabled and toggled on. The Image preview sidepane is also enabled to demonstrate the new feature.

xfce4-appfinder (4.17.1) - Xfce Application Finder

Xfce Application Finder 4.17.1 (from 4.17.0) switches to symbolic icons, adds a new option to hide window decorations (#51), and fixes a bug where duplicate entries were created (#58).

xfce4-panel (4.17.4) - Xfce Panel

Xfce Panel 4.17.4 (from 4.17.3) enables setting a custom command to run when the panel plugin is clicked (#560). It improves UI scaling support for icons and status notifier applets. Theme colors are now used for symbolic icons (#635).

The Xfce Panel Clock plugin now allows you to set a custom command when left-clicking the clock. This can be used for good or evil.

xfce4-power-manager (4.17.0) - Xfce Power Manager

Xfce Power Manager 4.17.0 (from 4.16.0) now inhibits DPMS when it receives a power inhibit request. It also adjusts the default timeouts for inactivity to 5 minutes and sleep to 6 hours.

xfdesktop4 (4.17.1) - Xfce Desktop

Xfdesktop 4.17.1 (from 4.17.0) features UI scaling improvements to the window list and icons. Context menus, file properties, and drag-and-drop support are improved when multiple icons are selected. Keyboard typeahead functionality is also upgraded with this release.

xfwm4 (4.17.1) - Xfce Window Manager & Compositor

Xfwm 4.17.1 (from 4.16.1) includes an abundance of updates and fixes for the compositor, window positioning and placement, and focus. Additionally, symbolic icons are now used more widely.

Coming in December

Xfce 4.18 is just around the corner. The second pre-release (pre2) was just published and includes additional updates to those listed above. Once the final release is published, look forward to it landing in Xubuntu 23.04 shortly after.

It's been interesting to see the development of Xfce 4.18 from the outside. These days, I have far less time to be able to support the project, and have chosen to instead focus on Xubuntu. From what I can see though, it looks like Xfce is in good hands and development is the healthiest it's ever been. Keep up the great work, Xfce devs!

on December 03, 2022 02:00 PM

Full Circle Weekly News #289

Full Circle Magazine


Shufflecake:
https://research.kudelskisecurity.com/2022/11/10/introducing-shufflecake-plausible-deniability-for-multiple-hidden-filesystems-on-linux/

Thunderbird will have a redesigned calendar:
https://blog.thunderbird.net/2022/11/thunderbird-supernova-preview-the-new-calendar-design/

Release of EasyOS 4.5:
https://bkhome.org/news/202211/easyos-dunfell-series-64-bit-version-45-released.html

DuckDB 0.6.0, an SQLite-style option for analytical queries:
https://duckdb.org/2022/11/14/announcing-duckdb-060.html

Release of Fedora Linux 37:
https://fedoramagazine.org/announcing-fedora-37/

Release of EuroLinux 8.7, compatible with RHEL:
https://en.euro-linux.com/blog/eurolinux-8-7-released/

Canonical has published Ubuntu builds optimized for Intel's IoT platforms:
https://ubuntu.com/blog/canonical-announces-new-enterprise-grade-ubuntu-images-designed-for-intel-iot-platforms

Release of BackBox Linux 8:
https://blog.backbox.org/2022/11/15/backbox-linux-8-released/

Release of Rocky Linux 8.7:
https://rockylinux.org/news/rocky-linux-8-7-ga-release/

The openSUSE Leap Micro 5.3 distribution is available:
https://news.opensuse.org/2022/11/17/new-leapmicro-version-now-available/

Stable release of MariaDB 10.10:
https://mariadb.org/mariadb-10-11-1-rc-and-10-10-2-ga-now-available/

Release of Red Hat Enterprise Linux 9.1:
https://access.redhat.com/announcements/6984895

AlmaLinux 9.1 has been published:
https://almalinux.org/blog/almalinux-91-now-available/

Release of Cinnamon 5.6:
http://cinnamon.linuxmint.com/

Third release candidate of the free RPG FreedroidRPG:
https://www.freedroid.org/2022/11/14/switch-to-codeberg-rc3/

Stockfish and ChessBase settle proceedings related to violation of the GPL license:
https://stockfishchess.org/blog/2022/chessbase-stockfish-agreement/



Credits:
Full Circle Magazine
@fullcirclemag
Host: bardmoss@pm.me, @bardictriad
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/
on December 03, 2022 01:55 PM

November 30, 2022

Here’s my (thirty-eighth) monthly but brief update about the activities I’ve done in the F/L/OSS world.

Debian

This was my 47th month of actively contributing to Debian. I became a DM in late March 2019 and a DD on Christmas ‘19! \o/

There’s a bunch of things I do, both, technical and non-technical. Here are the things I did this month:

Debian Uploads

Other $things:

  • Sponsoring stuff for non-DDs.
  • Mentoring for newcomers.
  • Moderation of -project mailing list.

Ubuntu

This was my 22nd month of actively contributing to Ubuntu. Now that I joined Canonical to work on Ubuntu full-time, there’s a bunch of things I do! \o/

I mostly worked on different things, I guess.

I was too lazy to maintain a list of things I worked on so there’s no concrete list atm. Maybe I’ll get back to this section later or will start to list stuff from the fall, as I was doing before. :D


Debian (E)LTS

Debian Long Term Support (LTS) is a project to extend the lifetime of all Debian stable releases to (at least) 5 years. Debian LTS is not handled by the Debian security team, but by a separate group of volunteers and companies interested in making it a success.

And Debian Extended LTS (ELTS) is its sister project, extending support to the stretch and jessie releases (+2 years after LTS support).

This was my thirty-eighth month as a Debian LTS and twenty-ninth month as a Debian ELTS paid contributor.
I worked for 41.00 hours for LTS and 30.25 hours for ELTS.

LTS CVE Fixes and Announcements:

ELTS CVE Fixes and Announcements:

  • Issued ELA 731-1, fixing CVE-2022-39377, for sysstat.
    For Debian 9 stretch, these problems have been fixed in version 11.4.3-2+deb9u1.
    For Debian 8 jessie, these problems have been fixed in version 11.0.1-1+deb8u1.
  • Issued ELA 749-1, fixing CVE-2022-41325, for vlc.
    For Debian 9 stretch, these problems have been fixed in version 3.0.17.4-0+deb9u2.
  • Issued ELA 750-1 for a new upstream version update of clamav.
    For Debian 9 stretch, the package has been updated to version 0.103.7+dfsg-0+deb9u1. For Debian 8 jessie, the package has been updated to version 0.103.7+dfsg-0+deb8u1.
  • Started to look at other set of packages.

Other (E)LTS Work:

  • Front desk duty from 21-11 until 27-11 for both, LTS and ELTS.
  • Triaged jqueryui, open-vm-tools, systemd, ffmpeg, lava, pngcheck, snapd, vlc, g810-led, libpgjava, dropbear, python3.5, python3.4, clamav, sysstat, postgresql-11, and mariadb-10.1.
  • Marked CVE-2009-1143/open-vm-tools as postponed for buster, stretch and jessie.
  • Marked CVE-2022-45873/systemd as not-affected in stretch and jessie.
  • Marked CVE-2022-396{4,5}/ffmpeg as postponed for buster and stretch.
  • Marked CVE-2022-45061/python3.{4,5} as postponed for stretch and jessie.
  • Marked CVE-2022-31160/jqueryui as not-affected for jessie instead.
  • Noted CVE-2022-45061/python3.4 to be marked as postponed; the only thing left to fix is the armhf FTBFS.
  • Auto EOL’d linux, libpgjava, nvidia-graphics-drivers, maradns, chromium, and glance for ELTS.
  • Helped and assisted new contributors joining Freexian (LTS/ELTS/internally).
  • Answered questions (& discussions) on IRC (#debian-lts and #debian-elts) and Matrix.
  • Participated and helped fellow members with their queries via private mail and chat.
  • General and other discussions on LTS private and public mailing list.
  • Attended the monthly meeting held on IRC on November 24th.

Until next time.
:wq for today.

on November 30, 2022 05:41 AM

November 28, 2022

The Lubuntu Team is happy to announce that the Lubuntu Backports PPA with LXQt 1.2 is now available for general use.
on November 28, 2022 09:00 PM

November 26, 2022

Market Data section of the Financial Times US Edition print edition from May 5, 2021.

If you’ve flipped through printed broadsheet newspapers, you’ve probably seen pages full of tiny text listing prices and other market information for stocks and commodities. And you’ve almost certainly just turned the page. Anybody interested in these market prices today will turn to the internet, where the numbers are available in real time and where you don’t need to squint to find what you need. This is presumably why many newspapers have stopped printing these types of pages or dramatically reduced the space devoted to them. Major financial newspapers however—like the Financial Times (FT)—still print multiple pages of market data daily. But does anybody read them?

The answer appears to be “no.” How do I know? I noticed an error in the FT‘s “Market Data” page that anybody looking in the relevant section of the page would have seen. And I have seen it reproduced every single day for the last 18 months.

In early May last year, I noticed that the Japanese telecom giant Nippon Telegraph and Telephone (NTT) was listed twice on the FT‘s list of the 500 largest global companies: once as “Nippon T&T” and also as “Nippon TT.” One right above the other. All the numbers are identical. Clearly a mistake.

Reproduction of the FT Market Data section showing a subset of Japanese companies from the FT 500 list of global companies. The duplicate lines are highlighted in yellow. This page is from today’s paper (November 26, 2022).

Wondering if it was a one-off error, I looked at a copy of the paper from about a week before and saw that the error did not exist then. I looked at a copy from one day before and saw that it did. Since the issue was apparently recurring, but new at the time, I figured someone at the paper would notice and fix it quickly. I was wrong. It has been 18 months now and the error has been reproduced every single day.

Looking through the archives, it seems that the first day the error showed up was May 5, 2021. I’ve included a screenshot from the electronic paper version from that day—and from the fifth of every month since then (or the sixth if the paper was not printed on the fifth)—that shows that the error is reproduced every day. A quick look in the archives suggests it not only appears in the US edition but also in the UK, European, Asian, and Middle East editions. All of them.

Why does this matter? The FT prints over 112,000 copies of its paper, six days a week. This duplicate line takes up almost no space, of course, so it’s not a big deal on its own. But devoting two full broadsheet pages to market data that is out of date as soon as it is printed—much of which nobody appears to be reading—doesn’t seem like a great use of resources. There’s an argument to be made that papers like the FT print these pages not because they are useful but because doing so is a signal of the publications’ identities as serious financial papers. But that hardly seems like a good enough reason on its own if nobody is looking at them. It seems well past time for newspapers to stop wasting paper and ink on these pages.

I respect that some people think that printing paper newspapers at all is wasteful when one can just read the material online. Plenty of people disagree, of course. But who will disagree with a call to stop printing material that evidence suggests is not being seen by anybody? If an error this obvious can exist for so long, it seems clear that nobody—not even anybody at the FT itself—is reading it.

on November 26, 2022 08:37 PM

November 23, 2022

The Certified Kubernetes Administrator (CKA) exam evaluates your ability to operate a Kubernetes (K8s) cluster and your knowledge of how to run jobs over the cluster.

I recently passed the exam, so I would like to share some tips and tricks, and how I prepared for it.

Illustration: road to knowledge

Illustration by unDraw.

Preparation

Before taking the exam, I already had extensive experience using K8s, however always through managed services. I had to learn everything about how a K8s cluster is actually run.

I am grateful to my employer, Google, for providing me with the time to study and covering all expenses.

I used the course hosted on A Cloud Guru. It is up-to-date and covers everything needed for the CKA. The course is very hands-on, so if you sign up for it, I recommend doing the laboratories (they provide the environment) because the exam is a long series of tasks similar to what you would be asked to do during the lab sections of this course.

Furthermore, you should learn how to navigate the K8s docs. You can use them during the exam, so there is no point in learning every single field of every single resource. Do you need to create a pod? Go to the documentation for it, copy the basic example, and customize it.
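A related time-saver – not required for the exam, just a common habit – is to let kubectl generate a starting manifest instead of writing it from scratch. A minimal sketch, where my-pod and my-deploy are only placeholder names:

kubectl run my-pod --image=nginx --dry-run=client -o yaml > pod.yaml
kubectl create deployment my-deploy --image=nginx --replicas=3 --dry-run=client -o yaml > deploy.yaml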

Simulation

When you register for the CKA certification exam on the Linux Foundation site, you will receive access to a simulated exam hosted by killer.sh. The simulation is significantly more difficult and longer than the actual exam: I attempted a simulation two days before the exam, and I achieved 55%. On the actual exam, 91%.

I highly recommend taking the simulation, not only to see how prepared you are, but to familiarize yourself with the exam environment.

The exam

The exam is 2 hours long, with a variable number of tasks. Points are assigned based on how many parts of a question you answered. You need a score of 66% to clear the exam.

During the exam you can access the K8s documentation – use this power to your advantage. The environment in which the exam runs is based on XFCE. You will have a browser, a notepad, and a terminal you can use.

Being familiar with vi can help you work more quickly, especially if you spend a lot of time in there, and you just use the browser to read the documentation.

You can use the note pad to keep track of which questions you skipped, or you want to revise later.

During your exam, you will work on multiple K8s clusters. Each question is about one cluster, and at the beginning of the question it says which context you should use. Don’t start answering a question without having switched context!
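In practice, switching and verifying the context is a one-liner; for example (the context name below is made up – use the one given in the question):

kubectl config use-context k8s-cluster-1   # switch to the cluster named in the question
kubectl config current-context             # double-check before you start answering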

Requirements

To attend the exam, you will have to download a dedicated browser, called PSI Safe Browser. You cannot download this in advance, it will be unlocked for you only 30 minutes before the beginning of the exam.

Theoretically, it is compatible with Windows, macOS, and Linux: however, many users have problems under Linux. Be aware that there could be issues, and have a macOS or Windows machine available, if possible, as a backup plan.

This browser will make sure that you are not running any background application, and that you don’t have multiple displays (only one monitor, sorry!).

After you installed the browser, you will have to identify yourself with some official document. You will have to show that you are alone in the room, that your desk is clean, that there is nothing under your desk, and so on. You will have to show your smartphone, and then show that you place it in some unreachable place. The whole check-in procedure takes time, and it is error-prone (I had to show the whole room twice, then the browser crashed, and I had to start from scratch again).

The onboarding procedure is the most stressful part of the exam: read all the requirements on the Linux Foundation Website, try to clear your desk from any unrelated thing, and start the procedure as soon as possible. In my case, it took almost 50 minutes.

Their browser is not a perfect piece of software: in my case, the virtual environment inside this browser had a resolution of 800x600, making it impossible to have two windows side by side. However, you spend the vast majority of your time in the terminal, and only sometimes in the browser to copy-paste snippets from the documentation.

Tips

  • Keep a Windows or macOS machine nearby, if Linux doesn’t work for you;
  • Answering a question partially is better than not answering at all;
  • Always double-check the K8s context! The first thing you must do for each question is switch your kubectl context according to the instructions;
  • Create files for each resource you create, so you can go back and adjust stuff. Moreover, if you have time at the end of the exam, it makes it much easier to check what you have done;
  • Remember to have a valid ID document with you for the exam check-in;

Conclusion

Time constraints and the unfamiliar environment will be your greatest challenges: however, if you have used K8s in production before, you should be able to clear the exam without any major difficulty. Spending some time training beforehand is strongly suggested, no matter your level of expertise, just to understand the format of the exam.

Good luck, and reach out if you have any questions, here in the comments or by email at hello@rpadovani.com.

Ciao,
R.

on November 23, 2022 12:00 AM

November 19, 2022

The Lubuntu Team is happy to announce that an update to LXQt 1.2 through the Lubuntu Backports PPA is now available as a beta.
on November 19, 2022 05:12 AM

November 18, 2022

Our friends at Ardour have released Version 7.1, and we would like to offer them huge congratulations! While the source code and their own builds were available on release day, many of you have been waiting for Ardour 7 to come to Ubuntu’s repositories.

That day has come. Ardour 7.1 will be landing in Ubuntu Lunar Lobster (future 23.04) and will be on Ubuntu Studio’s daily spins of Lunar Lobster very soon.

Unfortunately, it is not possible to backport Ardour 7.1 into Ubuntu 22.04 LTS or 22.10, nor would we want to. This is because if we do, we might disrupt the workflow of people who are currently working with projects in 6.9 that are relying on its functionality and sound. Ardour 7.1 projects are not backwards compatible with Ardour 6.9 projects; once a 6.9 project is opened in 7.1, it is converted to a 7.1 project and cannot be used in 6.9 again unless restored from a backup.

This is also the reason we will not be releasing Ardour 7.1 into Ubuntu Studio’s main Backports PPA. However, we are giving Ardour its own Backports PPA so that users may upgrade Ardour in their Ubuntu (Studio) 22.04 LTS or 22.10 installation whenever they are ready.

To upgrade Ardour to 7.1, open a terminal and type the following:

sudo add-apt-repository ppa:ubuntustudio-ppa/ardour-backports
sudo apt upgrade

If at any point you change your mind and want to revert to Ardour 6.9 (and hopefully haven’t converted any projects):

sudo apt install ppa-purge
sudo ppa-purge ppa:ubuntustudio-ppa/ardour-backports

Enjoy Ardour 7.1! If you find it useful, please consider downloading and paying for the official package from Ardour themselves. They more than deserve it!

on November 18, 2022 05:05 PM

November 14, 2022

Ubuntu Summit 2022 Prague

Jonathan Riddell

Arriving for the first session

One of the loveliest things about open community development is that you can do it from home, but you also get to travel to fancy places to meet your fellow devs and share ideas. Lockdowns stopped that for two years, but with Akademy last month it felt like open tech life could return to a more fun state. Then came the return of a meeting that had last happened a decade ago, the Ubuntu Summit. A bunch of KDE devs were invited: me (who does KDE neon), Ade (who does the Calamares installer), Scarlett (who does Snap packages), Aleix (who does Discover app installers), Harald (KDE neon), Luca (OpenRazer, and hangs around KDE).

Scarlett Gives a Workshop on KDE apps as Snap packages

Unlike the old Ubuntu Developer Summits this wasn’t aimed at planning the next Ubuntu release, they had already spent the last two weeks in the same hotel doing that. This was just a fun sharing of ideas conference where people gave talks and workshops on what they were working on.

Scarlett and I gave a lightning talk on KDE Snaps, and Scarlett gave a workshop on KDE Snaps. KDE has over 100 apps in the Snap store, a great way to get KDE software promptly.

Ade gave a talk about his Calamares distro installer and compared it to Ubuntu’s installer which is being rewritten in Flutter. Harald gave talks on KDE neon and on secrets of KDE Plasma. Aleix spoke about the KDE community and what we do. Ade also talked about KDE Frameworks.

KDE 1 mascot Kandalf is Harald’s favourite character

There were plenty of talks on Snaps; it’s how Canonical makes money where Snaps are used in embedded devices, if you can call a 10-ton steel press an embedded device. Adam Szopa works for KDE and also Canonical, and he gave a talk on Linux gaming; I hear Canonical has a whole team just to get gaming working well. Canonical also makes money from Microsoft’s Windows Subsystem for Linux (WSL), and there were a bunch of talks showing this off. Using Juju to set up servers is another large project Canonical works on, which had some talks. Flutter seems very fashionable, a rival to Qt that is gaining attention; it uses the Dart programming language and is designed for mobile devices, but Canonical has been working with Google to port it to Linux desktops (using GTK).

KDE spods at Ubuntu Summit 2022

It was great to catch up with Erich Eickmeyer who makes Ubuntu Studio and works for Kubuntu Focus selling laptops with Plasma. Ubuntu Studio ships with Plasma of course. I spoke to him about Wayland and he says the next release (for Ubuntu plus Plasma) is looking great for Wayland.

It was also great to meet Simon Quigley (tsimonq2) who does Lubuntu and has worked on Kubuntu. LXQt is a lightweight Linux desktop and probably one of the largest users of KDE Frameworks outside KDE; they use KScreen, KIdleTime, Solid, KWindowSystem and probably other Frameworks.

Head Honcho Shuttleworth

Canonical is reported to be profitable and hiring (after some “brutal times”) and spirits seem to be good. They have a community team now and are keen to engage.

There were also inspiring talks, e.g. from a Zimbabwean developer talking about the challenges of taking software development on donkeys where he lives. Geopolitics is an interesting subject, but one aspect I’d not thought about before is how countries with a coastline can connect their internet directly to the world, while landlocked countries such as Zimbabwe are dependent on neighbouring countries to pass it through.

Lorenzo’s Music is an open source band who create on Github using Ubuntu Studio and Kdenlive. They gave a great performance on the river boat cruise under the Charles bridge in Prague.

Thanks to Canonical for sponsoring travel and helping us re-engage.

on November 14, 2022 01:49 PM

November 11, 2022

I’ve just made a new 5.3.1 release of Grantlee. The 5.3.0 release had some build issues with Qt 6 which should now be resolved with version 5.3.1.

Unlike previous releases, this release will not appear on http://www.grantlee.org/downloads/. I’ll be turning off grantlee.org soon. All previous releases have already been uploaded to https://github.com/steveire/grantlee/releases.

The continuation of Grantlee for Qt 6 is happening as KTextTemplate so as not to be constrained by my lack of availability. I’ll only make new Grantlee patch releases as needed to fix any issues that come up in the meantime.

Many thanks to the KDE community for taking up stewardship and ownership of this library!

on November 11, 2022 08:55 PM

November 07, 2022

The twenty-third edition will take place Saturday 4th and Sunday 5th February 2023 in Brussels, Belgium.

Key dates / New updates

  • Conference dates 4-5 February, 2023 In person
  • Community DevRoom date: Sunday 5th February
  • Submission deadline: Monday 21st November
  • Announcement of selected talks: Thursday 15th December 
  • You must be available in person to present your talk
  • Talk submissions should be 30 mins and this should include Q&A time.

IN MORE DETAIL 

The Community DevRoom will be back at FOSDEM 2023 (In Person). Our goals in running this DevRoom are to:

  • Educate those who are primarily software developers on community-oriented topics that are vital in the process of software development, e.g. effective collaboration
  • Provide concrete advice on dealing with squishy human problems
  • To unpack preconceived ideas of what community is and the role it plays in human society, free software, and a corporate-dominated world in 2021
  • We are seeking proposals on all aspects of creating and nurturing communities for free software projects

TALK TOPICS

Here are some topics we are interested in hearing more about this year:

  • Creating local-first communities and authentic and sustainable connections with contributors who are not able to attend in person
  • Sustainability in FOSS from a community perspective, e.g. keeping contributors motivated and able to pay their bills and addressing maintainer burnout
  • Creating a community around a corporate sponsored open source project that is respectful of its users and FOSS principles
  • Self-care in our communities, for both community members and community leaders, e.g. how are we all dealing with burnout and the ongoing impact of the pandemic
  • Balancing community expectations with company objectives in a manner that is respectful, authentic, and serves both audiences well 
  • Community Management best practices
  • Dealing with toxic people
  • Rewarding volunteer contributors and keeping volunteers motivated
  • Effective collaboration techniques
  • Recruiting new people to your project
  • Creative ways to engage and grow your developer community

Again, these are just suggestions. We welcome proposals on any aspect of community building!

HOW TO SUBMIT A TALK

  • If you already have a Pentabarf account, please don’t create a new one.
  • If you forgot your password, reset it. 
  • Otherwise, follow the instructions to create an account.
  • Once logged in, select “Create Event” and click on “Show All” in the top right corner to display the full form. 
  • Your submission must include the following information: 
  • First and last name / Nickname (optional)/ Image
  • Email address
  • Mobile phone number (this is a very hard requirement as there will be no other reliable form of emergency communication on the day)
  • Title and subtitle of your talk (please be descriptive, as titles will be listed alongside ~500 others from other projects)
  • Track: Select “Community DevRoom” as the track
  • Event type: Lecture (talk) 
  • Persons: Add yourself as the speaker with your bio
  • Description: Abstract (required)/ Full Description (optional)
  • Links to related websites / blogs etc. 
  • Beyond giving us the above, let us know if there’s anything else you’d like to share as part of your submission – Twitter handle, GitHub activity history – whatever works for you. We especially welcome videos of you speaking elsewhere, or even just a list of talks you have done previously. First time speakers are, of course, welcome!
  • For issues with Pentabarf, please contact community-devroom@lists.fosdem.org. Feel free to send a notification of your submission to that email. 

If you need to get in touch with the organisers or program committee of the Community DevRoom, email us at community-devroom@lists.fosdem.org

Shirley Bailes, Leslie Hawthorn, and Laura Czajkowski – Community DevRoom Co-Organizers

FOSDEM website / FOSDEM code of conduct

on November 07, 2022 03:09 PM

October 29, 2022

https://www.mixcloud.com/dholbach/fams-birthday-strandbad-plötzensee/

One of the last warm September days, while it was raining and hailing in Neukölln, we enjoyed a late summer day and night in Strandbad Plötzensee and celebrated Fam’s birthday. I took over a dance-hungry crowd from Joe Carrera when we moved inside. It was a fantastic night - I couldn’t believe it had been 7 hours when I checked my watch as the place closed. Thanks everyone1 for this great event. 😍

  1. Dengue Dengue Dengue - Simiolo (Cumbia Cosmonauts Remix)
  2. Twerking Class Heroes - Hustlin´
  3. Kumbia Queers VS. Die Antword - Diz iz why im hot Remix
  4. Delaram Kamareh - Necessary Shiva (The Oddness Remix)
  5. Andi Otto ft. MD Pallavi - Six (Center Of The Universe Remix)
  6. Buraka Som Sistema - Sente
  7. Tropkillaz - Heartaches
  8. Tropkillaz - Dibre feat DKVPZ
  9. La Byle - Txê (Daniel Haaksman Remix)
  10. MC Ysa - Baile da Colômbia (Brega Funk) (Remix)
  11. Twerking Class Heroes - Vanakkam
  12. Andi Otto - Bangalore Whispers (feat. MD Pallavi)
  13. Romare - Love Song
  14. Sofi Tukker - Purple Hat
  15. Skip & Die - La Cumbia Dictadura
  16. Sukh Knight - Creation
  17. Notorious B.I.G. - Hypnotize (Benedikt Frey Edit)
  18. Nickodemus - Inmortales (Body Move) feat. Fémina (The Spy from Cairo Remix)
  19. A.Skillz - Everything to me
  20. Cabo Blanco - La Fiesta
  21. Bomba Estereo - Fuego (Maga Bo Remix)
  22. Foreigner - Cold As Ice (A.Skillz & Nick Thayer Remix - The Spank! Edit)
  23. Diplo - Express Yourself feat. Nicky da B (Chong X Cumbia Riddim refix)
  24. White Gangster - Italian Sound
  25. hubbabubbaklubb - Mopedbart (Barda Edit)
  26. Crussen - Bufarsveienen
  27. Daniël Leseman - Ease The Pain (Extended Mix)
  28. Johannes Klingebiel - Latewood
  29. Le Sale - (I’ve Had) The Time Of My Life (Le Sale’s Second Base Edit) [Dirty Dancing Remix]
  30. Quantic - You Used to Love Me feat. Denitia (Selva Remix)
  31. Mochakk - Ratata
  32. Dr. Dre - The Next Episode (We Are Nuts! Remix)
  33. Gene Farris & Kid Enigma - David Copperfield
  34. Pastor Leumund - Das Problem Mit Der Zeit feat. Bernadette La Hengst (MITTEKILL Remix)
  35. Moloko - Sing It Back (Mousse T.’s Feel Love Mix)
  36. Schlachthofbronx - Blurred Vision
  37. The Prodigy - Breathe (Zeds Dead Remix)
  38. Tropkillaz - Put It On Me (feat. Snappy Jit)
  39. Omar ؏ - Ku Ku
  40. Kalemba - Wegue Wegue (Krafty Kuts Remix)
  41. Omar ؏ - Dola Re Dola
  42. Yendry - KI-KI
  43. TNGHT & M.I.A. - BAD GOOOORLS (BAVR RMX)
  44. ATC - AROUND THE WORLD (BIFO BAILE EDIT)
  45. 808 Luke - Fluxo da Eve (MC GW, MC Dricka, MC MM)
  46. TroyBoi Feat Tropkillaz - CORNETA (DJ CHERNOBYL Baile Remix)
  47. MC’s Pett & Bobii (DJ Henrique de Ferraz & DJ Pufinho) - DÓ, RÉ, MI, FA O HENRIQUE SÓ LAMENTA
  48. Baja Frequencia - O Galop
  49. Beware + Motorpitch - Novinha Do Brazil feat. MC Iguinho
  50. The Living Graham Bond - Werk
  51. Mc Bin Laden - Bololo Haha (Uproot Andy Champeta Edit)
  52. SangoBeats - Me dê Amor
  53. Zeds Dead - Rumble In The Jungle
  54. Black Milk - Detroit’s New Dance Show
  55. Staff Benda Bilili - Bilanga (SKIP&DIE Remix)
  56. Lapo & Ago (Numa Crew) - Tuff!
  57. Yellow Claw - 4 In The Morning
  58. Toni Braxton - Unbreak My Dub (Jstar Refix)
  59. M.I.A - Bad Girls (Barbaric Merits ChainBangin Remix)
  60. Hamdi - Skanka
  61. MC Romântico - As Novinha Tão Sensacional (João Brasil Remix)
  62. Symbiz - Who Cares
  63. Natacha Atlas - Taalet (Radiohiro Remix)
  64. Rusko - Somebody To Love
  65. Chase & Status - Eastern Jam
  66. Sukh Knight - Ganja Dub
  67. The Prodigy - Smack My Bitch Up (Noisia Remix)
  68. Daniel Haaksman - Pau
  69. Yelle - A cause des garcons (Tepr Remix)
  70. Tiga - sunglasses at night (Popof remix 02)
  71. Vitalic - Poison Lips (extended)
  72. The Chemical Brothers - Go (Claude VonStroke Remix)
  73. nicholas ryan gant - Gypsy Woman (Kaytronik Remix Extended Version)
  74. Format B - Chunky
  75. Taisun - Senorita (Remix)
  76. Public Enem vs Benny Benassi - Bring the Noise (Pump-kin Remix)
  77. Josh Wink - Higher State of Concsciousness
  78. Fatboy Slim - Right Here, Right Now (CamelPhat Remix)
  79. Quantic - You Used to Love Me feat. Denitia (Selva Remix)
  80. Andri - Night Rider
  81. Mindchatter - Tough as nails
  82. Noir & Haze - Around (Solomon remix)
  83. Frankey & Sandrino - Acamar
  84. Lord Echo - Molten Lava (feat. Leila Adu)
  85. The Tribe Of Good - Heroes (edit)
  86. George Benson - Give Me The Night (X-Ray Tedit)
  87. Kriss Kross vs. Psycho Circus - Insane Jump
  88. White Gangster - Italian Sound
  89. Anita Baker - Ring My Bell (Kill Paris Remix)
  90. A.Skillz - Everything to me
  91. Fibre - I’ll Go Back
  92. WBBL - Boogie Nites (WBBL Remix)
  93. A Skillz - Poppa Soul
  94. Babe Ruth - The Mexican [X-Ray Tedit]
  95. Bruno Borlone - Wham! x Rick James x Daft Punk - Give Me The Funk (Bruno Borlone & Nacho Buscaglia Remix)
  96. TonyAdams - Estou Livre
  97. De La Soul - A Roller Skating Jam Named Saturdays (Superwash Remix)
  98. Opiuo & Vorso - Dusty Bugs
  99. Too Many T’s - The Bomb (prod. Odjbox)
  100. Psychemagik - Mink & Shoes feat Navid Izadi
  101. Dizharmonia - Yildirim (Kyrill & Redford Remix)
  102. Rey&Kjavik - Baba City (Rkadash Album Version)
  103. Yemanjo - Al Qamar Feat. Layeena & Ahmed Ragab
  104. Thornato - Chapinero
  105. Tropkillaz - Rayah (feat. Shantel)
  106. João Brasil - Nega Bass
  107. Tropkillaz + JSTJR - Lemme See
  108. Eminem - Without Me (Senior Citizen Remix)
  109. Born On Road - Aries, Fleck & Sheco ft. David Boomah - My Sound (Jstar Remix)
  110. Masia One - Warriors Tongue (An-ten-nae Remix)
  111. Acid Arab feat. Radia Menel - Staifia
  112. Disco Halal 2 - Hilbeh (Rabo and Snob Edit)
  113. 6th Borough Project - Do It To The Max
  114. Sampa the Great - Energy (The Oddness Rework)
  115. kerkayas - hakkı bulut ben köylüyüm (rework)
  116. Nicola Cruz - Bruxo (Von Party Remix)
  117. Urubu Marinka & Superbreak - That Loving Feeling (Dj Steef Edit)
  118. Whitney Houston - My Love Is Your Love (Steve Bonde Edit)
  119. coss - Come Into My Room
  120. Laraaji - All Of A Sudden (Klik & Frik Edit)
  121. Blick Bassy - Aké (Brynjard Edit)
  122. Chilled by Nature - Otherness (Black Mustang’s Frozen Moon Jam)
  123. Lukas Endhardt - Solo Tu

  1. Aside from the quite pushy outsiders who asked about every 5 minutes to have their songs played. 😬 I’ll credit some of the missed beats to them. 😂 ↩︎

on October 29, 2022 06:20 AM

October 27, 2022

OCI based linux

Serge Hallyn

Containers are most commonly distributed in two ways:

1. ‘Image based’: lxc and lxd distribute their container images as full images, a simple representation of a root filesystem and some configuration info.
2. OCI: based on the original docker format, this has become an open standard for publishing not only container images, but any artifacts.

Our products are created, distributed, and installed as OCI. All services run as containers. Each container rootfs is re-created from its OCI image at every start. A physical machine’s rootfs is also shipped as an OCI image, and is recreated on every boot. A system representation therefore consists of a manifest specifying the OCI references for services to run. To make this secure,

1. Images must be verifiable. An fs extraction step, such as un-tarring, prevents us from verifying the result on next boot without re-extracting. Therefore we distribute OCI layers as squashfs instead of tarballs, and mount them using overlayfs.
2. Squashfs layers ship with their dmverity root hash in the image manifest.
3. The system manifest which lists the content-addressed OCI images is signed with a product key.
4. The certificate for a product’s manifest signing public key is stored with the system manifest. All product manifest signing certificates are signed by one manifest signing CA.
5. The manifest signing CA certificate is stored in initrd.
6. The initrd, ‘smooshed’ together with the kernel and kernel command-line into one kernel.efi, are signed with a kernel signing key.
7. The TPM keys for root filesystems and machine-identifying unique key are only unlocked for the pcr7 resulting from (our shim and) a kernel signed with the right kernel signing key certificate.

In this way, we can ship a single ‘kernel.efi’ for all TPM-enabled hardware and VM products. To protect different groups’ products from each other, products are provisioned with a product ID, which must match product ID in the product manifest signing certificate. Each machine is also provisioned with a unique keypair, supporting secure cluster bringup and remote attestation.

This allows us to use OCI as the source for (verifiably) securely installed and booted products. We can install the OS on a host in the traditional way, or we can pxe-boot specifying on the kernel command-line an OCI URL to a layer containing the manifest to boot into.
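As a rough illustration of the squashfs + dm-verity building blocks described above – this is not our actual tooling (stacker/atomfs handle it for real), and the file names and root hash are placeholders:

mksquashfs rootfs/ layer.squashfs -noappend        # build the layer as a squashfs image
veritysetup format layer.squashfs layer.verity     # compute the hash tree; prints the root hash
veritysetup open layer.squashfs layer0 layer.verity <root-hash-from-manifest>
mount -o ro /dev/mapper/layer0 /mnt/layer0         # reads fail if the layer has been tampered with
# layers are then stacked with overlayfs, e.g.:
mount -t overlay overlay -o lowerdir=/mnt/layer0:/mnt/layer1 /mnt/rootfs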

We hope to present the full solution (with source) at FOSDEM 2023.

# References
1. For more details on the OCI specification, see https://github.com/opencontainers/image-spec/blob/main/spec.md.
2. The very code for generating and mounting squashfs based OCI images is at https://github.com/project-stacker/stacker and https://github.com/project-stacker/stacker/tree/master/atomfs.
3. The in-development replacement for atomfs is puzzlefs, at https://github.com/anuvu/puzzlefs and https://github.com/anuvu/puzzlefs/blob/master/doc/index.md.
4. The TPM-based unattended encrypted filesystem solution was presented in full at LSS 2021: ‘Securing TPM secrets in the datacenter’: https://www.youtube.com/watch?v=wfJDmfPP1OA.

on October 27, 2022 02:19 AM

To create a Kubernetes deployment, we must specify the matchLabels field, even though its value must match the one we specify in the template. But why? Cannot Kubernetes be smart enough to figure it out without us being explicit?

A deep dive in K8s deployment matchLabels field

Illustration by unDraw+.

Did you know? K8s is short for Kubernetes ‘cause there are 8 letters between K and S.

A Kubernetes (K8s) Deployment provides a way to define how many replicas of a Pod K8s should aim to keep alive. I’m especially bothered by the Deployment spec’s requirement that we must specify a label selector for pods, and that label selector must match the same labels we have defined in the template. Why can’t we just define them once? Why can’t K8s infer them on its own? As I will explain, there is actually a good reason. However, to understand it, you would have to go down a rabbit hole to figure it out.

A deployment specification

Firstly, let’s take a look at a simple deployment specification for K8s:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx # Why can't K8s figure it out on its own?
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest

This is a basic deployment, taken from the official documentation, and here we can already see that we need to fill the matchLabels field.

What happens if we drop the “selector” field completely?

➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid:
* spec.selector: Required value

Okay, so we need to specify a selector. Can it be different from the “label” field in the “template”? I will try with:

matchLabels:
  app: nginx-different

➜ kubectl apply -f nginx-deployment.yaml
The Deployment "nginx-deployment" is invalid: spec.template.metadata.labels: Invalid value: map[string]string{"app":"nginx"}: `selector` does not match template `labels`

There are usually good reasons behind what seems like a poorly thought-out implementation, and this is true here as well, as we’ll see.

As expected, K8s doesn’t like it: the selector must match the template. So we must fill a field with a well-defined value. It really seems like something a computer could do for us, so why do we have to specify it manually? It drives me crazy having to do something a computer could do without any problem. Or could it?

Behind a deployment

You will probably never need to manipulate ReplicaSet objects directly: use a Deployment instead, and define your application in the spec section.

How does a deployment work? Behind the curtains, when you create a new deployment, K8s creates two different objects: a Pod definition, using as its specification what is available in the “template” field of the Deployment, and a ReplicaSet. You can easily verify this using kubectl to retrieve pods and replica sets after you have created a deployment.
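For example, with the nginx-deployment from above applied, something like the following lists the Deployment together with the ReplicaSet and Pods it created, filtering on the label from the spec:

➜ kubectl get deployments,replicasets,pods -l app=nginx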

A ReplicaSet’s purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods.

A ReplicaSet needs a selector that specifies how to identify Pods it can acquire and manage: however, this doesn’t explain why we must specify it, and why K8s cannot do it on its own. In the end, a Deployment is a high-level construct that should hide ReplicaSet quirks: such details shouldn’t concern us, and the Deployment should take care of them on its own.

Digging deeper

Understanding how a Deployment works doesn’t help us find the reason for this particular behavior. Given that Googling doesn’t seem to bring up any interesting results on this particular topic, it’s time to go to the source (literally): luckily, K8s is open source, so we can check its history on GitHub.

Going back in time, we find out that K8s actually used to infer the matchLabels field! The behavior was removed with apps/v1beta2 (released with Kubernetes 1.8), through Pull Request #50164. That pull request links to issue #50339, which, however, has a very brief description and lacks the reasoning behind such a choice.

The linked issues are rich in technical details, and they have many comments. If you want to understand exactly how kubectl apply works, take a look!

Luckily, other issues provide way more context, such as #26202: it turns out the main problem with defaulting appears when labels are mutated in subsequent updates to the resource: the patch operation is somewhat fickle, and apply breaks when you update a label that was used as a default.

Many other concerns have been described in depth by Brian Grant in issue #15894.

Basically, assuming a default value creates many questions and concerns: what’s the difference between explicitly setting a label as null and leaving it empty? How do you manage all the cases where users left the default and now want to update the resource to manage the label themselves (or the other way around)?

Conclusion

Given that in K8s everything is intended to be declarative, the developers have decided that explicit is better than implicit, especially for corner cases: specifying things explicitly allows more robust validation at creation and update time, and removes some possible bugs that existed due to uncertainties caused by lack of clarity.

Shortly after dropping the defaulting behavior, developers also made the labels immutable, to make sure behaviors were well-defined. Maybe in the future labels will be mutable again, but to get that working, somebody needs to write a well-thought-out design document explaining how to manage all the possible edge cases that could happen when a controller is updated.

I hope you found this deep dive into the question interesting. I spent some time on it, since I was very curious, and I hope that the next person with the same question can find this article and get an answer more quickly than I did.

If you have any questions or feedback, please leave a comment below, or write me an email at hello@rpadovani.com.

Ciao,
R.

on October 27, 2022 12:00 AM

October 22, 2022

A simple, easy, fast and useful way to paste your clipboard content (text or image) into a file! Previously, you had to open an editor, paste your clipboard, save the file, and close the editor. Now, right-click and choose the “Clipboard to file” menu! HOW DOES IT WORK? TEXT: copy some text into your clipboard, go to your file browser, right-click in an empty area and choose the “Clipboard to file” menu: this will create the file clipboard-X.
on October 22, 2022 08:29 AM

October 20, 2022

Xubuntu 22.10 Released

Xubuntu 22.10, "Kinetic Kudu," has been released! The antelope-inspired release packs the latest and greatest GNOME 43, MATE 1.26, and Xfce 4.17 packages. Some long-running issues have finally been addressed, making for a better overall experience over recent releases. I hope this new release makes using your computer more enjoyable.

The kinetic Kudu, one of two species of antelope found in eastern and southern Africa. Photo by Henning Borgersen on Unsplash

Xfce 4.17

Xfce 4.17 is the development series for the upcoming Xfce 4.18, expected later this year. With a few exceptions, it doesn't introduce major new features or toolkit changes but instead refines the Xfce experience. Some notable updates are included below.

  • Catfish has had a round of appearance and usability updates. The new "Open with" context menu makes it easier to open your files. The Ctrl+A accelerator adds a useful select-all function.
  • Mousepad has added search history and automatic reloading of changed files.
  • Thunar now features a built-in, recursive file search. Catfish is still within reach for more advanced file lookups. Thunar also includes a new graphical shortcut editor and per-directory zoom levels.
  • Thunar Archive Plugin now allows zip files (including common document types such as odt and docx) to be compressed.
  • The Application Finder has added support for the PrefersNonDefaultGPU property. This improves the launching of games and other apps that depend on more powerful graphics in a multi-GPU system.
  • The Desktop will now ask for confirmation before desktop icons are rearranged. Users can optionally disable the Delete context menu item.
  • Notifications now feature improved matching of application icons and names in the settings dialog. During the slideout animation, notifications will now be correctly positioned.
  • The Panel has added a new binary time mode and middle-click options for the tasklist plugin. Support for systray and status notifiers applets has been improved.
  • The panel's PulseAudio Plugin will now display an indicator when any app is recording audio. Notifications are now displayed when the microphone volume level is adjusted.
  • Screenshooter includes a fix for capturing HiDPI windows. Screenshots can be easily opened in the file manager. And you can start over with the inclusion of a new back button.
  • Task Manager now has a right-click option to copy the full process command line to the clipboard.
  • The Terminal boasts improved scrolling, a new Fill background image style, and fixes for the unsafe paste dialog.

GNOME 43

GNOME 43, "Guadalajara," features several usability enhancements, and more apps migrated to the new GTK4 toolkit. GTK4 enables building apps that have faster and smoother graphics with the addition of hardware acceleration. Some apps use the new libadwaita library that features a consistent look for GNOME apps, but one that does not fit in with the rest of the Xubuntu desktop theming. Nonetheless, the apps look good and deliver on the improved performance promises.

Of the GNOME apps included in Xubuntu 22.10, Disk Usage Analyzer (baobab), Font Viewer, and Software are now using GTK4 with libadwaita. Disk Utility and Sudoku have modest version bumps to the 43.x series. Rhythmbox is the standout application this time around, featuring an improved podcast downloader, network stream support, and Android/MTP device syncing.

MATE 1.26

MATE 1.26 has seen only modest updates since Xubuntu 22.04 this Spring. None of those updates extend to the Atril Document Viewer, Engrampa Archive Manager, or MATE Calculator. We still include the latest releases of each component.

Additional Updates

SGT Puzzles Collection

The latest update to Simon Tatham's Portable Puzzle Collection includes the new grid-filling puzzle, Mosaic. I'd describe it as "Minesweeper, but find all the mines." It's a fun challenge, so I recommend checking it out on Xubuntu or online.

Mosaic, the new grid-filling puzzle from Simon Tatham.

Xubuntu Artwork

The new wallpaper for Xubuntu 22.10 features multi-colored transparent triangles overlaying a deep blue background. Small, glowing white bubbles float just above the rest of the image. You can get the full-quality version of the wallpaper on GitHub.

The new wallpaper for Xubuntu 22.10.

Xubuntu Default Settings

We introduced a handful of small improvements to our settings this cycle.

  • The Noto fonts are now recommended packages instead of hard dependencies. If you prefer another font, you can easily remove the Noto fonts from your system. (Launchpad #1891714)
  • The URL custom actions in Xfce Appfinder have been broken for a few releases. This has finally been addressed in 22.10. (Launchpad #1892651)
  • Swapping Caps Lock and Escape would result in the Super (Windows) key becoming a Caps Lock instead. This is now resolved. (Launchpad #1961506)
  • Xubuntu now includes default settings for the Picom compositor. Picom is the successor to the Compton compositor, for which Xubuntu has included a configuration file for some time.

Xubuntu Metapackage

We didn't make any major updates to the packageset this cycle. But we did add a nice improvement. The Firefox transitional Debian package is now included again. This fixes a bug where the x-www-browser alias doesn't work with Snap packages, causing some issues loading a browser from Ubiquity, for example. You may have also noticed that the ISO manifest included the chromium package. This is also resolved. (Launchpad #1991470)

Get Xubuntu 22.10

Xubuntu 22.10 is available to download from the Xubuntu website. For installation instructions, please refer to the Xubuntu documentation.

If you're using 22.04 and want to upgrade to 22.10, please check out our upgrade instructions on the Xubuntu Wiki.

on October 20, 2022 09:14 PM