February 26, 2021

Virtualisation plays a huge role in almost all of today’s fastest-growing software-based industries. It is the foundation for most cloud computing, the go-to methodology for cross-platform development, and has made its way all the way to ‘the edge’; the eponymous IoT. This article is the first in a series where we explain what virtualisation is and how it works. Here, we start with the broad strokes. Anything that goes beyond the scope of a 101 article will be covered in subsequent blog posts. Let’s get into it.


What is virtualisation?

Virtualisation technology creates virtualised hardware environments. It uses software to create an ‘abstraction layer’ on top of hardware that divides a single computer’s resources, such as processors, memory and storage, between multiple virtual computers. The result can be virtual machines (VMs) or containers. Both allow you to create isolated, secure environments for testing, debugging, legacy software, and for specific needs that do not require all of the resources on the physical hardware.

Today, virtualisation is a standard practice in enterprise IT architectures, software development and at the edge. You can virtualise numerous parts of a computer’s ‘stack’ for a myriad of reasons. You can virtualise:

  • Desktops
  • Networks
  • Storage
  • Data
  • Applications
  • Data centres
  • CPUs
  • GPUs
  • Linux
  • Clouds

Each of these scenarios enables providers to serve many users, or many individual VMs, from the same hardware, and means a user only consumes the computational resources a given workload actually needs. This can range from virtualising single machines to more complex setups such as full virtual data centre environments.

What is a virtual machine?

A virtual machine is a software-defined computer that runs workloads and deploys apps on virtualised hardware. Each VM runs its own operating system (OS) (the guest OS), and behaves like an independent computer using a portion of the underlying computer’s resources (the host). VMs allow users to run numerous different operating systems on one machine, each with potentially different applications and libraries inside. There are numerous tools and methodologies for managing VMs; the first layer of management comes from either a ‘hypervisor’ or ‘application virtualisation’.


What is a hypervisor?

A hypervisor is a layer of software that sits between VMs and hardware to manage resource allocation, general VM to hardware communications, and to make sure VMs don’t interfere with each other. There are two types of hypervisors:

  • Type 1: ‘Bare-metal’ hypervisors that run directly on the underlying hardware, effectively taking the place of the OS; you interact with them through their virtualisation tooling. Some examples are: VMware ESXi, Microsoft Hyper-V, and Xen.
  • Type 2: Hypervisors that run as an application on top of the existing OS. Some examples are: Parallels Desktop for Mac, QEMU and VirtualBox.

Each operating system, macOS, Windows, Linux, and so on, uses different hypervisors for different things. macOS provides Hypervisor.framework (on which tools such as HyperKit build), Windows ships with Hyper-V, and Linux has KVM as its built-in ‘type 1’ hypervisor. But there are lots of organisations that offer type 1 and type 2 solutions. For example, VirtualBox is a type 2 hypervisor that is popular on both Windows and macOS. VMware specialises in all different kinds of virtualisation: server, desktop, networking and storage, with different hypervisor offerings for each. The details of how hypervisors work are beyond the scope of this article.
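If you want to get a feel for this on a Linux host, here is a minimal sketch using QEMU with KVM acceleration (the disk and ISO file names are placeholders, and it assumes QEMU is installed):

# Check whether the CPU exposes hardware virtualisation extensions
# (vmx = Intel VT-x, svm = AMD-V); a count above 0 means KVM can accelerate guests.
egrep -c '(vmx|svm)' /proc/cpuinfo

# Create a 20 GB disk image and boot an installer ISO in a KVM-accelerated VM.
qemu-img create -f qcow2 disk.img 20G
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 -hda disk.img -cdrom ubuntu.iso -boot d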


What is application virtualisation?

Application-based virtualisation uses an application (such as Parallels RAS) to effectively stream applications to a virtual environment on another server or host system. Instead of logging into a host computer, users access the application virtually. This separates applications from the operating system and lets users run almost any application on other hardware without worrying about local storage; multiple applications can be streamed this way while barely touching the host system.

What is virtual networking?

A key part of virtualisation is allowing virtual machines to talk to the rest of the world. VMs need to be able to talk to other VMs, to the host itself, and to things outside of the virtual environment. This is done with a virtual network between the virtual machine(s) and the host OS: a line of communication between the VMs and the hardware in the physical environment. There is a lot more to it than that, but the details are beyond the scope of this particular article.

There are many ways to implement a virtual network; two of the most common are “bridged networking” and “network address translation” (NAT). Using NAT, virtual machines are represented on external networks using the IP address of the host system. Virtual machines inside the virtual environment are therefore not visible to the outside, which is why virtual machines behind NAT are considered protected. When a connection is made between an address inside and outside of the virtual environment, the NAT system forwards the connection to the correct VM.

Bridged networking connects the VMs directly onto the physical network that the host is using. The DHCP server can then assign each VM its own IP address, making each VM visible on the network. Once connected, the VM is accessible over the network and can access other machines on it as if it were a physical machine.
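As an illustration, here is how the two modes might be selected in VirtualBox from the command line (a rough sketch; the VM name “myvm” and the host interface “enp3s0” are placeholders):

# NAT: the VM shares the host's IP address; add a port-forwarding rule to reach it from outside
VBoxManage modifyvm "myvm" --nic1 nat
VBoxManage modifyvm "myvm" --natpf1 "guestssh,tcp,,2222,,22"

# Bridged: the VM appears directly on the physical network and gets its own DHCP lease
VBoxManage modifyvm "myvm" --nic1 bridged --bridgeadapter1 enp3s0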

What are containers?

Containers are standardised units of software that bundle code and all of its dependencies into one modular package. While each VM brings its own OS, containers share the host machine’s OS kernel; system containers can additionally carry their own full userspace. As a result, they are more lightweight, you can deploy many more of them at once, and they are low(er) maintenance, with everything an application needs in one place. We typically recommend three types of containers for different use cases:


Linux containers

Linux containers focus on being system containers: containers that create an environment as close to a VM as possible without the overhead of running a separate kernel and virtualising the hardware. These are considered more robust because they are closer to being a machine with all the services in place, and so are used in a lot of traditional operations. Linux containers come from the Linux Containers project (LXC), an open source container platform that provides a userspace interface with the tools, templates, libraries and bindings needed to create and manage containers.
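To get a feel for system containers, here is a minimal sketch using LXD, Canonical’s manager built on top of the LXC tooling (it assumes the snap-packaged LXD and accepts default settings):

sudo snap install lxd
sudo lxd init --auto          # accept default storage and network settings
lxc launch ubuntu:20.04 demo  # create and start an Ubuntu 20.04 system container
lxc exec demo -- bash         # get a shell inside it, much like logging into a VM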


Docker containers

Docker containers are the most popular kind of container among developers for cross-platform deployments in data centres or serverless environments. Docker containers use Docker Engine and numerous other container technologies, including LXC, to create developer-friendly environments that are reproducible regardless of the underlying infrastructure. They are standalone executable packages that include everything needed to run an application: code, runtime, system tools, libraries and settings.
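For illustration, a minimal (hypothetical) Dockerfile and the commands to build and run it might look like this; the app.py file is a placeholder for your own code:

# Dockerfile: start from a slim Python base image, copy in the app, define the start command
FROM python:3.9-slim
COPY app.py /app/app.py
CMD ["python", "/app/app.py"]

# Build the image and run a throwaway container from it
docker build -t myapp .
docker run --rm myapp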


Snaps

Snaps are containerised software packages that focus on being singular application containers. Where LXC could be seen as a machine container and Docker as a process container, snaps can be seen as application containers. Snaps package code and dependencies in a similar way to containers to keep the application content isolated and immutable. They have a writable area that is separated from the rest of the system, are exposed to the host only through explicitly defined interfaces, and otherwise behave more like traditional Debian apt packages.

Snaps are designed for when you want to deploy to a single machine. Applications are built and packaged as snaps using a tool called snapcraft that incorporates different container technologies to create a secure and easy-to-update way to package applications for workstations or for fleets of IoT devices. There are a few ways to develop snaps. Developers can configure a snap to run unconfined while they put it together and containerise everything later when pushing to production. Read more about the different ways snaps can be configured in another article.
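A minimal snapcraft.yaml sketch for that kind of workflow might look like the following (the snap name, the hello.sh script and the devmode confinement are illustrative, not a published snap):

name: hello-sketch
base: core18
version: '0.1'
summary: A minimal example snap
description: |
  Illustrative only.
grade: devel          # use 'stable' once ready for production
confinement: devmode  # effectively unconfined while developing; switch to 'strict' later
parts:
  hello:
    plugin: dump
    source: .         # copy the local project files into the snap
apps:
  hello:
    command: hello.sh

Running snapcraft in that directory should produce a .snap file that can be installed locally with snap install --devmode ./hello-sketch_0.1_amd64.snap.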


Virtual machines vs Containers

Whether you should use a VM or a container depends on your use case. They’re both great technologies for separate reasons, not necessarily competitors. Virtual machines allow users to run multiple OSes on the same hardware, and containers allow users to deploy multiple applications on the same OS, on a single machine.

Pros and cons of VMs

The benefits of using a VM include, but are not limited to:

  • Teams being more efficient with computational resources.
  • Support for larger, more complex applications that need full OS functionality on a single server.
  • The ability to turn one server into many.
  • Potentially risky work can be isolated from the host environment.
  • Running multiple versions of the same OS environments on the same machine.
  • VMs support and run legacy applications that only work on outdated OSes.
  • VMs can provide disaster recovery features that abstract important data from problems on the host.

And of course there are several caveats that include, but are also not limited to:

  • Running multiple VMs on a single host can cause unstable performance and overload the host’s resources if unconstrained.
  • With some providers, especially at scale, you may incur licensing costs for each VM.
  • Virtualisation carries inherent performance overhead simply as a result of the abstraction from hardware, and can make troubleshooting time-based/dependent issues harder.
  • Hosts whose CPUs lack hardware virtualisation extensions may not give guests direct access to specific resources; guests then have to rely on paravirtualisation, which is beyond the scope of this article.

Pros and cons of containers

The benefits of containers include but are not limited to:

  • Security; by default containers limit what is exposed to the host system and the internet, and the extra layer the container provides increases the overall level of security.
  • Scalability; large applications and services can be broken down to run in isolated containers that can be spread across multiple resources.
  • Manageability; when applications are broken down into containers developers can focus on features and individual aspects of the application rather than worrying about the whole thing.
  • Portability; containers run on any architecture and can be used almost anywhere in the stack, so the same container environment can be used from development to production.

And of course there are several caveats that include, but are also not limited to:

  • Setup and organisation can be difficult because users need to develop a strategy around how they want to operate their particular environment.
  • The compartmentalised approach of containers can lead to issues where changes in one container have a negative impact on the rest of the application.
  • Support and maintenance of applications, or application parts, inside containers becomes more difficult the more applications are broken down.
  • Since containers can share the same operating system, they share all the security threats and vulnerabilities of that OS too.

Conclusion

Virtualisation can exist anywhere computation is important. It is used to isolate whatever is being done from the host computer and to utilise specific resources more efficiently. There are two major kinds of virtualisation: virtual machines, and containers. Each has its pros and cons and can be used independently or together but both have the aim of providing flexibility and efficiency in deploying and managing applications. In our next article we will talk about some of the topics touched on here in more detail.

on February 26, 2021 11:13 AM

February 25, 2021

Ep 131 – Superstição

Podcast Ubuntu Portugal

We talked about web browsers, as snaps or otherwise, as well as cloud upgrades and chat servers. We also gave some love and attention to a few tips and conversations coming straight from the community, with youtube-dl being the topic of the moment.

You know the drill: listen, subscribe and share!

  • https://www.packtpub.com/product/mastering-ubuntu-server-third-edition/9781800564640
  • https://www.humblebundle.com/software/learn-frontend-web-development-software?partnet=PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on February 25, 2021 10:45 PM
Xubuntu 21.04 Progress Update

Today, February 25, 2021 marks Feature Freeze for the Ubuntu 21.04 release schedule. At this point, we stop introducing new features and packages and start to focus on testing and bug fixes. Feature Freeze also marks Debian Import Freeze, which means that packages we have in common with Debian will no longer automatically sync to Xubuntu for the rest of the cycle.

This makes it a great time to update you on the goings-on in Xubuntu 21.04. So far, we have a pretty impressive list of changes, both technical and user-facing.

Xfce 4.16

The highlight of this release, of course, is Xfce 4.16. Having been released in December 2020, Xfce 4.16 includes a wide variety of new features and improvements. Most visibly, Xfce has a new color palette and refreshed icons, based loosely on Adwaita. To see the new icons in action, switch to the Adwaita icon theme in the Appearance settings.

Xfce 4.16's new visual identity: a consistent set of icons based on a shared palette and design principles.

For a complete overview of the changes in Xfce 4.16, please check out the feature tour and changelog.

Ayatana Indicators

We've switched to the Ayatana indicator stack with the Xfce Indicator Plugin and LightDM GTK+ Greeter. Where the previous Application Indicator stack exists primarily in Ubuntu, Ayatana Indicators are cross-platform and available on Debian and elsewhere. This change may affect your indicator usage, as not all existing Application Indicators have been ported to Ayatana.

New Package Additions

Xubuntu 21.04 has added Hexchat (#12) and Synaptic to the desktop seed. adwaita-icon-theme-full is now included to make the included Adwaita icon theme fully functional, whereas it previously didn't include a large number of icons. Finally, mlocate has been replaced with plocate, which should result in even faster lookups with Catfish.

Xubuntu Documentation

It's been years since we last updated the included Xubuntu Documentation, and the latest packaged version doesn't even include the rewrite we completed last cycle. For now, here's what's been updated since 18.04.

  • Updated contributor-docs to current practice, PPAs used, and milestones vs testing weeks
  • Updated webchat link to accommodate the new syntax
  • Moved from deprecated gnome-doc-utils to itstool (LP: #1905548)
  • Aligned appendix-packages to seed changes
  • Fixed typos and linked refs

Help Needed

For the latest changes to the Xubuntu Documentation, we're looking for help! Docbook is not the most straightforward format, and we have a lot of changes in Google Drive that need to make their way to the docs-refresh branch on the Xubuntu GitHub. If you'd like to help out, please join us on Freenode at #xubuntu-devel.

Settings Changes

General

  • Set Gtk/CursorThemeName to DMZ-White to fix the broken mouse cursor with Snap packages (LP: #1838008)
  • Set window size of the file dialog to ensure it fits on screen with lower resolutions

Panel

  • Removed the StatusNotifier plugin, replaced by Systray (LP: #1907871)
  • Replaced the separator between the clock and tray with text padding
  • Enabled window focus support for the PulseAudio plugin
By replacing the separators, there's no longer a mouse gap between plugins and the clock is no longer smashed against the side of the screen.

Desktop

  • Removed the File System and Removable Device icons
  • Removed the applications menu from the right-click menu

File Manager

  • Switched the location selector to use the pathbar layout
  • Enabled opening folders in a new tab with a mouse middle-click
  • Disabled changing the window icon for special folders (Desktop, Documents, etc)
Thunar with the new Pathbar layout and stable window icon.
  • Added Sound entry to the Settings Manager (#7)
  • Added Xfce Terminal to the menu (LP: #1851387)
  • Removed confusing Xfce Terminal Settings from the menu
  • Removed TexInfo and Pavucontrol from the menu (#6)
Xfce Settings Manager with the new Sound entry and dropdown search.

Keyboard Shortcuts

  • Ctrl+Alt+Delete will now display the logout dialog
  • Ctrl+Shift+Escape will now launch the Task Manager
  • Super+R will now launch the Application Finder
  • Super+E and Ctrl+Alt+F will now launch the File Manager
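If you want to inspect or tweak these bindings yourself, they live in the xfce4-keyboard-shortcuts xfconf channel; a quick sketch (the exact property paths may differ on your install):

# List all configured shortcuts with their values
xfconf-query -c xfce4-keyboard-shortcuts -l -v

# Example: see what a single binding points at
xfconf-query -c xfce4-keyboard-shortcuts -p "/commands/custom/<Primary><Alt>Delete"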

And More

With two months to go, there's still a lot of work to be done and plenty of changes coming from mainline Ubuntu as well. If you'd like to join in on the fun, check out the Get Involved section of the Xubuntu website.

on February 25, 2021 12:31 PM

Snapcraft Clinic

Alan Pope

At work we have a forum where developers can discuss packaging Linux applications, specifically as snaps. Sometimes developers just want to pair through a problem, either to get it resolved for themselves or to hand whatever is blocking them off to the right people. One strategy for supporting developers that we found effective was a regular live video conference, so last year we started the Snapcraft Clinic. On a semi-regular basis we dedicate time to join anyone who has technical issues with snapping and help them.
on February 25, 2021 12:00 PM

MEC, as ETSI defines it, stands for Multi-access Edge Computing and is sometimes referred to as Mobile Edge Computing. MEC is a solution that gives content providers and software developers cloud-computing capabilities close to end users. This micro cloud, deployed at the edge of mobile operators’ networks, offers ultra-low latency and high bandwidth, which enables new types of applications and business use cases. On top of that, an application running on MEC can have real-time access to a subset of radio network information that can improve the overall experience.

MEC Use Cases


MEC opens a completely new ecosystem where a mobile network operator becomes a cloud provider, just like the big hyperscalers. Its unique capabilities, enabled by access to telecom-specific data points and a location close to the user, give mobile network operators (MNOs) a huge advantage. From a workload perspective, we can distinguish four main groups of use cases.

Services based on user location

Services based on user location utilise the location capabilities of the mobile network, provided by functions like the LMF (Location Management Function). Location capabilities get more precise with every generation of the standard, and 5G networks aim for sub-metre accuracy at sub-100 ms intervals. This allows an application to track location even for fast-moving objects like drones or connected vehicles. Simpler use cases exist too: for example, if you want to build a user engagement app for a football stadium, you can now coordinate your team’s fans for more immersive events. And if you need to control the movement of a swarm of drones, you can simply deploy your command and control (C&C) server on the edge.

IoT services

IoT services are another big group. The number of connected devices grows exponentially every year, and they produce unimaginable amounts of data. Yet it makes no sense to transfer each and every data point to the public cloud. To save bandwidth, ML models running at the edge can aggregate the data and perform simple calculations to help with decision making. The same goes for IoT software management, security updates, and device fleet control. All of these use cases make much more economic sense if they are deployed on small clouds at the edge of the network. If that is something that interests you, you can find more details on making a secure and manageable IoT device with Ubuntu Core here.

CDN and data caching

CDN and data caching use the edge to store content as close as possible to the requesting client machine, thereby reducing latency and improving page load times, video quality, and gaming experience. The main benefit of an edge cloud over a legacy CDN is that you can analyse the traffic locally and make better decisions about which content to store, and in what type of memory. A great open source way to manage your edge storage and serve it in an S3-compatible way is Ceph.

GPU intensive services

GPU-intensive services such as AI/ML, AR/VR, and video analytics all rely on computing power being available to the mobile user with low latency. Any device can benefit from a powerful GPU located on an edge micro cloud, making access to ML algorithms, video analytics APIs, or augmented-reality-based services much easier. These capabilities are also revolutionising the mobile gaming industry, giving gamers a more immersive experience and latency low enough to compete at a professional level. Canonical has partnered with NVIDIA to create a dedicated GPU-accelerated edge server.

MEC infrastructure requirements

In order to have MEC you need some infrastructure at the edge, including compute, storage, network, and accelerators. In telecom use cases, performance technologies like DPDK, SR-IOV or NUMA awareness are very important, as they allow the solution to reach the required performance. One bare-metal machine is not enough, and two servers are just a single one and a spare. With 3 servers, which would be a minimum for an edge site, we have a cloud, albeit a small one. At Canonical these are called micro clouds: a new class of compute for the edge, made of resilient, self-healing and opinionated technologies reusing proven cloud primitives.

To choose a proper micro cloud setup, you need to know the workload that will run on it. As the business need is to expose edge sites to a huge market of software developers, it needs to be something developers are familiar with and like. That’s why you don’t deploy OpenStack on the edge site; what you need is a Kubernetes cluster. The best-case scenario for your operations team is to manage, deploy and upgrade such an edge site with the same tools they use in the main data centre.

Typical edge site design

Canonical provides all the elements necessary to build an edge stack, top to bottom: MAAS to manage bare-metal hardware, LXD clustering to provide an abstraction layer of virtualisation, Ceph for distributed storage, and MicroK8s to provide a Kubernetes cluster. All of these projects are modular and open source, so you can set everything up yourself, or reach out to discuss more supported options.
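As a taste of the Kubernetes layer of such a micro cloud, a single MicroK8s node can be stood up in a few commands (a sketch; the channel shown is an assumption, pick whichever release you need):

sudo snap install microk8s --classic --channel=1.20/stable
sudo microk8s status --wait-ready   # wait until the node reports it is running
sudo microk8s enable dns storage    # turn on basic cluster add-ons
sudo microk8s kubectl get nodes     # talk to the cluster with the bundled kubectl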


Obviously, the edge is not a single site. Managing many micro clouds efficiently is a crucial task facing mobile operators. I would suggest using an orchestration solution that communicates directly with your MAAS instances, without any intermediate “big MAAS” as a middle man. The only thing you need is a service that returns each site’s name, location and network address. To simplify edge management even further, all open source components managed by Canonical use semantic channels. You might be familiar with the concept already, as you use it when you install software on your local desktop using:

sudo snap install vlc --channel=3.0/stable/fix-playback

A channel is a combination of <track>/<risk>/<branch>, plus epochs, creating a communication protocol between software developers and users. You will find the same concept used in charms, snaps and LTS Docker images.

You now know what MEC is and what its underlying infrastructure looks like. I encourage you to try it out yourself and share your story with us on Twitter. This is a great skill to have, as all major analyst companies now agree that edge micro clouds will take over from the public cloud and be the next big environment all of us will write software for in the future.

on February 25, 2021 07:32 AM

February 24, 2021

Helping your users stay up to date on their workstation is something I believe OS vendors should endeavour to do, to the best of their ability. Some users aren’t able to find time to install updates, or are irritated by update dialogs. Others are skeptical of their contents, and some even block updates completely. No OS vendor wants to be “That Guy” featured in the news as millions of their customers are found to be vulnerable on their watch.
on February 24, 2021 12:00 PM

February 23, 2021

I was cleaning up some old electronics (I’m a bit of a pack rat) and came across a Mac Mini I’ve owned since 2009. I was curious whether it still worked and whether it could get useful work done. This turned out to be more than a 5 minute experiment, so I thought I’d write it up here as it was just an interesting little test.

The Hardware

The particular model I have is known as “Macmini2,1” or “MB139*/A” or “Mid 2007”, with the following specs:

  • Intel Core 2 Duo T7200 at 2.0 GHz
  • 2 GB DDR2 SDRAM (originally 1GB, I upgraded)
  • 120GB HDD

The Software

The last version of Mac OS that was supported is Mac OS X 10.7 “Lion”, which has been unsupported since 2014. Since I’m a Linux guy anyway, I figured I’d see about installing Linux on this. Unfortunately, according to the Debian wiki, this device won’t boot from USB, and I don’t have any blank optical media to burn to. This was the first point where I nearly decided this wasn’t worth my time, but I decided to push on.

Linux is pretty good about booting on any hardware, even if it’s not the hardware you installed on, as kernel module drivers are loaded based on present hardware. I decided to try installing to a disk and then swapping disks and seeing if the Mac Mini would boot. The EFI on the Mac Mini supports BIOS emulation, and that seemed the more likely to work out of the box.

I plugged a spare SSD into my SATA dock and then used a virtual machine with a raw disk to install Debian testing on the SSD. I then used the excellent iFixIt teardown and my iFixit toolkit to open the Mac Mini and swap out the drive. I point to the teardown because opening a Mac Mini is neither obvious nor trivial.

Booting

I plugged in the Mac Mini along with a network cable and powered it on, hoping to see it just appear on the network. I gave it adequate time to boot and did a port scan to find it – and got nothing. Thinking it might have been a first boot issue, I rebooted the Mac Mini, waited even longer, and checked again – and once again, couldn’t find it. I checked the logs on my DHCP server, and there was nothing relevant there. This is the second point at which I considered quitting on this.

I decided to see what error I might have been getting, or at least how far it would get in booting, so I dug out a DVI cable and hooked it up to a monitor. Powering it on again, I got 30 seconds of grey screen from the EFI (due to the BIOS boot delay mentioned in the Debian wiki page), and then – Debian booted normally.

Okay, maybe networking was just broken. I did another port scan of my lab network – and there it was. Somehow it had just started working. I felt so confused at this point. I began to wonder if connecting a monitor had been the fix somehow. A few Google searches later, I had confirmed my suspicion – this Mac Mini model (and several others) will not boot unless it detects an attached monitor. There’s a workaround involving a resistor between two of the analog pins (or a commercial DVI emulator), but for the moment, I just kept the monitor attached.

At this point, I had the Mac Mini running Debian Testing and everything seemed to be more or less working. But would it be worth it in terms of computing power and electrical power?

Benchmarking & Comparison

I decided to run just a handful of CPU benchmarks. I wasn’t looking to tweak this system to find the maximal performance, just to get an idea of where it stands as a system.

The first run was a 7-zip benchmark. The Mac Mini managed about 3700 MB/s for compression. (Average across all dictionary sizes.) My laptop with a Core i5-5200U did 6345MB/s, and my Ryzen 7 3700X in my desktop managed a whopping 57,250MB/s!

With OpenSSL, I checked both SHA-512 and AES-128-CBC mode. For SHA-512 computations, the Mac Mini managed about 200 MB/s, my laptop 470 MB/s, and my desktop 903 MB/s. For AES-128-CBC, the Mac Mini is 89MB/s, my laptop 594MB/s, and my desktop a whopping 1.6GB/s! This result is obviously heavily skewed by the AES-NI instructions present on my laptop and desktop, but not the Mac Mini. (These are all single-thread results.)

Finally, I ran the POV-Ray 3.7 benchmark. The Mac Mini took 952s, my laptop 452s, and my desktop just 54s.
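For reference, these are the kinds of commands such numbers typically come from (my guess at the invocations; 7-Zip and OpenSSL ship their own benchmark modes, and POV-Ray 3.7 was run with its built-in benchmark scene):

# 7-Zip built-in benchmark (compression/decompression throughput)
7z b

# OpenSSL single-threaded digest and cipher throughput
openssl speed sha512
openssl speed -evp aes-128-cbc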

I began to wonder how all these results compared to something like a Raspberry Pi, so I pulled out a Pi 3B+ and a Pi 4B and ran the same benchmarks again.

| Device             | 7-Zip      | SHA-512  | AES-128   | POV-Ray 3.7 |
|--------------------|------------|----------|-----------|-------------|
| Mac Mini w/T7200   | 3713 MB/s  | 193 MB/s | 89 MB/s   | 952s        |
| Laptop (i5-5200U)  | 6345 MB/s  | 470 MB/s | 593 MB/s  | 452s        |
| Desktop (R7-3700X) | 57250 MB/s | 903 MB/s | 1591 MB/s | 54s         |
| Raspberry Pi 3B+   | 1962 MB/s  | 31 MB/s  | 47 MB/s   | 1897s       |
| Raspberry Pi 4B    | 3582 MB/s  | 204 MB/s | 91 MB/s   | 597s        |

As can be seen, in most of the tests the Mac Mini with a Core 2 Duo is trading blows with the Raspberry Pi 4B – and gets handily beaten in the POV-Ray 3.7 test. Below is a chart of normalized test results, where the slowest device is 1.0 (always the Pi 3B+) and the other values show how many times faster each system is.

Normalized Relative Performance

During all of these tests, I had the Mac Mini plugged into a Kill-A-Watt Meter to measure the power consumption. Idling, it’s around 20 watts. Under one of these load tests, it reaches about 45-49 watts. Given that the Raspberry Pi 4B only uses around 5W under full load, the Pi 4B absolutely destroys this Mac Mini in performance-per-watt. (Note, again, this is an old Mac Mini – it’s no surprise that it’s not an even comparison.)

Conclusion

Given the lack of expandability, the mediocre baseline performance, and the very poor performance per watt, I can’t see using this for much, if anything. Running it 24/7 for a home server doesn’t offer much over a Raspberry Pi 4B, and the I/O is only slightly better. At this point, it’s probably headed for the electronics recycling center.

on February 23, 2021 08:00 AM

February 22, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 671 for the week of February 14 – 20, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on February 22, 2021 10:00 PM

February 21, 2021

ReText turns 10 years

Dmitry Shachnev

Exactly ten years ago, in February 2011, the first commit in the ReText git repository was made. It was just a single 364-line Python file back then (now the project has more than 6000 lines of Python code).

Since 2011, the editor migrated from SourceForge to GitHub, gained a lot of new features, and — most importantly — now there is an active community around it, which includes both long-time contributors and newcomers who create their first issues or pull requests. I don’t always have enough time to reply to issues or implement new features myself, but the community members help me with this.

Earlier this month, I made a new release (7.2), which adds a side panel with a directory tree (contributed by Xavier Gouchet), an option to fully highlight wrapped lines (contributed by nihillum), the ability to search in preview mode and much more — see the release page on GitHub.

Side panel in ReText

Also, a new version of the PyMarkups module was released, which contains all the code for processing various markup languages. It now supports markdown-extensions.yaml files, which allow specifying complex extension options, and adds initial support for MathJax 3.

Also check out the release notes for 7.1 which was not announced on this blog.

Future plans include making at least one more release this year, adding support for Qt 6. Qt 5 support will last for at least one more year.

on February 21, 2021 06:30 PM

February 20, 2021

Late February 2021 Miscellany

Stephen Michael Kellat

In no particular order:

  • The online filmed church services are presently in abeyance. A suitable post was made to the YouTube channel indicating that the work may resume eventually. For now there is no anticipated resumption date.
  • Alan Pope recently posted a blog entry talking about what he listens to in terms of podcasts. Oddly enough I track what I subscribe to in terms of podcast feeds in a git repository on Launchpad at the moment.
  • I remain flabbergasted that I was asked to be a local candidate in this year’s municipal elections. That unexpected project is now in progress apparently.
  • The project on building a community newspaper-like thing in a semi-automated fashion using LaTeX continues.
  • Sometimes I see discussion threads out there that perhaps beggar belief.
  • This was not unexpected though still somewhat funny.
  • The county broadband needs assessment project continues. I sat in a focus group videoconference on Thursday. It did not go so well. A needs assessment is supposed to surface needs and what is not going right, rather than highlight “points of pride” and what is going well.
  • Adventures with WSL continue.
on February 20, 2021 07:47 PM

February 18, 2021

Ep 130 – FOSDEM 2021

Podcast Ubuntu Portugal

Revisiting Hacktoberfest, we took another look at Interruptor’s project “Onde é que pára a Cultura” and its latest developments. We also reflected on FOSDEM 2021, held this year in a format completely different from the usual, but as interesting and relevant as ever.

You know the drill: listen, subscribe and share!

  • https://github.com/InterruptorPt/ate-onde-chega-cultura/
  • https://twitter.com/AlexNetoGeo/status/1358180479313272832
  • https://github.com/InterruptorPt/ate-onde-chega-cultura/issues/65
  • https://github.com/InterruptorPt/ate-onde-chega-cultura/pull/66
  • https://twitter.com/waldyrious/status/1358408228695142404
  • https://github.com/repology/repology-wikidata-bot
  • https://www.wikidata.org/wiki/User:Repology_bot
  • https://twitter.com/_ShinIce/status/1358461185537044488
  • https://fosdem.org/2021/schedule/event/digitalmarketsact/
  • https://fosdem.org/2021/schedule/event/git_learning_game/
  • https://www.packtpub.com/product/mastering-ubuntu-server-third-edition/9781800564640
  • https://www.humblebundle.com/books/make-your-own-magic-inventions-make-co-books?partnet=PUP
  • http://keychronwireless.refr.cc/tiagocarrondo
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you want.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, “Senhor Podcast”.

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorisation.

on February 18, 2021 10:45 PM

APT 2.2 released

Julian Andres Klode

APT 2.2.0 marks the freeze of the 2.1 development series and the start of the 2.2 stable series.

Let’s have a look at what changed compared to 2.0. Many of you who run Debian testing or unstable, or Ubuntu groovy or hirsute, will already have seen most of these changes.

New features

  • Various patterns related to dependencies, such as ?depends are now available (2.1.16)
  • The Protected field is now supported. It replaces the previous Important field and is like Essential, but only for installed packages (there may be some more minor differences, for example in terms of ordering installs).
  • The update command has gained an --error-on=any option that makes it error out on any failure, not just what it considers persistent ones.
  • The rred method can now be used as a standalone program to merge pdiff files.
  • APT now implements phased updates. Phasing is used in Ubuntu to slow down and control the roll out of updates in the -updates pocket, but has previously only been available to desktop users using update-manager.
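To make the dependency patterns and the new --error-on option concrete, here is roughly what they look like on the command line (a sketch; the package name is just an example, and my best guess at the pattern syntax documented in apt-patterns(7)):

# Fail the update if any repository cannot be fetched, not just on persistent errors
sudo apt update --error-on=any

# Dependency patterns: list installed packages that depend on libssl1.1
apt list '?and(?installed, ?depends(?name(libssl1.1)))'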

Other behavioral changes

  • The kernel autoremoval helper code has been rewritten from shell into C++ and now runs at run-time, rather than at kernel install time, in order to correctly protect the kernel that is running now, rather than the kernel that was running when the newest one was installed.

    It also now protects only up to 3 kernels, instead of up to 4, as was originally intended and was the case before the 1.1 series. This prevents /boot partitions from running out of space, especially on Ubuntu, which has boot partitions sized for the original spec.

Performance improvements

  • The cache is now hashed using XXH3 instead of Adler32 (or CRC32c on SSE4.2 platforms)
  • The hash table size has been increased

Bug fixes

  • * wildcards work normally again (since 2.1.0)
  • The cache file now includes all translation files in /var/lib/apt/lists, so multi-user systems with different locales correctly show translated descriptions now.
  • URLs are no longer dequoted on redirects only to be requoted again, fixing some redirects where servers did not expect different quoting.
  • Immediate configuration is now best-effort, and failure is no longer fatal.
  • Various changes to solver marking leading to different/better results in some cases (since 2.1.0)
  • The lower level I/O bits of the HTTP method have been rewritten to hopefully improve stability
  • The HTTP method no longer infinitely retries downloads on some connection errors
  • The pkgnames command no longer accidentally includes source packages
  • Various fixes from fuzzing efforts by David

Security fixes

  • Out-of-bound reads in ar and tar implementations (CVE-2020-3810, 2.1.2)
  • Integer overflows in ar and tar (CVE-2020-27350, 2.1.13)

(all of which have been backported to all stable series, back all the way to 1.0.9.8.* series in jessie eLTS)

Incompatibilities

  • N/A - there were no breaking changes in apt 2.2 that we are aware of.

Deprecations

  • apt-key(1) is scheduled to be removed for Q2/2022, and several new warnings have been added.

    apt-key was made obsolete in version 0.7.25.1, released in January 2010, by /etc/apt/trusted.gpg.d becoming a supported place to drop additional keyring files, and was since then only intended for deleting keys in the legacy trusted.gpg keyring.

    Please manage files in trusted.gpg.d yourself; or place them in a different location such as /etc/apt/keyrings (or make up your own, there’s no standard location) or /usr/share/keyrings, and use signed-by in the sources.list.d files (see the sketch after this list).

    The legacy trusted.gpg keyring still works, but will also stop working eventually. Please make sure you have all your keys in trusted.gpg.d. Warnings might be added in the upcoming months when a signature could not be verified using just trusted.gpg.d.

    Future versions of APT might switch away from GPG.

  • As a reminder, regular expressions and wildcards other than * inside package names are deprecated (since 2.0). They are not available anymore in apt(8), and will be removed for safety reasons in apt-get in a later release.
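As referenced above, a repository definition using signed-by might look like this (hypothetical repository URL and keyring file name):

# /etc/apt/sources.list.d/example.list
deb [signed-by=/etc/apt/keyrings/example-archive-keyring.gpg] https://example.com/apt stable main

# with the corresponding key dropped at:
#   /etc/apt/keyrings/example-archive-keyring.gpg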

on February 18, 2021 08:09 PM

February 15, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 670 for the week of February 7 – 13, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on February 15, 2021 09:34 PM

EBBR on EspressoBin

Marcin Juszkiewicz

SBBR or GTFO

Me.

Yeah, right. But the world is not so nice, and there are many cheap SBCs on the market which are not SBBR compliant and probably never will be. And with a small amount of work they can do EBBR (Embedded Base Boot Requirements).

NOTE: I have similar post about EBBR on RockPro64 board.

WTH is EBBR?

It is a specification for devices which are not servers and do not pretend to be. U-Boot is all they have, and with a properly configured one they get a subset of the EFI Boot/Runtime Services needed to load a distribution bootloader (usually grub-efi), just like it is done on servers.

ACPI is not required but may be present. DeviceTree is perfectly fine. You may provide both or one of them.

Firmware can be stored wherever you wish. Even MBR partitioning is available if really needed.

Few words about board itself

The EspressoBin has 4MB of SPI flash on board. That is less than on the RockPro64, but still enough for storing firmware (U-Boot takes less than 1MB).

This SBC is nothing new — the first version was released in 2016. There have been several revisions with different memory types, numbers of RAM chips, RAM generation (DDR3 or DDR4), CPU speed and some more changes.

I got an EspressoBin revision 5 with 1GB of DDR3 RAM in 2 chips, and a 1GHz processor.

It may sound silly that I repeat this information, but it matters when you start building firmware for the board.

So let us build fresh firmware

This is Marvell, so abandon all hope for sanity.

Thanks to the Arm Trusted Firmware authors there is good documentation on how to build firmware for the EspressoBin, which guides you step by step and explains all the arguments you need. For me it was several git clone calls and then two make calls:

make -C u-boot CROSS_COMPILE=aarch64-linux-gnu- \
mvebu_espressobin-88f3720_defconfig u-boot.bin -j12

make -C trusted-firmware-a CROSS_COMPILE=aarch64-linux-gnu- \
CROSS_CM3=arm-none-linux-gnueabihf- PLAT=a3700 \
CLOCKSPRESET=CPU_1000_DDR_800 DDR_TOPOLOGY=2 \
MV_DDR_PATH=$PWD/marvell/mv-ddr-marvell/ \
WTP=$PWD/marvell/A3700-utils-marvell/ \
CRYPTOPP_PATH=$PWD/marvell/cryptopp/ \
BL33=$PWD/u-boot/u-boot.bin \
mrvl_flash -j12

And I had to install a cross toolchain for 32-bit Arm, because the one I had was for building kernels/bootloaders only.

Is your U-Boot friendly or not?

First you need to check which version of U-Boot and which hardware you have. Then check whether it recognises the SPI flash or not:

Marvell>> sf probe
SF: unrecognized JEDEC id bytes: 9d, 70, 16
Failed to initialize SPI flash at 0:0 (error -2)

I had bad luck, as my board used an SPI chip not recognised by any official U-Boot build…

Armbian to the rescue

I asked in a few places whether anyone had any experience with this board. One of them was the #debian-arm IRC channel, where I got a hint from Xogium that Armbian may have U-Boot builds.

And they have a whole page about the EspressoBin, with information on how to choose firmware files etc.

So I downloaded the archive with the proper files for UART recovery. The important thing to remember is that once you move the jumpers and load all the firmware files over serial, they are not written to SPI flash, so a reset of the board means you start over.

A quick check whether the SPI flash is detected:

Marvell>> sf probe
SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB

Yeah! Now I can start USB and flash my own firmware build:

Marvell>> bubt flash-image.bin spi usb
Burning U-Boot image "flash-image.bin" from "usb" to "spi"
Bus usb@58000: Register 2000104 NbrPorts 2
Starting the controller
USB XHCI 1.00
Bus usb@5e000: USB EHCI 1.00
scanning bus usb@58000 for devices... 1 USB Device(s) found
scanning bus usb@5e000 for devices... 2 USB Device(s) found
Image checksum...OK!
SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
Erasing 991232 bytes (242 blocks) at offset 0 ...Done!
Writing 990944 bytes from 0x6000000 to offset 0 ...Done!

A quick reset and the board boots to fresh, mainline U-Boot:

TIM-1.0
WTMI-devel-18.12.1-1a13f2f
WTMI: system early-init
SVC REV: 5, CPU VDD voltage: 1.108V
NOTICE:  Booting Trusted Firmware
NOTICE:  BL1: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL1: Built : 17:11:19, Feb 15 2021
NOTICE:  BL1: Booting BL2
NOTICE:  BL2: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL2: Built : 17:11:20, Feb 15 2021
NOTICE:  BL1: Booting BL31
NOTICE:  BL31: v2.4(release):v2.4-345-g04c122310 (Marvell-devel-18.12.2)
NOTICE:  BL31: Built : 18:07:02, Feb 15 2021


U-Boot 2021.01 (Feb 15 2021 - 19:25:41 +0100)

DRAM:  1 GiB
Comphy-0: USB3_HOST0    5 Gbps    
Comphy-1: PEX0          2.5 Gbps  
Comphy-2: SATA0         5 Gbps    
SATA link 0 timeout.
AHCI 0001.0300 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
flags: ncq led only pmp fbss pio slum part sxs 
PCIE-0: Link down
MMC:   sdhci@d0000: 0, sdhci@d8000: 1
Loading Environment from SPIFlash... SF: Detected is25wp032 with page size 256 Bytes, erase size 4 KiB, total 4 MiB
OK
Model: Globalscale Marvell ESPRESSOBin Board
Card did not respond to voltage select! : -110
Net:   eth0: neta@30000
Hit any key to stop autoboot:  0 

Final steps

OK, so the SBC has fresh, mainline firmware. Nice. But some stuff still needs to be done.

First, note the MAC addresses of the Ethernet ports. Use the printenv command to check the stored environment and note a few variables:

eth1addr=f0:ad:4b:aa:97:01
eth2addr=f0:ad:4b:aa:97:02
eth3addr=f0:ad:4b:aa:97:03
ethaddr=f0:ad:4e:72:10:ef

Of course you may also skip that step and rely on random ones, or choose your own (I had a router with C0:FF:EE:C0:FF:EE in the past).

Then reset the environment to the default values stored in the U-Boot binary and set those MAC addresses by hand:

=> env default -a -f
=> setenv eth1addr f0:ad:4b:aa:97:01
=> setenv eth2addr f0:ad:4b:aa:97:02
=> setenv eth3addr f0:ad:4b:aa:97:03
=> setenv ethaddr f0:ad:4e:72:10:ef
=> saveenv
Saving Environment to SPIFlash... Erasing SPI flash...Writing to SPI flash...done
OK
=> 

What EBBR brings?

Now your board is ready to boot Debian, Fedora and several other distributions' install media with two commands:

=> set boot_targets usb0
=> boot

It will find EFI bootloader and start it. Just like on any other boring SBBR/EBBR system.

Distributions with old-style ‘boot.scr’ scripts (like OpenWRT, for example) will also work, so no functionality is lost.

on February 15, 2021 08:53 PM

Back in October 2018 the Ubuntu MATE team released bespoke images of Ubuntu MATE 18.10 for the GPD Pocket and GPD Pocket 2 that included hardware specific tweaks to get these devices working “out of the box” without any faffing about. Today we are releasing Ubuntu MATE 18.04.2 and Ubuntu MATE 19.04 images for both devices. Read on to find out more…

Ubuntu MATE 18.04.2 running on the GPD Pocket (left) and 19.04 on the GPD Pocket 2 (right)

What’s new?

Ubuntu MATE 18.04.2

Thanks to the recent hardware enablement stack upgrade in Ubuntu it is now possible to create images based on Ubuntu MATE 18.04.2 for the GPD Pocket and GPD Pocket 2. These images are final and available to download now!

Ubuntu MATE 19.04

The Ubuntu MATE 19.04 release is just days away, so we have also prepared Ubuntu MATE 19.04 Beta images for both devices. They are also available for download now, and you can simply collect updates to get from the beta to the final release on April 18th 2019.

Improvements

Thanks to the feedback and contributions from the community, these are the improvements we’ve made since the Ubuntu MATE 18.10 images were created:

  • Frame buffer and Xorg display rotation now works with modesetting and xorg-video-intel display drivers.
  • Enabled TearFree rendering by default.
  • Updated touch screen rotation to support Xorg and Wayland.
  • Enabled an emulated mouse scroll wheel, activated by holding down the right track point button and moving the trackpoint up/down.
  • GRUB is now usable post-install for both devices!

More Details & Downloads

Find out more about Ubuntu MATE for the GPD Pocket and Pocket 2. Get the downloads!

Details & Downloads
on February 15, 2021 11:29 AM

The releases following an LTS are always a good time ⌚ to make changes that set the future direction 🗺️ of the distribution with an eye on where we want to be for the next LTS release. Therefore, Ubuntu MATE 20.10 ships with the latest MATE Desktop 1.24.1, keeps pace with other developments within Ubuntu (such as Active Directory authentication) and has migrated to the Ayatana Indicators project.

If you want bug fixes :bug:, kernel updates :corn:, a new web camera control :movie_camera:, and a new indicator :point_right: experience, then 20.10 is for you :tada:. Ubuntu MATE 20.10 will be supported for 9 months until July 2021. If you need Long Term Support, we recommend you use Ubuntu MATE 20.04 LTS.

Read on to learn more… :point_down:

Ubuntu MATE 20.10 (Groovy Gorilla)

What’s changed since Ubuntu MATE 20.04?

MATE Desktop

If you follow the Ubuntu MATE twitter account 🐦 you’ll know that MATE Desktop 1.24.1 was recently released. Naturally Ubuntu MATE 20.10 features that maintenance release of MATE Desktop. In addition, we have prepared updated MATE Desktop 1.24.1 packages for Ubuntu MATE 20.04 that are currently in the SRU process. Given the number of MATE packages being updated in 20.04, it might take some time ⏳ for all the updates to land, but we’re hopeful that the fixes and improvements from MATE Desktop 1.24.1 will soon be available for those of you running 20.04 LTS 👍

Active Directory

The Ubuntu Desktop team added the option to enroll your computer into an Active Directory domain 🔑 during install. We’ve been tracking that work and the same capability is available in Ubuntu MATE too.

Enroll your computer into an Active Directory domain

Ayatana Indicators

There is a significant under the hood change 🔧 in Ubuntu MATE 20.10 that you might not even notice 👀 at a surface level; we’ve replaced Ubuntu Indicators with Ayatana Indicators.

We’ll explain some of the background, why we’ve made this change, the short term impact and the long term benefits.

What are Ayatana Indicators?

In short, Ayatana Indicators is a fork of Ubuntu Indicators that aims to be cross-distro compatible and re-usable for any desktop environment 👌 Indicators were developed by Canonical some years ago, initially for the GNOME2 implementation in Ubuntu and then refined for use in the Unity desktop. Ubuntu MATE has supported the Ubuntu Indicators for some years now and we’ve contributed patches to integrate MATE support into the suite of Ubuntu Indicators. Existing indicators are compatible with Ayatana Indicators.

We have migrated Ubuntu MATE 20.10 to Ayatana Indicators and Arctica Greeter. I live streamed 📡 the development work to switch from Ubuntu Indicators to Ayatana Indicators which you can find below if you’re interested in some of the technical details 🤓

The benefits of Ayatana Indicators

Ubuntu MATE 20.10 is our first release to feature Ayatana Indicators and as such there are a couple of drawbacks; there is no messages indicator and no graphical tool to configure the display manager greeter (login window) 😞

Both will return in a future release and the greeter can be configured using dconf-editor in the meantime.

Configuring Arctica Greeter with dconf-editor

That said, there are significant benefits that result from migrating to Ayatana Indicators:

  • Debian and Ubuntu MATE are now aligned with regards to Indicator support; patches are no longer required in Ubuntu MATE which reduces the maintenance overhead.
  • MATE Tweak is now a cross-distro application, without the need for distro specific patches.
  • We’ve switched from Slick Greeter to Arctica Greeter (both forks of Unity Greeter).
    • Arctica Greeter integrates completely with Ayatana Indicators; so there is now a consistent Indicator experience in the greeter and desktop environment.
  • Multiple projects are now using Ayatana Indicators, including desktop environments, distros and even mobile phone projects such as UBports. With more developers collaborating in one place we are seeing the collection of available indicators grow 📈
  • Through UBports contributions to Ayatana Indicators we will soon have a Bluetooth indicator that can replace Blueman, providing a much simpler way to connect and manage Bluetooth devices. UBports have also been working on a network indicator and we hope to consolidate that to provide improved network management as well.
  • Other indicators that are being worked on include printers, accessibility, keyboard (long absent from Ubuntu MATE), webmail and display.

So, that is the backstory about how developers from different projects come together to collaborate on a shared interest and improve software for their users 💪

Webcamoid

We’ve replaced Cheese 🧀 with Webcamoid 🎥 as the default webcam tool for several reasons.

  • Webcamoid is a full webcam/capture configuration tool with recording, overlays and more, unlike Cheese. While there were initial concerns 😔 because Webcamoid is a Qt5 app, nearly all of its requirements are already pulled into the image via YouTube-DL 🎉.
  • We’ve disabled notifications 🔔 of Webcamoid updates when it is installed as a deb from the universe pocket, since these would produce errors on the user’s system and push them towards downloading a non-deb version. This only affects users who don’t have an existing Webcamoid configuration.

Linux Kernel

Ubuntu MATE 20.10 includes the 5.8 Linux kernel. This includes numerous updates and added support since the 5.4 Linux kernel released in Ubuntu 20.04 LTS. Some notable examples include:

  • Airtime Queue limits for better WiFi connection quality
  • Btrfs RAID1 with 3 and 4 copies and more checksum alternatives
  • USB 4 (Thunderbolt 3 protocol) support added
  • X86 Enable 5-level paging support by default
  • Intel Gen11 (Ice Lake) and Gen12 (Tiger Lake) graphics support
  • Initial support for AMD Family 19h (Zen 3)
  • Thermal pressure tracking for better task placement with respect to CPU cores
  • XFS online repair
  • OverlayFS pairing with VirtIO-FS
  • General Notification Queue for key/keyring notification, mount changes, etc.
  • Active State Power Management (ASPM) for improved power savings of PCIe-to-PCI devices
  • Initial support for POWER10

Raspberry Pi images

We have been preparing Ubuntu MATE 20.04 images for the Raspberry Pi and we will be releasing final images for 20.04 and 20.10 in the coming days 🙂

Major Applications

Accompanying MATE Desktop 1.24.1 and Linux 5.8 are Firefox 81, LibreOffice 7.0.2, Evolution 3.38 & Celluloid 0.18.


See the Ubuntu 20.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE Today

This new release will be first available for PC/Mac users.

Download

Upgrading from Ubuntu MATE 20.04 LTS

You can upgrade to Ubuntu MATE 20.10 from Ubuntu MATE 20.04 LTS. Ensure that you have all updates installed for your current version of Ubuntu MATE before you upgrade.

  • Open “Software & Updates” from the Control Center.
  • Select the third tab, “Updates”.
  • Set the “Notify me of a new Ubuntu version” drop down menu to “For any new version”.
  • Press Alt+F2 and type in update-manager -c -d into the command box.
  • Update Manager should open up and tell you: New distribution release ‘XX.XX’ is available.
    • If not, you can use /usr/lib/ubuntu-release-upgrader/check-new-release-gtk
  • Click “Upgrade” and follow the on-screen instructions.

There are no offline upgrade options for Ubuntu MATE. Please ensure you have network connectivity to one of the official mirrors or to a locally accessible mirror and follow the instructions above.
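
If you prefer to do the upgrade from a terminal instead, the standard Ubuntu release upgrade tooling should also work. This is a rough sketch rather than part of the official instructions:

# make sure 20.04 is fully up to date first
sudo apt update && sudo apt full-upgrade
# start the upgrade to the next release; this honours the
# "For any new version" (Prompt=normal) setting mentioned above
sudo do-release-upgrade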

Known Issues

Here are the known issues.

  • Ayatana Indicators: Clock missing on panel upon upgrade to 20.10. No workaround or upstream link is listed.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on February 15, 2021 11:29 AM

OpenUK Belonging

Jonathan Riddell

OpenUK is an organisation promoting open tech, come join us and belong. OpenUK Belonging video.

Sign up to our letter by sharing it on social media with the hashtag #OpenUKBelonging. OpenUK seeks Belonging Partners – not-for-profit organisations who encourage diversity and inclusion through their activities – to be a part of our ecosystem, to advance belonging in Open Technology together, and to sign up to this letter by sharing it on social media. We will launch these partnerships on International Women’s Day on 8 March and will support each of the partners throughout the year.

on February 15, 2021 11:23 AM

February 14, 2021

The upcoming version of Clang 12 includes a new traversal mode which can be used for easier matching of AST nodes.

I presented this mode at EuroLLVM and ACCU 2019, but at the time I was calling it “ignoring invisible” mode. The primary aim is to make AST Matchers easier to write by requiring less “activation learning” from newcomers to the AST Matcher API. I’m analogizing to “activation energy” here – this mode reduces the amount of learning of new concepts that must be done before starting to use AST Matchers.

The new mode is a mouthful – IgnoreUnlessSpelledInSource – but it makes AST Matchers easier to use correctly and harder to use incorrectly. Some examples of the mode are available in the AST Matchers reference documentation.

In clang-query, the mode affects both matching and dumping of AST nodes and it is enabled with:

set traversal IgnoreUnlessSpelledInSource

while in the C++ API of AST Matchers, it is enabled by wrapping a matcher in:

traverse(TK_IgnoreUnlessSpelledInSource, ...)

The result is that matching of AST nodes corresponds closely to what is written syntactically in the source, rather than corresponding to the somewhat arbitrary structure implicit in the clang::RecursiveASTVisitor class.

Using this new mode makes it possible to “add features by removing code” in clang-tidy, making the checks more maintainable and making it possible to run checks in all language modes.

Clang does not use this new mode by default.
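
For reference, here is a minimal sketch of opting a matcher into the mode when registering it with a MatchFinder in a tool. The function and callback names are illustrative; traverse(), TK_IgnoreUnlessSpelledInSource and the matchers themselves are the real API:

#include "clang/ASTMatchers/ASTMatchFinder.h"
#include "clang/ASTMatchers/ASTMatchers.h"

using namespace clang::ast_matchers;

void registerReturnValMatcher(MatchFinder &Finder,
                              MatchFinder::MatchCallback *Callback) {
  // Wrapping the matcher in traverse() opts this matcher into the new mode;
  // anything left unwrapped keeps the default (implicit-node) traversal.
  Finder.addMatcher(
      traverse(clang::TK_IgnoreUnlessSpelledInSource,
               returnStmt(hasReturnValue(
                   integerLiteral().bind("returnVal")))),
      Callback);
}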

Implicit nodes in expressions

One of the issues identified is that the Clang AST contains many nodes which must exist in order to satisfy the requirements of the language. For example, a simple function relying on an implicit conversion might look like.

struct A {
    A(int);
    ~A();
};

A f()
{
    return 42;
}

In the new IgnoreUnlessSpelledInSource mode, this is represented as

ReturnStmt
`-IntegerLiteral '42'

and the integer literal can be matched with

returnStmt(hasReturnValue(integerLiteral().bind("returnVal")))

In the default mode, the AST might be (depending on C++ language dialect) represented by something like:

ReturnStmt
`-ExprWithCleanups
  `-CXXConstructExpr
    `-MaterializeTemporaryExpr
      `-ImplicitCastExpr
        `-CXXBindTemporaryExpr
          `-ImplicitCastExpr
            `-CXXConstructExpr
              `-IntegerLiteral '42'

To newcomers to the Clang AST, and to me, it is not obvious what all of the nodes there are for. I can reason that an instance of A must be constructed. However, there are two CXXConstructExprs in this AST and many other nodes, some of which are due to the presence of a user-provided destructor, others due to the temporary object. These kinds of extra nodes appear in most expressions, such as when processing arguments to a function call or constructor, declaring or assigning a variable, converting something to bool in an if condition etc.

There are already AST Matchers such as ignoringImplicit() which skip over some of the implicit nodes in AST Matchers. Still though, a complete matcher for the return value of this return statement looks something like

returnStmt(hasReturnValue(
    ignoringImplicit(
        ignoringElidableConstructorCall(
            ignoringImplicit(
                cxxConstructExpr(hasArgument(0,
                    ignoringImplicit(
                        integerLiteral().bind("returnVal")
                        )
                    ))
                )
            )
        )
    ))

Another mouthful.

There are several problems with this.

  • Typical clang-tidy checks which deal with expressions tend to require extensive use of such ignoring...() matchers. This makes the matcher expressions in such clang-tidy checks quite noisy
  • Different language dialects represent the same C++ code with different AST structures/extra nodes, necessitating testing and implementing the check in multiple language dialects
  • The requirement or possibility to use these intermediate matchers at all is not easily discoverable, nor are the matchers required to satisfy all language modes easily discoverable
  • If an AST Matcher is written without explicitly ignoring implicit nodes, Clang produces lots of surprising results and incorrect transformations

Implicit declaration nodes

Aside from implicit expression nodes, Clang AST Matchers also match on implicit declaration nodes in the AST. That means that if we wish to make copy constructors in our codebase explicit we might use a matcher such as

cxxConstructorDecl(
    isCopyConstructor()
    ).bind("prepend_explicit")

This will work fine in the new IgnoreUnlessSpelledInSource mode.

However, in the default mode, if we have a struct with a compiler-provided copy constructor such as:

struct Copyable {
    OtherStruct m_o;
    Copyable();
};

we will match the compiler provided copy constructor. When our check inserts explicit at the copy constructor location it will result in:

struct explicit Copyable {
    OtherStruct m_o;
    Copyable();
};

Clearly this is an incorrect transformation despite the transformation code “looking” correct. This AST Matcher API is hard to use correctly and easy to use incorrectly. Because of this, the isImplicit() matcher is typically used in clang-tidy checks to attempt to exclude such transformations, making the matcher expression more complicated.
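
For illustration (this exact spelling is a sketch, not taken from a particular check), the default-mode version of the matcher typically ends up looking like this:

cxxConstructorDecl(
    isCopyConstructor(),
    unless(isImplicit())
    ).bind("prepend_explicit")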

Implicit template instantiations

Another surprise in the behavior of AST Matchers is that template instantiations are matched by default. That means that if we wish to change class members of type int to type safe_int, for example, we might write a matcher something like

fieldDecl(
    hasType(asString("int"))
    ).bind("use_safe_int")

This works fine for non-template code.

If we have a template like

template <typename T>
struct TemplStruct {
    TemplStruct() {}
    ~TemplStruct() {}

private:
    T m_t;
};

then clang internally creates an instantiation of the template with a substituted type for each template instantiation in our translation unit.

The new IgnoreUnlessSpelledInSource mode ignores those internal instantiations and matches only on the template declaration (i.e. with the T un-substituted).

However, in the default mode, our template will be transformed to use safe_int too:

template <typename T>
struct TemplStruct {
    TemplStruct() {}
    ~TemplStruct() {}

private:
    safe_int m_t;
};

This is clearly an incorrect transformation. Because of this, isTemplateInstantiation() and similar matchers are often used in clang-tidy to exclude AST matches which produce such transformations.
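
For illustration, a default-mode matcher often guards against this explicitly, for example by excluding fields whose enclosing record is a template instantiation (this particular combination is just a sketch, not lifted from a specific check):

fieldDecl(
    hasType(asString("int")),
    unless(hasAncestor(cxxRecordDecl(isTemplateInstantiation())))
    ).bind("use_safe_int")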

Matching metaphorical code

C++ has multiple features which are designed to be simple expressions which the compiler expands to something less-convenient to write. Range-based for loops are a good example as they are a metaphor for an explicit loop with calls to begin and end among other things. Lambdas are another good example as they are a metaphor for a callable object. C++20 adds several more, including rewriting use of operator!=(...) to use !operator==(...) and operator<(...) to use the spaceship operator.

[I admit that in writing this blog post I searched for a metaphor for “a device which aids understanding by replacing the thing it describes with something more familiar” before realizing the recursion. I haven’t heard these features described as metaphorical before though…]

All of these metaphorical replacements can be explored in the Clang AST or on CPP Insights.

Matching these internal representations is confusing and can cause incorrect transformations. None of these internal representations are matchable in the new IgnoreUnlessSpelledInSource mode.

In the default matching mode, the CallExprs for begin and end are matched, as are the CXXRecordDecl implicit in the lambda and hidden comparisons within rewritten binary operators such as spaceship (causing bugs in clang-tidy checks).

Easy Mode

This new mode of AST Matching is designed to be easier for users, especially newcomers to the Clang AST, to use and discover while offering protection from typical transformation traps. It will likely be used in my Qt-based Gui Quaplah, but it must be enabled explicitly in existing clang tools.

As usual, feedback is very welcome!

on February 14, 2021 02:08 PM

Full Circle Weekly News #200

Full Circle Magazine


Apache Project Hoping for New Rust Cryptography Module, with Help from Google
https://www.zdnet.com/article/google-funds-project-to-secure-apache-web-server-project-with-new-rust-component/
Ubuntu Core 20 Out
https://ubuntu.com/blog/ubuntu-core-20-secures-linux-for-iot

AlmaLinux 8.3 Beta Out
https://blog.almalinux.org/introducing-almalinux-beta-a-community-driven-replacement-for-centos/

EndeavourOS’s first 2021 Update Release Out
https://endeavouros.com/news/our-first-release-of-2021-has-arrived/

Solus 4.2 Out
https://getsol.us/2021/02/03/solus-4-2-released/

Ubuntu’s Yaru Theme with GTK4 Support Out
https://discourse.ubuntu.com/t/call-for-testing-yaru-gtk4-theme/20668

Ubuntu 21.04 Artwork Out
https://twitter.com/sylvia_ritter/status/1356648612231536641?s=20

KDE’s February App Update Out
https://kde.org/announcements/releases/2021-02-apps-update/

LibreOffice 7.1 Out
https://blog.documentfoundation.org/blog/2021/02/03/libreoffice-7-1-community/

Darktable 3.4.1 Out
https://www.darktable.org/2021/02/darktable-341-released/

Feral Interactive Have Announced Linux Support for Warhammer III
https://twitter.com/feralgames/status/1356982376673509380?s=20

on February 14, 2021 12:05 PM

I love free software

Torsten Franz

Today is „I love Free Software Day“. And I also love free software for many reasons. I would like to take this opportunity to thank everyone who contributes to free software. This happens in many different ways. Some write this software, some make sure that it is understandable, some bring this into the different languages, some support the use of the software and so on. They all contribute to making free software better.

Thank you for that and let’s continue to build on this great success.

on February 14, 2021 11:36 AM

JetBrains Qodana is a new product, still in early access, that brings the “Smarts” of JetBrains IDEs into your CI pipeline, and it can be easily integrated in GitLab.

cover

Qodana on Gitlab Pages

In this blog post we will see how to integrate this new tool by JetBrains in our GitLab pipeline, including having a dedicated website to see the cool report it produces.

Aren’t you convinced yet? Read 4 Benefits of CI/CD! If you don’t have a proper GitLab pipeline to lint your code, run your test, and manage all that other annoying small tasks, you should definitely create one! I’ve written an introductory guide to GitLab CI, and many more are available on the documentation website.

JetBrains Qodana

Qodana comprises two main parts: a nicely packaged GUI-less IntelliJ IDEA engine tailored for use in a CI pipeline as a typical “linter” tool, and an interactive web-based reporting UI. It makes it easy to set up workflows to get an overview of the project quality, set quality targets, and track progress on them. You can quickly adjust the list of checks applied for the project and include or remove directories from the analysis.

Qodana launching post.

I’m a huge fan of JetBrains products, and I happily pay for their licenses every year: my productivity using their IDEs is through the roof. This very blog post has been written using WebStorm :-)

Therefore, when they announced a new product to bring the smartness of their IDE on CI pipelines, I was super enthusiastic! In their documentation there is also a small paragraph about GitLab, and we’ll improve the example to have a nice way to browse the output.

At the moment, Qodana has support for Java, Kotlin, and PHP. With time, Qodana will support all languages and technologies covered by JetBrains IDEs.

Remember: Qodana is in an early access version. By using it, you expressly acknowledge that the product may not be reliable, may not work as intended, and may contain errors. Any use of the EAP product is at your own risk.

I suggest you also take a look at the official GitHub page of the project, to see the licenses and open issues.

Integrating Qodana in GitLab

The basic example provided by JetBrains is the following:

qodana:
  image: 
    name: jetbrains/qodana
    entrypoint: [sh, -c]
  script:
    - /opt/idea/bin/entrypoint --results-dir=$CI_PROJECT_DIR/qodana --save-report --report-dir=$CI_PROJECT_DIR/qodana/report
  artifacts:
    paths:
      - qodana

While this works, it doesn’t provide a way to explore the report without first downloading it to your PC. If we have GitLab Pages enabled, we can publish the report and explore it online, thanks to the artifacts:expose_as keyword.

We also need GitLab to upload the right directory to Pages, so we change the artifact path as well:

qodana:
  image: 
    name: jetbrains/qodana
    entrypoint: [sh, -c]
  script:
    - /opt/idea/bin/entrypoint --results-dir=$CI_PROJECT_DIR/qodana --save-report --report-dir=$CI_PROJECT_DIR/qodana/report
  artifacts:
    paths:
      - qodana/report/
    expose_as: 'Qodana report'

Now, in our merge request page, as soon as the Qodana job finishes, we have a new button to explore the report! You can see such a merge request here, with the button “View exposed artifact”, while here you can find an interactive online report, published on GitLab Pages!
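
To publish the report on GitLab Pages, a separate pages job has to copy it into the public directory. Here is a rough sketch of how such a job could look; the stage name and the branch rule are assumptions you should adapt to your own pipeline:

pages:
  stage: deploy          # assumes a stage later than the qodana job
  script:
    # artifacts from jobs in earlier stages (the qodana job) are downloaded automatically
    - mkdir -p public
    - cp -r qodana/report/. public/
  artifacts:
    paths:
      - public
  rules:
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'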

Configuring Qodana

Full reference for the Qodana config file can be found on GitHub. Qodana can be easily customized: we only need to create a file called qodana.yaml, and enter our preferences!

There are two options I find extremely useful. One is exclude, which we can use to skip some checks, or to skip some directories, so we can focus on what is important and save some time. The other is failThreshold: when this number of problems is reached, the container exits with error code 255. In this way, we can fail the pipeline and enforce a high quality of the code!
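
As a sketch of how these two options might be combined in a qodana.yaml (the directory names are made up, and you should double-check the exact schema against the reference on GitHub):

version: "1.0"
exclude:
  - name: All          # skip every inspection...
    paths:
      - build          # ...for these directories (illustrative paths)
      - test-data
failThreshold: 10      # fail the job once 10 or more problems are found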

Qodana already shows a lot of potential, even though it is only at its first version! I am really looking forward to support for other languages, and to the improvements that JetBrains will make in the upcoming releases!

Questions, comments, feedback, critics, suggestions on how to improve my English? Reach me on Twitter (@rpadovani93) or drop me an email at riccardo@rpadovani.com.

Ciao,
R.

on February 14, 2021 09:00 AM

February 12, 2021

Writing a Résumé in LaTeX

Michael Lustfield

Way back in my college days, I was trying to write the first version of my résumé. As a pedantic IT guy, I wanted to get everything perfect so that I could stand out despite having very little practical experience.

Trying to write my résumé in OpenOffice--yes, I'm that old--proved …

on February 12, 2021 12:00 AM

February 09, 2021

You are using LXD containers and you want a container (or more) to use an IP address from the LAN (or, get an IP address just like the host does).

LXD currently supports four ways to do that, and depending on your needs, you select the appropriate way.

  1. Using macvlan. See https://blog.simos.info/how-to-make-your-lxd-container-get-ip-addresses-from-your-lan/
  2. Using bridged. See https://blog.simos.info/how-to-make-your-lxd-containers-get-ip-addresses-from-your-lan-using-a-bridge/
  3. Using routed. See https://blog.simos.info/how-to-get-lxd-containers-get-ip-from-the-lan-with-routed-network/
  4. Using ipvlan. It is this tutorial, you are reading it now.

Why use the ipvlan networking?

You would use the ipvlan networking if you want to expose containers to the local network (LAN, or the Internet if you are using an Internet server, and have allocated several public IPs).

Any containers with ipvlan will appear on the network to have the MAC address of the host. Therefore, this will work even when you use it on your laptop that is connected to the network over WiFi (or any router with port security). That is, you can use ipvlan when macvlan and bridged cannot work.

You have to use static network configuration for these containers. Which means,

  1. You need to make sure that the IP address on the network that you give to the ipvlan container, will not be assigned by the router in the future. Otherwise, there will be an IP conflict. You can do so if you go into the configuration of the router, and specify that the IP address is in use.
  2. The container (i.e. the services running in the container) should not be performing changes to the network interface.

If you use some special Linux distribution, you can verify whether your LXD installation supports ipvlan by running the following command:

$ lxc info
...
api_extensions:
...
- container_nic_ipvlan
- container_nic_ipvlan_gateway
- container_nic_ipvlan_host_table
- container_nic_ipvlan_mode
...
  lxc_features:
    network_ipvlan: "true"
...
$

Special requirements for container images

The default network configuration in Ubuntu 18.04 or newer is to use netplan and have eth0 use DHCP for the configuration. The way netplan does this interferes with ipvlan, so we are using a workaround. Depending on the Linux distribution in the container, you may need special configuration. The Ubuntu workaround is based on cloud-init, hence the whole cloud-init section in the profile below. Below is the list of LXD profiles per Linux distribution in the container image.

  1. Ubuntu container images
  2. CentOS container images
  3. Debian container images

ipvlan LXD profile for Ubuntu container images

Here is the ipvlan profile, which has been tested on Ubuntu. Create a profile with this name. Then, for each container that uses the ipvlan network, we will create a new individual profile based on this initial profile. The reason why we create such individual profiles is that we need to hard-code the IP address in them. The values that change are the IP address (in two locations, replace with your own addresses), the parent interface (on the host), and the nameserver IP addresses (public DNS servers from Google and Cloudflare). You can create an empty profile, then edit it and replace the existing content with the following (lxc profile create ipvlan, lxc profile edit ipvlan).

config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
      eth0:
        addresses:
          - 192.168.1.200/32
        dhcp4: no
        dhcp6: no
        nameservers:
          addresses: [8.8.8.8, 1.1.1.1]
        routes:
         - to: 0.0.0.0/0
           via: 169.254.0.1
           on-link: true
description: "ipvlan LXD profile"
devices:
  eth0:
    ipv4.address: 192.168.1.200
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan
used_by:

We are going to make copies of the ipvlan profile into individual new ones, one for each IP address. Therefore, let’s create the LXD profiles for 192.168.1.200 and 192.168.1.201. When you edit them, remember to change the IP address in both locations (under addresses: in the cloud-init section, and in the device’s ipv4.address).

$ lxc profile copy ipvlan ipvlan_192.168.1.200
$ EDITOR=nano lxc profile edit ipvlan_192.168.1.200
$ lxc profile copy ipvlan ipvlan_192.168.1.201
$ EDITOR=nano lxc profile edit ipvlan_192.168.1.201

Skip to the next main section to test the profile.
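
If you prefer not to edit each copy by hand, part of the change can be scripted with lxc profile device set; the cloud-init address in user.network-config still has to be updated separately. A rough sketch, using the second address as an example (mynetconfig.yaml is a hypothetical file holding your edited cloud-init network config):

lxc profile copy ipvlan ipvlan_192.168.1.201
lxc profile device set ipvlan_192.168.1.201 eth0 ipv4.address 192.168.1.201
# user.network-config still contains 192.168.1.200; update it by editing the
# profile, or by re-setting the whole key from a file you prepared, for example:
# lxc profile set ipvlan_192.168.1.201 user.network-config "$(cat mynetconfig.yaml)"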

ipvlan LXD profile for Debian container images (not fully working!)

The following is an alternative LXD ipvlan profile that can be used on Debian. It might be useful for other Linux distributions as well. If this specific LXD profile works for a distribution other than Debian, please report it below so that I can update the post. It explicitly makes the container not set its network configuration through DHCP. It further uses cloud-init instructions to manually create an /etc/resolv.conf, because without DHCP there wouldn’t be such a file in the container. The suggested DNS server is 8.8.8.8 (Google), and you may change it if you would like. The two items that you need to update for your case are the IP address for the container, and the network interface of the host that this container will attach to (through ipvlan).

config:
  user.network-config: |
    #cloud-config
    version: 2
    ethernets:
        eth0:
          dhcp4: false
          dhcp6: false
          routes:
          - to: 0.0.0.0/0
            via: 169.254.0.1
            on-link: true
  user.user-data: |
    #cloud-config
    bootcmd:
      - echo 'nameserver 8.8.8.8' > /etc/resolvconf/resolv.conf.d/tail
      - systemctl restart resolvconf
description: ipvlan profile for Debian container images
devices:
  eth0:
    ipv4.address: 192.168.1.201
    name: eth0
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan_debian

You can launch such a Debian container with ipvlan using a command line like the following.

lxc launch images:debian/11/cloud mydebian --profile default --profile ipvlan_debian

ipvlan LXD profile for Fedora container images

The following is an alternative LXD ipvlan profile that can be used on Fedora. It might be useful for other Linux distributions as well. If this specific LXD profile works for a distribution other than Fedora, please report it below so that I can update the post. The profile has two sections: the cloud-init section that configures the networking in the container once, using NetworkManager, and the LXD network configuration that directs LXD on how to set up the ipvlan networking on the host. The suggested DNS server is 8.8.8.8 (Google), and you may change it to another free public DNS server if you would like. The two items that you need to update for your case are the IP address for the container, and the network interface of the host that this container will attach to (through ipvlan).

Note that you would launch the container with a command line like the following.

lxc launch images:fedora/33/cloud myfedora --profile default --profile ipvlan_fedora

Here is the profile:
config:
  user.user-data: |
    #cloud-config
    bootcmd:
      - nmcli connection modify "System eth0" ipv4.addresses 192.168.1.202/32
      - nmcli connection modify "System eth0" ipv4.gateway 169.254.0.1
      - nmcli connection modify "System eth0" ipv4.dns 8.8.8.8
      - nmcli connection modify "System eth0" ipv4.method manual
      - nmcli connection down "System eth0"
      - nmcli connection up "System eth0"
description: ipvlan profile for Fedora container images
devices:
  eth0:
    ipv4.address: 192.168.1.202
    name: eth0
    nictype: ipvlan
    parent: enp3s0
    type: nic
name: ipvlan_fedora

Using the ipvlan networking in LXD

We create a container called myipvlan using the default profile and on top of that the ipvlan profile.

$ lxc launch ubuntu:20.04 myipvlan --profile default --profile ipvlan
Creating myipvlan
Starting myipvlan
$ lxc list myipvlan 
+----------+---------+----------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+----------+---------+----------------------+-----------+-----------+
| myipvlan | RUNNING | 192.168.1.200 (eth0) | CONTAINER | 0         |
+----------+---------+----------------------+-----------+-----------+
$ 

According to LXD, the container has configured the IP address that we supplied in the cloud-init configuration.

Get a shell into the container and ping

  1. other IP addresses on your LAN
  2. an Internet host such as www.google.com.

Here is a test try using a Fedora container image.

$ lxc launch images:fedora/33/cloud myfedora --profile default --profile ipvlan_fedora
Creating myfedora
Starting myfedora                         
$ lxc list myfedora
+----------+---------+----------------------+-----------+-----------+
|   NAME   |  STATE  |         IPV4         |   TYPE    | SNAPSHOTS |
+----------+---------+----------------------+-----------+-----------+
| myfedora | RUNNING | 192.168.1.202 (eth0) | CONTAINER | 0         |
+----------+---------+----------------------+-----------+-----------+
$ lxc shell myfedora
[root@myfedora ~]# ping -c 3 8.8.8.8
PING 8.8.8.8 (8.8.8.8) 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=111 time=12.1 ms
64 bytes from 8.8.8.8: icmp_seq=2 ttl=111 time=12.2 ms
64 bytes from 8.8.8.8: icmp_seq=3 ttl=111 time=12.1 ms

--- 8.8.8.8 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2002ms
rtt min/avg/max/mdev = 12.148/110.215/201.306/117.007 ms
[root@myfedora ~]# logout
$ 

Conclusion

We have seen how to set up and use ipvlan in LXD when launching Ubuntu and Fedora container images (Debian is still pending; if you figure it out, please write a comment).

We showed how to use LXD profiles to easily set up the creation of the containers, with the IP address of each container added to its profile. This means that for each container we need to create an individual LXD profile. Note that an LXD profile is attached to a container, so if you edit it for another container, the change will apply to any existing container that uses it as well (i.e. a mess). You could also create the containers without an additional LXD profile, by performing lxc config commands on the host and networking commands inside the container. We do not show that here.

You get a similar result when using ipvlan and routed. I do not go into detail about the practical differences between the two.

on February 09, 2021 07:21 PM

Previously: v5.7

Linux v5.8 was released in August, 2020. Here’s my summary of various security things that caught my attention:

arm64 Branch Target Identification
Dave Martin added support for ARMv8.5’s Branch Target Instructions (BTI), which are enabled in userspace at execve() time, and all the time in the kernel (which required manually marking up a lot of non-C code, like assembly and JIT code).

With this in place, Jump-Oriented Programming (JOP, where code gadgets are chained together with jumps and calls) is no longer available to the attacker. An attacker’s code must make direct function calls. This basically reduces the “usable” code available to an attacker from every word in the kernel text to only function entries (or jump targets). This is a “low granularity” forward-edge Control Flow Integrity (CFI) feature, which is important (since it greatly reduces the potential targets that can be used in an attack) and cheap (implemented in hardware). It’s a good first step to strong CFI, but (as we’ve seen with things like CFG) it isn’t usually strong enough to stop a motivated attacker. “High granularity” CFI (which uses a more specific branch-target characteristic, like function prototypes, to track expected call sites) is not yet a hardware supported feature, but the software version will be coming in the future by way of Clang’s CFI implementation.

arm64 Shadow Call Stack
Sami Tolvanen landed the kernel implementation of Clang’s Shadow Call Stack (SCS), which protects the kernel against Return-Oriented Programming (ROP) attacks (where code gadgets are chained together with returns). This backward-edge CFI protection is implemented by keeping a second dedicated stack pointer register (x18) and keeping a copy of the return addresses stored in a separate “shadow stack”. In this way, manipulating the regular stack’s return addresses will have no effect. (And since a copy of the return address continues to live in the regular stack, no changes are needed for back trace dumps, etc.)

It’s worth noting that unlike BTI (which is hardware based), this is a software defense that relies on the location of the Shadow Stack (i.e. the value of x18) staying secret, since the memory could be written to directly. Intel’s hardware ROP defense (CET) uses a hardware shadow stack that isn’t directly writable. ARM’s hardware defense against ROP is PAC (which is actually designed as an arbitrary CFI defense — it can be used for forward-edge too), but that depends on having ARMv8.3 hardware. The expectation is that SCS will be used until PAC is available.

Kernel Concurrency Sanitizer infrastructure added
Marco Elver landed support for the Kernel Concurrency Sanitizer, which is a new debugging infrastructure to find data races in the kernel, via CONFIG_KCSAN. This immediately found real bugs, with some fixes having already landed too. For more details, see the KCSAN documentation.

new capabilities
Alexey Budankov added CAP_PERFMON, which is designed to allow access to perf(). The idea is that this capability gives a process access to only read aspects of the running kernel and system. No longer will access be needed through the much more powerful abilities of CAP_SYS_ADMIN, which has many ways to change kernel internals. This allows for a split between controls over the confidentiality (read access via CAP_PERFMON) of the kernel vs control over integrity (write access via CAP_SYS_ADMIN).
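
As a purely hypothetical illustration of the idea (the binary path is made up, and the cap_perfmon name requires a recent enough libcap), a profiling tool could be granted just this capability as a file capability:

# grant only CAP_PERFMON to a (hypothetical) profiling binary
sudo setcap cap_perfmon+ep /usr/local/bin/mytracer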

Alexei Starovoitov added CAP_BPF, which is designed to separate BPF access from the all-powerful CAP_SYS_ADMIN. It is designed to be used in combination with CAP_PERFMON for tracing-like activities and CAP_NET_ADMIN for networking-related activities. For things that could change kernel integrity (i.e. write access), CAP_SYS_ADMIN is still required.

network random number generator improvements
Willy Tarreau made the network code’s random number generator less predictable. This will further frustrate any attacker’s attempts to recover the state of the RNG externally, which might lead to the ability to hijack network sessions (by correctly guessing packet states).

fix various kernel address exposures to non-CAP_SYSLOG
I fixed several situations where kernel addresses were still being exposed to unprivileged (i.e. non-CAP_SYSLOG) users, though usually only through odd corner cases. After refactoring how capabilities were being checked for files in /sys and /proc, the kernel modules sections, kprobes, and BPF exposures got fixed. (Though in doing so, I briefly made things much worse before getting it properly fixed. Yikes!)

RISCV W^X detection
Following up on his recent work to enable strict kernel memory protections on RISCV, Zong Li has now added support for CONFIG_DEBUG_WX as seen for other architectures. Any writable and executable memory regions in the kernel (which are lovely targets for attackers) will be loudly noted at boot so they can get corrected.

execve() refactoring continues
Eric W. Biederman continued working on execve() refactoring, including getting rid of the frequently problematic recursion used to locate binary handlers. I used the opportunity to dust off some old binfmt_script regression tests and get them into the kernel selftests.

multiple /proc instances
Alexey Gladkov modernized /proc internals and provided a way to have multiple /proc instances mounted in the same PID namespace. This allows for having multiple views of /proc, with different features enabled. (Including the newly added hidepid=4 and subset=pid mount options.)
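
As a quick, hypothetical illustration of those mount options (not from the original change):

mkdir -p /tmp/limited-proc
# mount an extra /proc view where other users' processes are hidden (hidepid=4)
# and only the per-PID directories are exposed (subset=pid)
sudo mount -t proc -o hidepid=4,subset=pid proc /tmp/limited-proc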

set_fs() removal continues
Christoph Hellwig, with Eric W. Biederman, Arnd Bergmann, and others, have been diligently working to entirely remove the kernel’s set_fs() interface, which has long been a source of security flaws due to weird confusions about which address space the kernel thought it should be accessing. Beyond things like the lower-level per-architecture signal handling code, this has needed to touch various parts of the ELF loader, and networking code too.

READ_IMPLIES_EXEC is no more for native 64-bit
The READ_IMPLIES_EXEC flag was a work-around for dealing with the addition of non-executable (NX) memory when x86_64 was introduced. It was designed as a way to mark a memory region as “well, since we don’t know if this memory region was expected to be executable, we must assume that if we need to read it, we need to be allowed to execute it too”. It was designed mostly for stack memory (where trampoline code might live), but it would carry over into all mmap() allocations, which would mean sometimes exposing a large attack surface to an attacker looking to find executable memory. While normally this didn’t cause problems on modern systems that correctly marked their ELF sections as NX, there were still some awkward corner-cases. I fixed this by splitting READ_IMPLIES_EXEC from the ELF PT_GNU_STACK marking on x86 and arm/arm64, and declaring that a native 64-bit process would never gain READ_IMPLIES_EXEC on x86_64 and arm64, which matches the behavior of other native 64-bit architectures that correctly didn’t ever implement READ_IMPLIES_EXEC in the first place.

array index bounds checking continues
As part of the ongoing work to use modern flexible arrays in the kernel, Gustavo A. R. Silva added the flex_array_size() helper (as a cousin to struct_size()). The zero/one-member into flex array conversions continue with over a hundred commits as we slowly get closer to being able to build with -Warray-bounds.

scnprintf() replacement continues
Chen Zhou joined Takashi Iwai in continuing to replace potentially unsafe uses of sprintf() with scnprintf(). Fixing all of these will make sure the kernel avoids nasty buffer concatenation surprises.

That’s it for now! Let me know if there is anything else you think I should mention here. Next up: Linux v5.9.

© 2021, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on February 09, 2021 12:47 AM

The uninvited eldrich terror

You use local::lib, and the pain unfolds: A shell is started, and you find a dreaded:

Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xdb80080)

Pennywise, because I could not find the right gif, from the Uninvited, of Sabrina's netflix

Which means that the module (Cwd in this case) is not compatible with the current version of perl installed on your system (because it’s an XS module): likely it was compiled for a previous version, leading to those binaries mismatching.

Don’t panic!

I wrote about this before; however, I left out how to get back to the point where you already have a usable Perl again.

The light

Instead of downloading local::lib from git as I used to do… this time I decided to do it in a much simpler way: use perl-local-lib from my distribution, and let it do the magic. I mean, that’s why I run openSUSE Tumbleweed.

 $ rm -rf perl5-old || echo "First time eh?"
 $ mv perl5 perl5-old
 $ perl-migrate-modules --from ~/perl5-old/lib/perl5 /usr/bin/perl
 $ perl -MCPAN -Mlocal::lib -e 'CPAN::install(App::cpanminus)'
 $ cpanm App::MigrateModules
 $ perl-migrate-modules --from ~/perl5-old/lib/perl5 /usr/bin/perl

Et voilà, ma chérie!

It's alive!

on February 09, 2021 12:00 AM

February 07, 2021

Full Circle Weekly News #199

Full Circle Magazine


Networking and Touchpads Work in Linux on M1 Macs
https://twitter.com/cmwdotme/status/1355660127433535490?s=20
Greg Kroah-Hartman Needs Commercial Buy In For Longer Kernel Support
https://lore.kernel.org/lkml/ef30af4d-2081-305d-cd63-cb74da819a6d@broadcom.com/
Linux Mint Ported 20.1 Features to LMDE 4
https://blog.linuxmint.com/?p=4024
on February 07, 2021 12:13 PM

February 05, 2021

Lay Down The Pitchfork

Stephen Michael Kellat

We live in an age of religious fervor. I’m not talking about traditional religions by any stretch of imagination, though. The QAnon conspiracy theories and how people handled them bordered on being considered a religion, depending upon which academic you spoke to.

Of course, we then reach the unhappy lands in our computer-related realm. Alan Pope touches upon this in his blog post relative to a controversy concerning Raspberry Pi OS. I concur with Alan wholeheartedly.

There comes a point at which we can fight wars over who is the most pure of heart and deed, yet find that nobody wins. Sometimes compromises need to be made in life. Sometimes we don’t all have the same goals in life. Alan points out that Raspberry Pi OS is not the same as a more general purpose Ubuntu/Xubuntu/Kubuntu/Lubuntu. There’s nothing wrong with that.

If you want more freedom on a Raspberry Pi you can always run NetBSD or some other distro. The infamous DistroWatch has many choices. Choice is wonderful.

Unless you are on a farm you may want to lay down the pitchfork. Baling hay is not fun! As fun as it may seem to wield a pitchfork for other reasons it just gets really messy.

I was listening to a Shelley Shepard Gray author talk on Thursday night. Her Amish romance novels are great places to see pitchforks in action as well as apple butter being made. Leave the pitchforks there rather than trying to reenact your conception of the Frankenstein films.

on February 05, 2021 06:07 PM

The second point release update to Kubuntu 20.04 LTS (Focal Fossa) is out now. This contains all the bug-fixes added to 20.04 since its first release in April 2020. Users of 20.04 can run the normal update procedure to get these bug-fixes.

See the Ubuntu 20.04.2 Release Notes and the Kubuntu Release Notes.

Download all available released images.

on February 05, 2021 02:13 AM

February 04, 2021

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 20.04.2 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu […]
on February 04, 2021 10:12 PM

February 02, 2021

Recently there was an announcement from Ubuntu that the desktop team are working on a replacement for the Ubiquity installer. The really interesting part of the post by Martin Wimpress, head of the Ubuntu Desktop team at Canonical, is that the new installer will be built using Flutter. Flutter is a cross-platform User Interface framework that can target Linux, macOS, Windows, Android, and iOS all from the same source code. I have been aware of Flutter for some time now but have been trepidatious in jumping in to sample the water, because I am completely unfamiliar with the Dart programming language and was worried about making the time investment.
on February 02, 2021 07:37 PM

Yikes, my head is still spinning from what a crazy month January was. Only managed to squeeze in a few uploads. I’ve also been working on an annual DPL summary that I got to about 80% in December and was barely able to touch it during January, might end up simplifying it just so that I can get it released. In the meantime there’s a lot of interesting stuff happening, stay tuned :)

2021-01-08: Sponsor package python-strictyaml (1.1.1-1) for Debian unstable (Python team request).

2021-01-12: Sponsor package buildbot (2.10.0-1) for Debian unstable (Python team request).

2021-01-12: Sponsor package peewee (3.14.0+dfsg-2) for Debian unstable (Python team request).

2021-01-12: Sponsor package crashtest (0.3.1-1) for Debian unstable (Python team request).

2021-01-12: Sponsor package sqlobject (3.9.0+dfsg-1) for Debian unstable (Python team request).

2021-01-12: Upload package kpmcore (29.12.1-1) to Debian unstable.

2021-01-12: Upload package xabacus (8.3.2-1) to Debian unstable.

2021-01-13: Upload package partitionmanager (20.12.1-1) to Debian unstable.

2021-01-13: Review package clikit (Waiting on dependencies) (Python team request).

2021-01-26: Upload package gdisk (1.0.6-1) to Debian unstable.

on February 02, 2021 07:34 PM

February 01, 2021

PipeWire plays it

Bryan Quigley

I'm running Debian 11 (testing) with XFCE and getting PipeWire up and running was relatively easy - although explicitly unsupported for Debian 11.

sudo apt install pipewire pipewire-audio-client-libraries
sudo apt remove pulseaudio pulseaudio-utils
sudo apt autoremove

At some future point there will be something like pipewire-pulse which will do the rest, but for now you must:

sudo touch /etc/pipewire/media-session.d/with-pulseaudio
sudo cp /usr/share/doc/pipewire/examples/systemd/user/pipewire-pulse.* /etc/systemd/user/
systemctl --user enable pipewire-pulse pipewire

I suggest a reboot after, but a logout may be enough. Then try playing some music. If it worked, it should play just like it has before.

More processes

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 1456 bryan     20   0 1023428 102436  50396 S   1.7   2.6   0:02.06 quodlibet
  690 bryan      9 -11  898044  27364  20932 S   1.0   0.7   0:00.31 pulseaudio

PipeWire runs as 3 separate processes, compared to PulseAudio above. Of note, PipeWire apparently does want to adjust its nice level, but in its current state it doesn't depend on that, and I haven't seen any need for it.

PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND                       
936 bryan     20   0  826812 100484  50472 S   1.3   2.5   0:02.71 quodlibet                     
692 bryan     20   0   94656  12480   5928 S   0.7   0.3   0:00.38 pipewire-pulse                
693 bryan     20   0  107408  15228   7192 S   0.3   0.4   0:00.39 pipewire
701 bryan     20   0  225340  22756  17280 S   0.0   0.6   0:00.06 pipewire-media-

What's works? Everything so far..

  • Playing music locally
  • Playing videos locally
  • Playing music/videos on the web
  • Video calls via Jitsi
  • Changing volume using xfce's pulseaudio applet

Except I can't change individual application volumes, because pavucontrol was removed. I believe pavucontrol could actually still control them, but I haven't tried it.
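
If you want to test that yourself, reinstalling it should be a single command (untested here, as noted):

sudo apt install pavucontrol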

So worth switching?

If you want to be an early adopter, jump on in. If not, Fedora and Ubuntu will both be including it this year (although I'm not sure if Ubuntu will replace PulseAudio with it).

This is my favorite line from the Fedora proposal: "...with both the PulseAudio and JACK maintainers and community. PipeWire is considered to be the successor of both projects."

It's generally a lot of work to get three projects to agree on standards between them, much less to have general agreement on a future path. I'm very impressed with all three groups to figure out a path to improve Linux audio together.

on February 01, 2021 07:14 PM

Debian on Thinkpad T14

Raphaël Hertzog

I switched my main computer and this time I opted for Lenovo’s Thinkpad T14 that comes with an AMD Processor. It’s the first time that I have 8 cores in my laptop with this AMD Ryzen 7 PRO 4750U CPU and it gives a real performance boost together with the 32GB of RAM.

Despite the fact that it’s a laptop, I use it mainly at my desk, where it’s now connected to the “USB-C Dock Gen2” so that I can connect it with a single USB-C cable to power/ethernet/keyboard/mouse and two external displays. I use the DisplayPort output; I had some hiccups with the HDMI output, where the screen would become blank for a few seconds…

The Linux support of this hardware is rather good so far, but I went through a few hiccups when I started using it. In particular, I’m not sure what made the external displays work: they were not working after the initial install, but they ended up working after I installed all the packages that I had on my former computer. The suspend/resume works fine, though… even when you unplug the laptop from the dock with the lid closed. It might be seen as a given, but suspend/resume was broken on my old X260 (at least on recent kernels; I was able to keep using Linux 4.19, where it worked).

I tried to document relevant information in the wiki, have a look at https://wiki.debian.org/InstallingDebianOn/Thinkpad/T14 and I have uploaded a Linux hardware database probe if you want to look the gory details including the firmware version that I upgraded to before starting any setup.

One comment | Liked this article? Click here. | My blog is Flattr-enabled.

on February 01, 2021 10:47 AM

January 30, 2021

forty-five

Stuart Langridge

It’s my birthday!

This year, in the midst of a coronavirus lockdown, it’s been something of a quiet one. I got lots of nice best wishes from a bunch of people, which is terribly pleasing, and I had a nice conversation with the family over zoom. Plus, a really good Chinese takeaway delivered as a surprise from my mum and dad, and I suspect that if there were a video of them signing up for a Deliveroo account to do so it would probably be in the running for the Best Comedy BAFTA award.

Also I spent some time afternoon doing the present from my daughter, which is the Enigmagram, an envelope of puzzles which unlock a secret message (which is how I discovered it was from my daughter). I like this sort of thing a lot; I’ve bought a couple of the Mysterious Package Company‘s experiences as presents and they’re great too. Must be a fun job to make these things; it’s like an ARG or something, which I’d also love to run at some point if I had loads of time.

I’ve just looked back at last year’s birthday post, and I should note that Gaby has excelled herself again with birthday card envelope drawing this year, but nothing will ever, ever exceed the amazing genius that is the bookshelf portal that her and Andy got me for Christmas. It is amazing. Go and watch the video immediately.

Time for bed. I have an electric blanket now, which I was mocked for, but hey; I’m allowed. It’s really cosy and warm. Shut up.

on January 30, 2021 10:14 PM

January 29, 2021

The Lubuntu support team has noticed some issues with the HWE kernel (5.8) on 20.04 and certain hardware. We have a post on our Discourse forum with information that we currently know. We will add more information to the post as we continue to get it.
on January 29, 2021 11:35 AM

January 28, 2021

Today I was vaccinated for Covid.

It occurred to me that people might have a question or two about the process, what it’s like, and what happens, and I think that’s reasonable.

A roadsign in Birmingham reading 'Stay Home, Essential Travel Only, Covid'

Hang on, why did you get vaccinated? You’re not 70, you’re, what, thirty-six or something?
[bad Southern damsel accent] “Why, Mr Vorce in my Haird, I do declare, I’ve come over all a-flutter!” [bats eyelashes].
Nah, it’s ‘cos I’m considered clinically extremely vulnerable.
What’s vulnerable?
According to Google if you’re strong and vulnerable then it’s recommended to open with a no-Trump bid. That sounds like a good thing to me.
What’s it like?
It’s like the flu jab.
Like what?
Fair enough. If you’re reading this to learn about vaccination around the time that I’m posting it, January 2021, then maybe you’re someone like me who has had a bunch of injections and vaccinations, such as the flu jab every year. But if you’re reading it later, you may not be, and the idea of being vaccinated against something for the first time since school might be vaguely worrying to you because it’s an unknown sort of experience. This is fair. So, here’s the short form of what happens: you walk into a room and roll up your sleeve, they give you an injection into your arm that you hardly even feel, you sit in the foyer for 15 minutes, and then you go home. It’s like buying a kebab, except with fewer sheep eyebrows.
That’s a pretty short form. Some more detail than that would be nice.
True. OK, well, the first tip I can give you is: don’t put your big heavy coat and a scarf on in the mistaken belief that it must be cold because it’s January and then walk for 45 minutes to get to the hospital, because you’ll be boiling hot when you get there and aggravated.
Just the coat was the problem?
Well, it also turns out that if you stay indoors “shielding” for a year and then go on a long walk, you may discover that your formerly-Olympic-level of fitness has decayed somewhat. Might have to do a sit-up or two.
Or two hundred and two maybe, fatso.
Shut up, Imaginary Voice in my Head.
What next?
I was told to go to the hospital for the vaccination. Other people may be told to go to their GP’s office instead; it seems to vary quite a lot depending on where you live and what’s most accessible to you, and it’s possible that I am very lucky that I was sent to City Hospital, somewhere within walking distance. I’ve heard of others being sent to hospitals twenty miles away, which would have been a disaster for me because I don’t have a car. So, either Sandwell and West Birmingham NHS Trust are particularly good, or others are particularly bad, or I happened to roll double-sixes this time, not sure which.
How were you told?
I got a phone call, yesterday, from my GP. They asked when I was available, and suggested 12.55pm today, less than twenty-four hours later; I said I was available; that was it.
And at the hospital?
Finding the specific building was annoying. SWBH: put up some signs or something, will you? I mean, for goodness sake, I’ve been to the hospital about twenty times and I still had no idea where it was.
No more griping!
I haven’t got anything else to gripe about. It was a marvellously smooth process, once I found the building. This is what happened:
I walked up to the door at 12.45 for my appointment at 12.55. A masked woman asked for my name and appointment time; I gave them; she checked on a list and directed me inside, where I was met by a masked man. He gave me some hand sanitiser (I like hospital hand sanitiser. Seems better than the stuff I have) and directed me to a sort of booth thing, behind which was sat another man.
The booth seemed like a quite temporary thing; a rectangular box, like a ticket booth but probably made of thick cardboard, and with a transparent plastic screen dividing me from him; one of two or three of them in a line, I think. He asked me to confirm my details — name, appointment time, address — and then asked four or five very basic questions such as “do you have the symptoms of coronavirus?” Then he handed me two leaflets and directed me to two women, one of whom led me directly to an examination room in which were a man and a woman.
The man confirmed my details again, and asked if I’d been injected with anything else in the previous month; the woman asked me to roll up my sleeve (I ended up half taking my shirt off because it’s long-sleeved and rolling it all the way up to the shoulder is hard), and then gave me the injection in the upper part of my arm. Took about two seconds, and I hardly felt anything.
Wait, that’s it?
Yup. That whole process, from walking up to the door to being done, took maybe ten minutes maximum.
And then you left?
Not quite: they ask you to sit in the waiting room for fifteen minutes before leaving, just in case you have some sort of a reaction to the injection. People who have had the flu jab will recognise that they do the same thing there, too. This gave me a chance to read the two leaflets, both of which were fairly boring but important descriptions of what the vaccine is, what vaccines are in general, and any risks.
They also stuck a big label on my shirt showing when my fifteen minutes was up, which I think is a really good idea — they don’t do this for the flu jab, in my experience, but it’s a good idea for a vaccination where you have to put basically everybody through the process. I also got a little card which I’m meant to show at the second vaccination, which is now safely in my wallet and will probably still be there in twenty years unless they take it off me, along with the receipts for everything I’ve bought since about 2007.
So then you left?
Yes. Another of the staff confirmed I’d been there for long enough, asked if I was feeling OK (which I was), and asked if I had any questions. I didn’t, but I did ask if I had to go back to work now or if I could just hang out there for a while longer — I’m sure she’d heard variations on that eighty times already that day, but she laughed anyway, and I like trying to chat to hospital staff as human beings. God alone knows what a dreadful year they’ve had; they deserve courtesy and smiles from me at least.
Were they smiling?
Indeed they were. Even though you can’t tell because of the masks. Everyone I spoke to and everyone there was cheery, upbeat, courteous, and competent, without being dismissive or slick. I can’t speak to other NHS trusts, or other hospitals, or even other days in my hospital, but every time I’ve been there the staff have been helpful and nice and ready to share a joke or a chat or to answer a question, and this time was no exception.
What, you didn’t ask any questions at all? What about whether you’re being microchipped by Bill Gates? And the risks of 5G with the vaccine? And…
No, of course I didn’t. Vaccination is safe, and it’s one of the greatest inventions humanity has ever come up with. The NHS guidance on vaccinations is a nice summary here. If I’d walked in the door and someone in scrubs had said to me, “the best way to make web pages is to require 500KB of JavaScript to load before you put any text on screen”, or “Ubuntu has secret motives to undermine free software”, I would have said to them “no, you’re wrong about that” and argued. But me and the medical profession have an agreement: they don’t tell me how to build software, and I don’t tell them how to save millions of lives. This agreement is working out fine for both of us so far.
What’s that about risks and side-effects, though?
Apparently I may feel tired, or an ache in my arm, for the next couple of days. I’ll keep an eye out. They said that if it aches a bit, taking paracetamol is OK, but I am not a medical professional and you should ask the question yourself when you get vaccinated.
Which vaccine did you have?
Remember the two leaflets? One of them is a generic NHS leaflet about Covid vaccines; the other is specific to the one I had and is from BioNTech, which means it’s the Pfizer one. The little wallet card also says it’s Pfizer. I didn’t ask the staff because I could work it out myself and I’m not going to change anything based on their answer anyway; it’d be pure curiosity on my part. Also, see previous point about how I don’t tell them how to do their jobs. I assume that which vaccine I got was decided at some sort of higher-up area level rather than tailored for me specifically, but hey, maybe I got a specific one chosen for me. Dunno; that’s for medical people to know about and for me to follow.
What now?
The vaccine doesn’t properly kick in for a few weeks, plus you need the second injection to be properly vaccinated. That should be 9-12 weeks from now, I’m told, so I’ll be staying inside just as I was before. Might empty all the litter out of my wallet, too.
But I have more questions!
Well, I’m on twitter, if they’re sensible ones. Conspiracy stuff will just get you blocked and possibly reported with no interaction, so don’t do that. But this has been a delightfully simple process, made very easy by a bunch of people in the NHS who deserve more than having people clap a bit for them and then ignore the problems. So if I can help by answering a question or two to alleviate the load, I’m happy to do that. And thank you to them.
Are you really thirty-six?
Ha! As if. It is my birthday on Saturday, though.
You know that thing you said about bridge bids is nonsense, right?
Ah, a correction: also do not ask me questions about bridge. Please.
on January 28, 2021 02:21 PM

January 27, 2021

Hello interwebz!

I am looking for a non-invasive front-door key-turning lock add-on like https://nuki.io, without too much built-in 'brain'. I'd rather run the brain myself in a way I can trust.

The invasive version that requires changing the lock is easy to find. The non-invasive one, not so much...

Any hints?

on January 27, 2021 09:19 AM

January 26, 2021

If you’ve ever wondered how to stop cron from sending empty emails, a quick look at man mail will give you the answer you’re looking for (provided you know what you’re searching for):

    -E

    If an outgoing message does not contain any text in its first or only message part, do not send it but discard it silently,
    effectively setting the skipemptybody variable at program startup. This is useful for sending messages from scripts started 
    by cron(8).

I found this after visiting a couple of forums and some threads on Stack Exchange; however, this one nailed it.

So all you need to do is fire up crontab -e and make your script run every five minutes, without fear of the noise:

*/5 * * * * /usr/local/bin/only-talk-if-there-are-errors-script |& mail -E -r $(hostname)@opensuse.org -s "[CRON][monitoring] foo bar $(date --iso-8601='minutes')" do-not-spam-me@example.com
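
One small caveat, which is my own note rather than part of the original tip: |& is a bash-ism, and on many systems cron runs jobs with plain /bin/sh, where it won't work. If the entry seems to silently do nothing, a sketch of two ways around that, reusing the same placeholder script and addresses as above, would be:

    # Option 1: tell cron to run jobs with bash, so |& keeps working
    SHELL=/bin/bash
    */5 * * * * /usr/local/bin/only-talk-if-there-are-errors-script |& mail -E -r "$(hostname)@opensuse.org" -s "[CRON][monitoring] foo bar $(date --iso-8601='minutes')" do-not-spam-me@example.com

    # Option 2: stay POSIX-sh friendly and redirect stderr explicitly
    */5 * * * * /usr/local/bin/only-talk-if-there-are-errors-script 2>&1 | mail -E -r "$(hostname)@opensuse.org" -s "[CRON][monitoring] foo bar $(date --iso-8601='minutes')" do-not-spam-me@example.com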

Et voilà, ma chérie! It's alive!

on January 26, 2021 12:00 AM

January 22, 2021

New times, new solutions

José Antonio Rey

Our world is changing every day. Drastic changes can happen really quickly. Technology is advancing at a much faster pace than it did a hundred years ago, and humans are adapting to those changes. The way we think, the way we operate, and even how we communicate have drastically changed in the last 15 years.

Just as humans change, the Ubuntu community is also changing. People interact in different ways. Platforms that did not exist before are now available, and the community changes as the humans in it change as well.

When we started the Local Communities project several years ago, we did it with the sole purpose of celebrating Ubuntu. The ways in which we celebrated included release parties, conferences, and gatherings on IRC. However, we have lately seen a decline in the momentum we had with regard to participation in this project. We have not done a review of the project since its inception, and inevitably, the Community Council believes that it is time to do a deep dive into how we can regain that momentum and continue getting together to celebrate Ubuntu.

As such, we are putting together the Local Communities Research Committee, an independent entity overseen by the Community Council, which will help us understand the behavior of Local Community teams, better adapt to their needs, and create a model that is suitable for the world we are living in today.

We are looking for between 6 and 9 people, and we will require at least one person per continent. We require that you are an Ubuntu Member, that you are not a current Community Council member, and that you have experience working with worldwide communities; we also strongly recommend that you have participated in a Local Community team in the past. If this sounds like you, instructions on how to apply can be found here: https://discourse.ubuntu.com/t/local-communities-research-committee/20186/4

I am personally very excited about this project, as it will allow us to gather perspectives from people all around the world, and to better adapt the project for you, the community.

If you have any questions or want to chat with me, you can always reach out to me at jose at ubuntu dot com, or jose on irc.freenode.net.

Eager to see your nominations/applications!

on January 22, 2021 11:36 PM