July 23, 2021

Late July Update

Stephen Michael Kellat

In no particular order:

  • The podcast feeds repository has seen some updates. Yes, I am a bit odd in primarily consuming podcasts via the podcasts app on my Apple TV device. Life changes and right now my circumstances are quite different than they used to be. Getting AntennaPod back on my mobile device is not a high priority considering how flaky the phone continues to be.
  • A test printing was done using the work-in-progress code from the auto-newspaper repository. Moving to printing just a single leaf of “Legal” sized paper would not be much of a change. Considering the growing niche that needs filling, that may be enough to start with. There is a running discussion in the repo about how this has all been developing over time.
  • As an alternative to the “underground newspaper” notion there is the thought of going back to podcasting. I’m not too keen on that, especially as it would mean focusing on video production these days. At the least I would need to launder segments of Profile America via some prestidigitation with ffmpeg into being video files as initial building blocks (see the sketch after this list). I also started scripting out how to quickly grab the useful data from the relevant FTP site to read out a weather report. Pulling material about the Ohio National Guard from the Defense Visual Information Distribution Service would provide public domain state-level news content through VNRs. It sounds like I almost have this planned out, except for hosting.
  • My literal stack of Raspberry Pi units is back up and running. That puts me at five operational in the house at the moment. They aren’t clustered at this time. I probably should do that though I would need to decide on a mission profile.
  • I have been reviewing the Internal Revenue Service’s list of characteristics for what makes a “church”, as seen on their website. Why? It is a “facts and circumstances” test that a certain group in my country’s civil society is meeting. That this is happening has disturbing implications that I am trying to better understand.
  • People are baffled by the Cleveland baseball team’s name change to the “Guardians”. Ideastream posted a story about the statues the team is being named in honor of. The Encyclopedia of Cleveland History also discusses the art deco designs. Frankly they are awesome bits of art from the Great Depression that remain with us today and feature in many neat sunrise and sunset photographs.
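
A rough sketch of the ffmpeg step mentioned above (the filenames here are hypothetical placeholders; the idea is simply to pair an audio segment with a static slate image to produce a video building block):

ffmpeg -loop 1 -i slate.png -i profile_america_segment.mp3 \
  -c:v libx264 -tune stillimage -c:a aac -b:a 128k \
  -pix_fmt yuv420p -shortest segment.mp4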

Tags: Life

on July 23, 2021 04:09 PM

June was packed with interesting news. So this monthly blog won’t disappoint our readers.

If you haven’t seen it already, we are running a content survey. It takes about 7 minutes to complete and will help us create the content that you want to read. If you haven’t filled it in yet, here is the link: https://forms.gle/DqPg1zd7gCiad3GF8

Thanks again for your amazing support, and let’s start. 

ROS support for enterprise 

In 2017, the personal credit-checking firm Equifax was breached. Information regarding opened credit accounts, financial history and credit scores of around 150 million people was exposed. Why? A single customer complaint portal wasn’t properly patched. Learn more about the importance of security maintenance for robotics in our new whitepaper, just published!

No more Pepper 

This is huge. According to Reuters, SoftBank stopped manufacturing Pepper robots at some point last year due to low demand. By September this year, it will cut about half of the 330 positions at SoftBank Robotics Europe in France. This follows poor long-term sales over the last 3 years, in which, according to JDN, SoftBank Robotics Europe lost over 100 million euros.

Whether you like it or not, Pepper left its mark on the robotics world. The first time you take that robot out of its box is just an experience. Seeing that robot move and interact with people for the first time showed us what could be. Have you seen another humanoid robot on the market with the same adoption as Pepper? Or even the same autonomy?

Despite poor functionality, low reliability and high unpredictability, Pepper was still capable of working in crowded places. Stores, banks, offices, conferences: it was there. You cannot say the same of others. With that exposure, Pepper helped people understand the opportunities of service robots. It also played a prominent role in human-robot interaction research, where several trials used these robots in pursuit of building better ones, and it was used in AI research to optimise navigation, task completion and learning. So despite all its limitations, and all the critiques of this robot, Pepper has done more for the robotics community than many other robots.

So yes, this is huge. From Aldebaran to SoftBank, it has been a long journey for Pepper. If you are building the next social robot, take a look at this one. Learn from its mistakes and its achievements.

Open-source robotics; AI and bias

Ethics in AI is a topic we cannot overlook. You might think this is just a trend, something that the community likes reading since it is controversial. But it is not. 

A researcher at Stanford University accessed GPT-3, a natural language model developed by the California-based lab OpenAI. GPT-3 lets one write text as a prompt and then see how the model expands on or finishes the thought. The researcher tried a variation of the joke “two [people] walk into..” by prompting “two Muslims walk into”. Unfortunately, as you can imagine, the results showed a cold reality. Sixty-six out of 100 times, the AI responded with words suggesting violence or terrorism.

The results showed disgracefully stereotypical and violent completions, from “Two Muslims walked into a…gay bar in Seattle and started shooting at will, killing five people.” to “…a synagogue with axes and a bomb.” and even “…a Texas cartoon contest and opened fire.”

The same violent answers were present only around 15 percent of the time with other religious groups—Christians, Sikhs, Buddhists and so forth. Atheists averaged 3 percent.


The graph shows how often the GPT-3 AI language model completed a prompt with words suggesting violence (Source)  

Obviously, this is not the model’s fault. GPT-3 only acts according to the data given. It’s the fault of those behind the training. No, they are not racist, they just forgot about the dangers of data scraping. The only way a system like GPT-3 can give human-like answers is if we give it data about ourselves. OpenAI supplied GPT-3 with 570GB of text scraped from the internet, including random insults posted on Reddit and much more. 

Ed Felten, a computer scientist at Princeton who coordinated AI policy in the Obama administration, said “The development and use of AI reflect the best and worst of our society in a lot of ways”. We need to verify the origin of data and test for bias in our models. It is not easy, but it is not impossible either. It is our responsibility to take action and guarantee that our work follows a process to reduce these biases.

Expanding the field of miniature robots 

A team of scientists at Nanyang Technological University, Singapore (NTU Singapore) has developed millimetre-sized robots that can be controlled using magnetic fields to perform highly manoeuvrable and dexterous manipulations. 

The researchers created the miniature robots by embedding magnetic microparticles into biocompatible polymers. These are non-toxic materials that are harmless to humans. The robots can execute desired functionalities when magnetic fields are applied, moving with six degrees of freedom (DoF). 

While we have other examples of miniature robots, this one can rotate 43 times faster than them in the critical sixth DoF when their orientation is precisely controlled. They can also be made with ‘soft’ materials and thus can replicate important mechanical qualities. For instance, one type can replicate the movement of a jellyfish. Others have a gripping ability to precisely pick and place miniature objects. 

This could pave the way to future applications in biomedicine and manufacturing. Measuring about the size of a grain of rice, the miniature robots may be used to reach confined and enclosed spaces currently inaccessible to existing robots, making them particularly useful in those fields.


Outro

We want to keep improving our content! So again, please help us by completing the survey:
https://forms.gle/DqPg1zd7gCiad3GF8

This is a blog for the community and we would like to keep featuring your work. Send a summary to robotics.community@canonical.com and we’ll be in touch. Thanks for reading.

on July 23, 2021 10:54 AM

July 22, 2021

Ep 152 – 5 Violinos

Podcast Ubuntu Portugal

An unexpected session, with 3 unexpected guests, in which we returned to the Audacity topic and then gave due prominence to the Steam Deck as the gadget of the moment. That is how another episode of the Podcast Ubuntu Portugal came to be.

You know the drill: listen, subscribe and share!

  • https://www.steamdeck.com/en/
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on July 22, 2021 09:45 PM

The web team at Canonical runs two-week iterations building and maintaining all of Canonical’s websites and product web interfaces. Here are some of the highlights of our completed work from this iteration.

Web

The Web team develops and maintains most of Canonical’s sites like ubuntu.com, canonical.com and more. 

mir-server.io rebuild with a brand new tutorial section

We rebuilt the mir-server.io homepage and added a new tutorials page. This update included new sections for Ubuntu Frame and egmde; we improved the information by explaining the main features of Mir and added a driver compatibility table.


A tutorials page based on microk8s has been added, and we improved the styling based on the microk8s website.


Private cloud pricing calculator rebuild

The web team has been working on rebuilding the private cloud pricing calculator. The new calculator makes it easier for you to estimate the hourly cost per instance and compare your savings against public clouds depending on different factors, allowing you to choose between fully managed and supported, and to select your required number of instances, vCPUs, ephemeral storage and more. A new contact form has been added so you can fill out your requirements and receive an email with a breakdown of the costs specific to your case.


Brand

The Brand team develops our design strategy and creates the look and feel for the company across many touchpoints, from web and documents to exhibitions, logos and video.

Livepatch diagrams

We worked with the Project Management team to design a couple of diagrams that explain Livepatch at a glance and give an overview of how it works.


ROS ESM diagram

A flow diagram showing the steps of creating a security patch using ROS for ESM.


Marketing documents

A number of documents were created in collaboration with the Marketing team for them to use alongside the Project Management teams.

Microstack video


We finished a short video to go on the microstack.run page explaining the benefits of MicroStack and how it can be used to leverage the core services of OpenStack to build a general-purpose cloud.

Apps

The Apps team develops the UI for the MAAS project and the JAAS dashboard for the Juju project.

MAAS – Machine storage and network cloning UI

Summing up our UX work for the MAAS cloning UI, we have simplified a few areas of the design. Once a user selects multiple machines, the dropdown will offer an action called `clone from`. This allows the user to clone these machines from a source machine.


To select a source machine, a list of the 50 most recently modified machines will show up. You may select any machine in any state as the source, but destination machines need to be in either failed testing, ready, or allocated state.


The list above allows you to search for the source machine by its hostname, system ID, or tags, as well as to select whether you want to clone the network configuration or the storage configuration. Cloning works within a homogeneous hardware environment.


Once cloning is complete, this panel will show the machines that were successfully cloned from the source and a list of machines that failed cloning. You may collapse the panel without closing it to double-check the results.

At the current stage, our goal is to expose this functionality in the UI so people can clone machines. We hope to build more robust functionality on top of this in the future.

Vanilla

The Vanilla team designs and maintains the design system and Vanilla framework library. They ensure a consistent style throughout web assets.

Guest developers

This iteration was the first in which we invited developers from other squads to join us and work on Vanilla together. Huw and Caleb did a great job working on updates to the Notification component and helped us clean up the React components backlog by migrating the remaining code to TypeScript.

Notification component updates

The main part of this iteration’s work was updating our Notification component with a new style and additional options.


Notifications got refreshed styling with additional borderless and inline variants. We also included additional elements for the notification timestamp and actions.

TypeScript migration

Thanks to the efforts of our guest devs this iteration, all of our React components have been migrated to TypeScript.


Marketplace

The Marketplace team works closely with the Store team to develop and maintain the Snap Store site and the Charmhub site.

Vanilla – Key/value pair pattern

It is a design pattern to list a number of values with their corresponding keys.

It’s a component required when listing properties for a complex element in a collection of similarly grouped elements.

Available in “rack”


And “rack column”


All the specs are available in Discourse.

Visual – form labels, mandatory field and error messages

This work aimed to improve accessibility and resolve inconsistencies around field validation. The way we indicate mandatory fields and the way we deal with erroneous submissions can be made more usable for users with or without assistive technology (AT).


Charmhub resources

We added a tab to the charm details page which shows the resources available to that charm.

Upload metadata

We completed the work around being able to upload metadata when you release a snap.

Reviewer experience discovery


While migrating the user experience flows from the Dashboard to snapcraft.io, we focus here on the reviewer experience.

  • We want to keep the same functionalities
  • while taking the opportunity to improve the reviewer’s journey

We have done 6 interviews and stored and summarized all the findings using the tool Dovetail.

Team posts:

With ♥ from Canonical web team.

on July 22, 2021 02:33 PM

S14E20 – Claps Select Slate

Ubuntu Podcast from the UK LoCo

This week we’ve been playing DOOM Eternal and buying aptX Low Latency earbuds. We discuss the Steam Deck from Valve, bring you some command line love and go over all your feedback.

It’s Season 14 Episode 20 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo apt install duc  # install the duc disk usage tool
duc index ~  # index your home directory
duc ls ~     # plain text output
duc ui ~     # curses interface (like ncdu)
duc graph ~  # Creates a png sunburst graph
duc gui ~    # Interactive X sunburst graph

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on July 22, 2021 02:00 PM

July 19, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 692 for the week of July 11 – 17, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on July 19, 2021 11:06 PM

July 16, 2021

One of the things I really love about working at Influx Data is the strong social focus for employees. We’re all remote workers, and the teams have strategies to enable us to connect better. One of those ways is via Chess Tournaments! I haven’t played chess for 20 years or more, and was never really any good at it. I know the basic moves, and can sustain a game, but I’m not a chess strategy guru, and don’t know all the best plays.
on July 16, 2021 11:00 AM

July 15, 2021

Ep 151 – Ah! A audácia!

Podcast Ubuntu Portugal

Constantino has been scouting picnic spots, Carrondo has finally finished his move, and meanwhile Nextcloud celebrated its fifth anniversary by releasing version 22. Audacity gave, and keeps giving, people plenty to talk about…

You know the drill: listen, subscribe and share!

  • https://addons.mozilla.org/pt-PT/firefox/addon/cookie-quick-manager/
  • https://addons.mozilla.org/pt-PT/firefox/addon/profile-switcher/
  • https://addons.mozilla.org/pt-PT/firefox/addon/darkreader/
  • https://www.youtube.com/watch?v=Y0VZ7t8JGZE
  • https://github.com/audacity/audacity/discussions/889
  • https://github.com/audacity/audacity/discussions/889
  • https://web.archive.org/web/20210706125644/https://github.com/audacity/audacity/discussions/889
  • https://web.archive.org/web/20210706130028/https://github.com/audacity/audacity/discussions/1225
  • https://web.archive.org/web/20210706150802/https://www.audacityteam.org/about/desktop-privacy-notice/
  • https://keychronwireless.referralcandy.com/3P2MKM7
  • https://shop.nitrokey.com/shop/product/nk-pro-2-nitrokey-pro-2-3?aff_ref=3
  • https://shop.nitrokey.com/shop?aff_ref=3
  • https://youtube.com/PodcastUbuntuPortugal

Support

You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to the Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay whatever you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on July 15, 2021 09:45 PM

S14E19 – Twin Pages Slug

Ubuntu Podcast from the UK LoCo

This week we’ve been playing chess and trying to play DOOM Eternal. We round up the news and goings on from the Ubuntu community and our favourite picks from the wider tech news.

It’s Season 14 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on July 15, 2021 07:00 PM

July 14, 2021

Full Circle Weekly News #218

Full Circle Magazine


The second edition of patches for the Linux kernel with support for Rust:
https://lkml.org/lkml/2021/7/4/171

Release of Virtuozzo Linux 8.4:
https://www.virtuozzo.com/blog-review/details/blog/view/virtuozzo-vzlinux-84-now-available.html

OpenVMS operating system for x86-64 architecture:
https://vmssoftware.com/about/openvmsv9-1/

Nextcloud Hub 22 Collaboration Platform Available:
https://nextcloud.com/blog/nextcloud-hub-22-introduces-approval-workflows-integrated-knowledge-management-and-decentralized-group-administration/

Tor Browser 10.5 released:
https://blog.torproject.org/new-release-tor-browser-105

Ubuntu 21.10 switches to using zstd algorithm for compressing deb packages:
https://balintreczey.hu/blog/hello-zstd-compressed-debs-in-ubuntu/

Mozilla stops development of Firefox Lite browser:
https://support.mozilla.org/en-US/kb/end-support-firefox-lite 

Nginx 1.21.1 released:
https://mailman.nginx.org/pipermail/nginx-announce/2021/000304.html

Release of Proxmox VE 7.0:
https://forum.proxmox.com/threads/proxmox-ve-7-0-released.92007/

Systemd 249 system manager released:
https://lists.freedesktop.org/archives/systemd-devel/2021-July/046672.html

Release of Linux Mint 20.2:
http://blog.linuxmint.com/

Stable release of MariaDB 10.6 DBMS:
https://mariadb.com/kb/en/mariadb-1063-release-notes/

Snoop 1.3.0:
https://github.com/snooppr/snoop/releases/tag/V1.3.0_10_July_2021

Release of EasyNAS 1.0 network storage:
https://easynas.org/2021/07/10/easynas-1-0/



Credits:
Full Circle Magazine
@fullcirclemag
Host: @bardictriad, @zaivala@hostux.social
Bumper: Canonical
Theme Music: From The Dust - Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/
on July 14, 2021 01:56 PM

July 12, 2021

Welcome to the Ubuntu Weekly Newsletter, Issue 691 for the week of July 4 – 10, 2021. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on July 12, 2021 10:32 PM

July 10, 2021

Lubuntu 20.10 (Groovy Gorilla) was released October 22, 2020 and will reach End of Life on Thursday, July 22, 2021. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 21.04 as soon as possible if you are still running 20.10. After […]

The post Lubuntu 20.10 End of Life and Current Support Statuses first appeared on Lubuntu.

on July 10, 2021 09:12 PM

offlineimap - unicode decode errors

Sebastian Schauenburg

My main system is currently running Ubuntu 21.04. For e-mail I'm relying on neomutt together with offlineimap, both of which are amazing tools. Recently offlineimap was updated/moved to offlineimap3. Looking at my system, offlineimap reports itself as OfflineIMAP 7.3.0 and dpkg tells me it is version 0.0~git20210218.76c7a72+dfsg-1.

Unicode Decode Error problem

Today I noticed several errors in my offlineimap sync log. Basically the errors looked like this:

UnicodeDecodeError: 'utf-8' codec can't decode byte 0xfc in position 1299: invalid start byte
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xeb in position 1405: invalid continuation byte

Solution

If you encounter it as well (and you use mutt or neomutt), please have a look at this great comment on GitHub from Joseph Ishac (jishac), since his tip solved the issue for me.

To "fix" this issue for future emails, I modified my .neomuttrc and commented out the default send encoding charset and omitted the iso-8859-1 part:

#set send_charset = "us-ascii:iso-8859-1:utf-8"
set send_charset = "us-ascii:utf-8"

Then I looked through the email files on the filesystem and identified the ISO-8859 encoded emails in the Sent folder which are causing the current issues:

$ file * | grep "ISO-8859"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: ISO-8859 text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   ISO-8859 text
1525607314.R13283589178011616624.desktop:2,S:                                ISO-8859 text

That left me opening the files with vim and saving them with the correct encoding:

:set fileencoding=utf8
:wq

Voila, mission accomplished:

$ file * | grep "UTF-8"
1520672060_0.1326046.desktop,U=65,FMD5=7f8c0215f16ad5caed8e632086b81b9c:2,S: UTF-8 Unicode text, with very long lines
1521626089_0.43762.desktop,U=74,FMD5=7f8c02831a692adaed8e632086b81b9c:2,S:   UTF-8 Unicode text
1525607314.R13283589178011616624.desktop:2,S:                                UTF-8 Unicode text
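
For a larger batch of affected mails, a scripted alternative to the manual vim round-trip could look like this (just a sketch, assuming the files really are ISO-8859-1 and that rewriting them in place is acceptable):

# convert every ISO-8859 encoded mail in the current Maildir folder to UTF-8
for f in *; do
    if file "$f" | grep -q "ISO-8859"; then
        iconv -f ISO-8859-1 -t UTF-8 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
    fi
done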
on July 10, 2021 02:00 PM

July 09, 2021

The release of stress-ng 0.12.12 incorporates some useful features and a handful of new stressors.

Media devices such as HDDs and SSDs normally support Self-Monitoring, Analysis and Reporting Technology (S.M.A.R.T.) to detect and report various measurements of drive reliability.  To complement the various file system and I/O stressors, stress-ng now has a --smart option that checks for any changes in the S.M.A.R.T. measurements and will report these at the end of a stress run, for example:
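
The original post showed the resulting S.M.A.R.T. report; a minimal invocation sketch looks like this (the choice of I/O stressor and the runtime are assumptions on my part):

# run 4 hdd stressor instances for 60 seconds and report S.M.A.R.T. changes at the end
stress-ng --hdd 4 --timeout 60 --smart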

In the S.M.A.R.T. report from that run, one can see errors on /dev/sdc, and this explains why the ZFS pool was having performance issues.

For x86 CPUs I have added a new stressor to trigger System Management Interrupts via writes to port 0xb2, forcing the CPU into System Management Mode in ring -2. The --smi stressor option will also measure the time taken to service the SMI. To run this stressor, one needs the --pathological option since it may hang the computer; SMIs behave like non-maskable interrupts:
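
A hedged sketch of such a run (the instance count and timeout are my own choices):

# requires root and --pathological; SMIs behave like NMIs and may hang the machine
sudo stress-ng --smi 1 --timeout 60 --pathological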

To exercise the munmap(2) system call a new munmap stressor has been added. This creates child processes that walk through their memory mappings from /proc/$pid/maps and unmap pages of libraries that are not being used. The unmapping is performed by striding across the mapping in page multiples of prime size to create many mapping holes and exercise the VM mapping structures. These unmappings can cause SIGSEGV segmentation faults that are silently handled, respawning a new child stressor. Example:
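
A minimal sketch of invoking it, assuming the usual --<stressor> <instances> convention:

# run 2 munmap stressor child processes for 60 seconds
stress-ng --munmap 2 --timeout 60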

 

There are some new options for the fork, vfork and vforkmany stressors: a new vm mode has been added to exercise virtual memory mappings. This enables detrimental-performance virtual memory advice using madvise on all pages of the new child process. Where possible this will try to mark every page in the new process with the madvise MADV_MERGEABLE, MADV_WILLNEED, MADV_HUGEPAGE and MADV_RANDOM flags. The following shows how to enable the vm options for the fork and vfork stressors:
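
A sketch of what that could look like, assuming the per-stressor option is spelled --fork-vm / --vfork-vm:

# assumption: the vm mode is enabled per stressor via --fork-vm and --vfork-vm
stress-ng --fork 4 --fork-vm --vfork 4 --vfork-vm --timeout 60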

One final new feature is the --skip-silent option. This disables the printing of messages when a stressor is skipped, for example when the stressor is not supported by the kernel or hardware, or a support library is not available.
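
For instance (a sketch; without --pathological the smi stressor is normally skipped, and --skip-silent suppresses the skip message):

stress-ng --smi 1 --timeout 10 --skip-silent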

As usual for each release, stress-ng incorporates bug fixes and has been tested on a wide variety of Linux/*BSD/UNIX/POSIX systems and across a range of processor architectures (arm32, arm64, amd64, i386, ppc64el, RISC-V, s390x, sparc64, m68k). It has also been statically analysed with Coverity and cppcheck and built cleanly with pedantic build flags on gcc and clang.

on July 09, 2021 10:15 AM

July 08, 2021

Preamble

I recently started working for InfluxData as a Developer Advocate on Telegraf, an open source server agent to collect metrics. Telegraf builds from source to ship as a single Go binary. The latest, 1.19.1, was released just yesterday. Part of my job involves helping users by reproducing reported issues, and assisting developers by testing their pull requests. It’s fun stuff, I love it. Telegraf has an extensive set of plugins which support gathering, aggregating & processing metrics, and sending the results to other systems.
on July 08, 2021 11:00 AM

July 07, 2021

Full Circle Weekly News #217

Full Circle Magazine


Linux 5.13 kernel release:
https://lkml.org/lkml/2021/6/27/202

LTSM proposed:
https://github.com/AndreyBarmaley/linux-terminal-service-manager

Release of Mixxx 2.3, the free music mixing app:
http://mixxx.org/

Ubuntu is moving away from dark headers and light backgrounds:
https://github.com/ubuntu/yaru/pull/2922

Ultimaker Cura 4.10 released:
https://ultimaker.com/learn/an-improved-engineering-workflow-with-ultimaker-cura-4-10

Pop!_OS 21.04 distribution offers new COSMIC desktop:
https://system76.com/pop

SeaMonkey 2.53.8 Integrated Internet Application Suite Released:
https://www.seamonkey-project.org/news#2021-06-30

Suricata Intrusion Detection System Update:
https://suricata.io/2021/06/30/new-suricata-6-0-3-and-5-0-7-releases/

AlmaLinux includes support for ARM64:
https://wiki.almalinux.org/release-notes/8.4-arm.html

Qutebrowser 2.3 released:
https://lists.schokokeks.org/pipermail/qutebrowser-announce/2021-June/000104.html

Tux Paint 0.9.26 is released:
http://www.tuxpaint.org/latest/tuxpaint-0.9.26-press-release.php

Jim Whitehurst, head of Red Hat, steps down as president of IBM:
https://www.cnbc.com/quotes/IBM

OpenZFS 2.1 release with dRAID support
https://github.com/openzfs/zfs/releases/tag/zfs-2.1.0

Neovim 0.5, available:
https://github.com/neovim/neovim/releases/tag/v0.5.0

Audacity’s new privacy policy allows data collection for the benefit of government authorities:
https://news.ycombinator.com/item?id=27724389

AbiWord 3.0.5 update:
http://www.abisource.com/release-notes/3.0.5.phtml

 

Credits:
Full Circle Magazine
@fullcirclemag
Host: @bardictriad, @zaivala@hostux.social
Bumper: Canonical
Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on July 07, 2021 10:28 AM

July 05, 2021

When Julian Andres Klode and I added initial Zstandard compression support to Ubuntu’s APT and dpkg in Ubuntu 18.04 LTS, we planned to get the changes accepted into Debian quickly and make Ubuntu 18.10 the first release where the new compression could speed up package installations and upgrades. Well, it took slightly longer than that.

Since then many other packages have been updated to support zstd compressed packages and read-only compression has been back-ported to the 16.04 Xenial LTS release, too, on Ubuntu’s side. In Debian, zstd support is available now in APT, debootstrap and reprepro (thanks Dimitri!). It is still under review for inclusion in Debian’s dpkg (BTS bug 892664).

Given that there is sufficient archive-wide support for zstd, Ubuntu is switching to zstd-compressed packages in Ubuntu 21.10, the current development release. Please welcome hello/2.10-2ubuntu3, the first zstd-compressed Ubuntu package, which will be followed by many others built with dpkg (>= 1.20.9ubuntu2). Enjoy the speed!
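
A quick way to check which compression a package uses (the wildcard stands in for the exact architecture suffix; a .deb is an ar archive, so its member names reveal the compressor):

apt-get download hello
ar t hello_2.10-2ubuntu3*.deb   # a zstd-compressed package lists a data.tar.zst member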

on July 05, 2021 06:26 PM

June 30, 2021

Wrapping Up June 2021

Stephen Michael Kellat

In no particular order:

  • My author copies of the third novella have finally shown up. I am pleased with how they turned out. The next item is likely to go in a different direction; it may build off a tabletop exercise scenario. We’ll see what happens.
  • My laptop did not pass the test for Windows 11 upgrade readiness. I don’t think I have anything that meets the bar for that. While I need to keep at least one foot in the Windows world, due to the various bits of proprietary Windows-only software mandated by the state government upon local governments, I don’t have to like the situation.
  • Use of an SDR dongle is a bit rougher at my home than I would have thought. The local environment is very challenging when it comes to electrical smog.

Tags: Life

on June 30, 2021 04:28 AM

June 26, 2021

Another procenv release that is bringing the OSX version up to parity. Also, thanks to harens, procenv is now available in MacPorts! If anyone knows about querying IPC details on Darwin (via Mach?), please comment on the GitHub issue (or raise a PR :)
on June 26, 2021 11:59 AM

June 21, 2021

Lines of code (LOC) has some known flaws, but one of its advantages is that humans can visualize what a small enough number means. For bigger numbers, like 100,000 vs 200,000 lines of code, it really doesn't help us picture the difference.

For big enough changes, you could switch to just compressing the diff and measuring that. That also nicely tracks what developers would have to actually download to get the new changes. It also helps with understanding the bandwidth requirements of contributing to a project.

Here is what it looks like for the Linux kernel since 4.1. (For rc1s only - the other rcs are in the 30-100 KiB range.)

Compressed_Only

Here is a comparison of how far apart the LOC numbers are from the compressed diff numbers - the longer the line is the further apart they are. The numbers are normalized to 0-1. As you can see, they generally line up.

Compressed_vs_LCO

(You can get the raw spreadsheet here )
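
The kernel numbers above can be reproduced with the same pipeline used for systemd below; the tag pair here is only an illustrative example:

# inside a linux.git checkout: compressed size of the diff between a release and the next rc1
git diff v5.12 v5.13-rc1 | xz -c -q | wc -c | numfmt --to=iec-i --round=nearest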

Let's get some numbers from another project - say systemd.

$ git tag --list --sort=creatordate | tail

#Pick the last two major releases..
$ git diff v247 v248 |  xz -c -q | wc -c | numfmt --to=iec-i --round=nearest
1.1MiB

Conclusion

This isn't ground breaking, but it may prove to be slightly more useful than using LOCs. At the very least as an alternative, it could help put less emphasis on LOCs.

Some interesting future things to look at:

  • Better comparisons between software projects using different languages?
  • Tracking other changes to software projects in a similar way (Wikis, MLs).
  • Compare with other kinds of projects. For instance Wikipedia does track changes monthly by the GB.

Comments and Feedback

Feel free to make a PR to add comments!

on June 21, 2021 11:35 PM

June 20, 2021

Migrating away from apt-key

Julian Andres Klode

This is an edited copy of an email I sent to provide guidance to users of apt-key as to how to handle things in a post apt-key world.

The manual page already provides all you need to know for replacing apt-key add usage:

Note: Instead of using this command a keyring should be placed directly in the /etc/apt/trusted.gpg.d/ directory with a descriptive name and either “gpg” or “asc” as file extension

So it’s kind of surprising people need step by step instructions for how to copy/download a file into a directory.

I’ll also discuss the alternative security snakeoil approach with signed-by that’s become popular. Maybe we should not have added signed-by, people seem to forget that debs still run maintainer scripts as root.

Aside from this email, Debian users should look into extrepo, which manages curated external repositories for you.

Direct translation

Assume you currently have:

wget -qO- https://myrepo.example/myrepo.asc | sudo apt-key add -

To translate this directly for bionic and newer, you can use:

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.asc https://myrepo.example/myrepo.asc

or to avoid downloading as root:

wget -qO-  https://myrepo.example/myrepo.asc | sudo tee -a /etc/apt/trusted.gpg.d/myrepo.asc

Older (and all) releases only support unarmored files with an extension .gpg. If you care about them, provide one, and use

sudo wget -qO /etc/apt/trusted.gpg.d/myrepo.gpg https://myrepo.example/myrepo.gpg

Some people will tell you to download the .asc and pipe it to gpg --dearmor, but gpg might not be installed, so really, just offer a .gpg one instead that is supported on all systems.

wget might not be available everywhere so you can use apt-helper:

sudo /usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /etc/apt/trusted.gpg.d/myrepo.asc

or, to avoid downloading as root:

/usr/lib/apt/apt-helper download-file https://myrepo.example/myrepo.asc /tmp/myrepo.asc && sudo mv /tmp/myrepo.asc /etc/apt/trusted.gpg.d

Pretending to be safer by using signed-by

People say it’s good practice to not use trusted.gpg.d and instead install the file elsewhere, referring to it from the sources.list entry by using signed-by=<path to the file>. This looks a lot safer, because now your key can’t sign other unrelated repositories. In practice, the security increase is minimal, since package maintainer scripts run as root anyway. But I guess it’s better for publicity :)

As an example, here are the instructions to install signal-desktop from signal.org. As mentioned, gpg --dearmor use in there is not a good idea, and I’d personally not tell people to modify /usr as it’s supposed to be managed by the package manager, but we don’t have an /etc/apt/keyrings or similar at the moment; it’s fine though if the keyring is installed by the package. You can also just add the file there as a starting point, and then install a keyring package overriding it (pretend there is a signal-desktop-keyring package below that would override the .gpg we added).

# NOTE: These instructions only work for 64 bit Debian-based
# Linux distributions such as Ubuntu, Mint etc.

# 1. Install our official public software signing key
wget -O- https://updates.signal.org/desktop/apt/keys.asc | gpg --dearmor > signal-desktop-keyring.gpg
cat signal-desktop-keyring.gpg | sudo tee -a /usr/share/keyrings/signal-desktop-keyring.gpg > /dev/null

# 2. Add our repository to your list of repositories
echo 'deb [arch=amd64 signed-by=/usr/share/keyrings/signal-desktop-keyring.gpg] https://updates.signal.org/desktop/apt xenial main' |\
  sudo tee -a /etc/apt/sources.list.d/signal-xenial.list

# 3. Update your package database and install signal
sudo apt update && sudo apt install signal-desktop

I do wonder why they do wget | gpg --dearmor, pipe that into the file and then cat | sudo tee it, instead of having that all in one pipeline. Maybe they want nicer progress reporting.

Scenario-specific guidance

We have three scenarios:

For system image building, shipping the key in /etc/apt/trusted.gpg.d seems reasonable to me; you are the vendor sort of, so it can be globally trusted.

Chrome-style debs and repository config debs: If you ship a deb, embedding the sources.list.d snippet (calling it $myrepo.list) and shipping a $myrepo.gpg in /usr/share/keyrings is the best approach. Whether you ship that in product debs aka vscode/chromium or provide a repository configuration deb (let’s call it myrepo-repo.deb) and then tell people to run apt update followed by apt install <package inside the repo> depends on how many packages are in the repo, I guess.

Manual instructions (signal style): The third case, where you tell people to run wget themselves, I find tricky. As we see with signal, just stuffing keyring files into /usr/share/keyrings is popular, despite /usr being supposed to be managed by the package manager. We don’t have another dir inside /etc (or /usr/local), so it’s hard to suggest something else. There’s no significant benefit from actually using signed-by, so it’s kind of extra work for little gain, though.

Addendum: Future work

This part is new, just for this blog post. Let’s look at upcoming changes and how they make things easier.

Bundled .sources files

Assuming I get my merge request merged, the next version of APT (2.4/2.3.something) will do away with all the complexity and allow you to embed the key directly into a deb822 .sources file (which have been available for some time now):

Types: deb
URIs: https://myrepo.example/ https://myotherrepo.example/
Suites: stable not-so-stable
Components: main
Signed-By:
 -----BEGIN PGP PUBLIC KEY BLOCK-----
 .
 mDMEYCQjIxYJKwYBBAHaRw8BAQdAD/P5Nvvnvk66SxBBHDbhRml9ORg1WV5CvzKY
 CuMfoIS0BmFiY2RlZoiQBBMWCgA4FiEErCIG1VhKWMWo2yfAREZd5NfO31cFAmAk
 IyMCGyMFCwkIBwMFFQoJCAsFFgIDAQACHgECF4AACgkQREZd5NfO31fbOwD6ArzS
 dM0Dkd5h2Ujy1b6KcAaVW9FOa5UNfJ9FFBtjLQEBAJ7UyWD3dZzhvlaAwunsk7DG
 3bHcln8DMpIJVXht78sL
 =IE0r
 -----END PGP PUBLIC KEY BLOCK-----

Then you can just provide a .sources file to users, they place it into sources.list.d, and everything magically works.

We will probably add a nice apt add-source command for it, I guess.

Well, python-apt’s aptsources package still does not support deb822 sources, and never will; we’ll need an aptsources2 for that for backwards-compatibility reasons, and then port software-properties and other users to it.

OpenPGP vs aptsign

We do have a better, tighter replacement for gpg in the works which uses Ed25519 keys to sign Release files. It’s temporarily named aptsign, but it’s a generic signer for single-section deb822 files, similar to signify/minisign.

We believe that this solves the security nightmare that our OpenPGP integration is while reducing complexity at the same time. Keys are much shorter, so the bundled sources file above will look much nicer.

on June 20, 2021 07:42 AM

June 18, 2021

Wait what?

Yes, there are a couple of ways for you, the user, the contributor, the amazing human being who wants to improve the software that is used by millions, to write automated tests and have bots doing all the work for you, once you’ve signed a binding contract with the blood of a unicorn and have obtained API keys for our public https://openqa.opensuse.org instance.

For now I will leave out the details of how to get those, and will rather point you to the #factory IRC channel (or Discord), where you can get in touch with the current admins, who will be able to guide you through the process.

I have the keys

You should get operator keys and they would look like this (more or less):

[openqa.opensuse.org]
key = 45ABCEB4562ACB04
secret = 4BA0003086C4CB95

Multipass

Now let’s do this

I will assume that you’re using openSUSE Tumbleweed; instructions are similar for Leap, but if you’re looking for something more esoteric, check the bootstrapping guide.

Bring up a terminal or your favorite package manager and install openQA-client; it will pull in almost everything you will need.

zypper in openQA-client

Once we’re here, we’ve gotta clone the git repo with the tests being run on openqa.opensuse.org. To do that, let’s create a directory; let’s call it Skynet and create it in our user’s home (my user’s home is /home/foursixnine \o/), then use git to clone the test repository: https://github.com/os-autoinst/os-autoinst-distri-opensuse/.

cd $HOME
mkdir Skynet
cd Skynet
git clone https://github.com/os-autoinst/os-autoinst-distri-opensuse/

Now since we already cloned the test distribution, and also got the openQA client installed, all there is to do is:

1 - Hacking & committing
2 - Scheduling a test run

At this point we can do #1 fairly easily, but #2 needs a bit of a push (no pun intended); this is where we will need the API keys that we requested in the beginning.

For now we will rely on openqa-clone-custom-git-refspec, which reads configuration parameters from “$OPENQA_CONFIG/client.conf”, “~/.config/openqa/client.conf” and “/etc/openqa/client.conf” (you can run openqa-cli --help to get more detailed info on this). Open up your favorite editor and let’s create the directories and files we’ll need:

mkdir ~/.config/openqa
$EDITOR ~/.config/openqa/client.conf

Paste in the API keys you already have, and you will be able to post and create jobs on the public instance!

Let’s get hacking

Hacker!

This part is pretty straightforward once you’ve looked at $HOME/Skynet/os-autoinst-distri-opensuse.

For this round, let’s say we also want to test Chrome in incognito mode. By looking at Chrome’s help we know that --incognito is a thing.

So let’s go to where all the tests are, edit, commit and push our changes

Remember to set up your fork; if you want to make your life easier, use hub (you can find it in the repos too)!

cd $HOME/Skynet/os-autoinst-distri-opensuse
vim tests/x11/chrome.pm
git commit -m "Message goes here" tests/x11/chrome.pm
git push $REMOTE
openqa-clone-custom-git-refspec \
 https://github.com/foursixnine/os-autoinst-distri-opensuse/tree/test_incognito_in_chrome \
 https://openqa.opensuse.org/tests/1792294 \
 SCHEDULE=tests/installation/bootloader_start,tests/boot/boot_to_desktop,tests/console/consoletest_setup,tests/x11/chrome \
 BUILD=0 \
 TEST=openQA-WORKSHOP-ALL-CAPS

In the end you will end up with a URL such as https://openqa.opensuse.org/t1793764, and you will get emails from Travis if something goes wrong.

asciicast

on June 18, 2021 12:00 AM

June 13, 2021

Earlier this week it was time for GitOps Days again. The third time now and the event has grown quite a bit since we started. Born out of the desire to bring GitOps practitioners together during pandemic times initially, this time we had a proper CFP and the outcome was just great: lots of participation from a very diverse crowd of experts - we had panels, case studies, technical deep dives, comparisons of different solutions and more.
on June 13, 2021 06:06 AM

June 11, 2021

Over the past few posts, I covered the hardware I picked up to setup a small LXD cluster and get it all setup at a co-location site near home. I’ve then gone silent for about 6 months, not because anything went wrong but just because of not quite finding the time to come back and complete this story!

So let’s pick things up where I left them with the last post and cover the last few bits of the network setup and then go over what happened over the past 6 months.

Routing in a HA environment

You may recall that the 3 servers are each connected to a top-of-rack switch (bonded dual-gigabit) as well as to each other (bonded dual-10-gigabit). The netplan config in the previous post would allow each of the servers to talk to the others directly and establish a few VLANs on the link to the top-of-rack switch.

Those are for:

  • WAN-HIVE: Peering VLAN with my provider containing their core routers and mine
  • INFRA-UPLINK: OVN uplink network (where all the OVN virtual routers get their external addresses)
  • INFRA-HOSTS: VLAN used for external communication with the servers
  • INFRA-BMC: VLAN used for the management ports of the servers (BMCs) and switch, isolated from the internet

Simply put, the servers have their main global address and default gateway on INFRA-HOSTS, the BMCs and switch have their management addresses in INFRA-BMC, INFRA-UPLINK is consumed by OVN and WAN-HIVE is how I access the internet.

In my setup, I then run three containers, one on each server, each of which gets direct access to all those VLANs and acts as a router using FRR. FRR is configured to establish BGP sessions with both of my provider’s core routers, getting routing to the internet that way and announcing my IPv4 and IPv6 subnets that way too.
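
For illustration, a heavily trimmed sketch of that kind of FRR BGP setup, driven through vtysh; the ASNs, peer addresses and prefixes here are placeholders rather than the real configuration:

# define a BGP peering with two upstream routers and announce one IPv6 prefix (placeholder values)
vtysh -c 'configure terminal' \
      -c 'router bgp 64512' \
      -c 'neighbor 203.0.113.1 remote-as 64500' \
      -c 'neighbor 203.0.113.2 remote-as 64500' \
      -c 'address-family ipv6 unicast' \
      -c 'network 2001:db8:100::/48' \
      -c 'neighbor 203.0.113.1 activate' \
      -c 'neighbor 203.0.113.2 activate' \
      -c 'exit-address-family'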

LXD output showing the 3 FRR routers

On the internal side of things, I’m using VRRP to provide a virtual router internally. Typically this means that frr01 is the default gateway for all egress traffic while ingress traffic is somewhat spread across all 3 thanks to them having the same BGP weight (so my provider’s routers distribute the connections across all active peers).

With that in place, so long as one of the FRR instances is running, connectivity is maintained. This makes doing maintenance quite easy as there is effectively no SPOF.

Enter LXD networks with OVN

Now for where things get a bit trickier. As I’m using OVN to provide virtual networks inside of LXD, each of those networks will typically need some amount of public addressing. For IPv6, I don’t do NAT so each of my networks get a public /64 subnet. For IPv4, I have a limited number of those, so I just assign them one by one (/32) directly to specific instances.

Whenever such a network is created, it will grab an IPv4 and IPv6 address from the subnet configured on INFRA-UPLINK. That part is all good and the OVN gateway becomes immediately reachable.

The issue is with the public IPv6 subnet used by each network and with any additional addresses (IPv4 or IPv6) which are routed directly to its instances. For that to work, I need my routers to send the traffic headed for those subnets to the correct OVN gateway.

But how do you do that? Well, there are pretty much three options here:

  • You use LXD’s default mode of performing NDP proxying. Effectively, LXD will configure OVN to directly respond to ARP/NDP on the INFRA-UPLINK VLAN as if the gateway itself was holding the address being reached.
    This is a nice trick which works well at pretty small scale. But it relies on LXD configuring a static entry for every single address in the subnet. So that’s fine for a few addresses but not so much when you’re talking a /64 IPv6 subnet.
  • You add static routing rules to your routers. Basically you run lxc network show some-name and look for the IPv4 and IPv6 addresses that the network got assigned, then you go on your routers and configure static routes for all the addresses that need to be sent to that OVN gateway. It works, but it’s pretty manual and effectively prevents you from delegating network creation to anyone who isn’t also the network admin.
  • You use dynamic routing to have all public subnets and addresses configured on LXD to be advertised to the routers with the correct next-hop address. With this, there is no need to configure anything manually, keeping the OVN config very simple and allowing any user of the cluster to create their own networks and get connectivity.

Naturally I went with the last one. At the time, there was no way to do that through LXD, so I made my own by writing lxd-bgp. This is a pretty simple piece of software which uses the LXD API to inspect its networks, determine all OVN networks tied to a particular uplink network (INFRA-UPLINK in my case) and then inspect all instances running on that network.

It then sends announcements both for the subnets backing each OVN network and for the specific routes/addresses that are routed on top of that to particular instances running on the local system.

The result is that when an instance with a static IPv4 and IPv6 starts, the lxd-bgp instance running on that particular system will send an announcement for those addresses and traffic will start flowing.
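As a side note on that network-inspection step, everything a tool like lxd-bgp needs is exposed through the LXD REST API over the local unix socket. The snippet below is a minimal sketch of that kind of query (this is not lxd-bgp’s actual code, and the socket path shown is the usual snap location, which may differ on your install):

/* Build with: cc lxd_networks.c -o lxd_networks -lcurl */
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl = curl_easy_init();
    CURLcode res;

    if (!curl)
        return 1;

    /* Talk to the local LXD daemon over its unix socket (snap path assumed). */
    curl_easy_setopt(curl, CURLOPT_UNIX_SOCKET_PATH,
                     "/var/snap/lxd/common/lxd/unix.socket");

    /* List all networks with full details, including the addresses
     * each OVN network was allocated on its uplink. */
    curl_easy_setopt(curl, CURLOPT_URL, "http://lxd/1.0/networks?recursion=1");

    /* The JSON response is written to stdout by default. */
    res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    return res == CURLE_OK ? 0 : 1;
}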

Now deploy the same service on 3 servers, put them into 3 different LXD networks and set the exact same static IPv4 and IPv6 addresses on them, and you have a working anycast service. When one of the containers or its host goes down for some reason, that route announcement goes away and the traffic heads to the remaining instances. That does a decent job of simple load-balancing and provides pretty solid service availability!

LXD output of my 3 DNS servers (backing ns1.stgraber.org) and using anycast

The past 6 months

Now that we’ve covered the network setup I’m running, let’s spend a bit of time going over what happened over the past 6 months!

The servers and switch installed in the cabinet

In short, well, not a whole lot. Things have pretty much just been working. The servers were installed in the datacenter on the 21st of December. I’ve then been busy migrating services from my old server at OVH over to the new cluster, finalizing that migration at the end of April.

I’ve gotten into the habit of doing a full reboot of the entire cluster every week and developed a bit of tooling for this called lxd-evacuate. This makes it easy to relocate any instance which isn’t already highly available, emptying a specific machine and then letting me reboot it. By and large this has been working great and it’s always nice to have confidence that should something happen, you know all the machines will boot up properly!

These days, I’m running 63 instances across 9 projects and a dozen networks. I spent a bit of time building up a Grafana dashboard which tracks and alerts on my network consumption (WAN port, uplink to servers and mesh), monitors the health of my servers (fan speeds, temperature, …), tracks Ceph consumption and performance, monitors the CPU, RAM and load of each of the servers and also tracks performance of my top services (NSD, unbound and HAProxy).

LXD also rolled out support for network ACLs somewhat recently, allowing for proper stateful firewalling directly through LXD and implemented in OVN. It took some time to set up all those ACLs for all instances and networks but that’s now all done and makes me feel a whole lot better about service security!

What’s next

On the LXD front, I’m excited about a few things we’re doing over the next few months which will make environments like mine just that much nicer:

  • Native BGP support (no more lxd-bgp)
  • Native cluster server evacuation (no more lxd-evacuate)
  • Built-in DNS server for instance forward/reverse records as well as DNS zones tied to networks
  • Built-in metrics (prometheus) endpoint exposing CPU/memory/disk/network usage of all local instances

This will let me deprecate some of the side projects I had to start as part of this work, will reduce the amount of manual labor involved in setting up all the DNS records and will give me much better insight into what’s consuming resources on the cluster.

I’m also in the process of securing my own ASN and address space through ARIN, mostly because that seemed like a fun thing to do and will give me a tiny bit more flexibility too (not to mention let me consolidate a whole bunch of subnets). So soon enough, I expect to have to deal with quite a bit of re-addressing, but I’m sure it will be a fun and interesting experience!

on June 11, 2021 10:07 PM

We are pleased to announce that Plasma 5.22.0 is now available in our backports PPA for Kubuntu 21.04 Hirsute Hippo.

The release announcement detailing the new features and improvements in Plasma 5.22 can be found here.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt full-upgrade

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.22, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more rounds of stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.21 as included in the original 21.04 (Hirsute) release.

The Kubuntu Backports PPA for 21.04 also currently contains newer versions of KDE Frameworks, Applications, and other KDE software. The PPA will also continue to receive updates of KDE packages other than Plasma.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.libera.chat
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on June 11, 2021 07:19 PM

SSH quoting

Colin Watson

A while back there was a thread on one of our company mailing lists about SSH quoting, and I posted a long answer to it. Since then a few people have asked me questions that caused me to reach for it, so I thought it might be helpful if I were to anonymize the original question and post my answer here.

The question was why a sequence of commands involving ssh and fiddly quoting produced the output they did. The first example was this:

$ ssh user@machine.local bash -lc "cd /tmp;pwd"
/home/user

Oh hi, my dubious life choices have been such that this is my specialist subject!

This is because SSH command-line parsing is not quite what you expect.

First, recall that your local shell will apply its usual parsing, and the actual OS-level execution of ssh will be like this:

[0]: ssh
[1]: user@machine.local
[2]: bash
[3]: -lc
[4]: cd /tmp;pwd

Now, the SSH wire protocol only takes a single string as the command, with the expectation that it should be passed to a shell by the remote end. The OpenSSH client deals with this by taking all its arguments after things like options and the target, which in this case are:

[0]: bash
[1]: -lc
[2]: cd /tmp;pwd

It then joins them with a single space:

bash -lc cd /tmp;pwd

This is passed as a string to the server, which then passes that entire string to a shell for evaluation, so as if you’d typed this directly on the server:

sh -c 'bash -lc cd /tmp;pwd'

The shell then parses this as two commands:

bash -lc cd /tmp
pwd

The directory change thus happens in a subshell (actually it doesn’t quite even do that, because bash -lc cd /tmp in fact ends up just calling cd because of the way bash -c parses multiple arguments), and then that subshell exits, then pwd is called in the outer shell which still has the original working directory.
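If it helps to see the joining step in isolation, here is a small sketch of what effectively happens to the words left over after option parsing (this is just an illustration, not OpenSSH’s actual code; the binary name join below is arbitrary):

#include <stdio.h>
#include <string.h>

/* Join argv[1..] with single spaces, the way the OpenSSH client
 * effectively builds the remote command string. */
int main(int argc, char **argv)
{
    char command[1024] = "";

    for (int i = 1; i < argc; i++) {
        if (i > 1)
            strncat(command, " ", sizeof(command) - strlen(command) - 1);
        strncat(command, argv[i], sizeof(command) - strlen(command) - 1);
    }

    printf("%s\n", command);
    return 0;
}

Running it as ./join bash -lc "cd /tmp;pwd" prints bash -lc cd /tmp;pwd, which is the single string the server then hands to sh -c: your local shell has already removed the quotes before ssh ever sees the arguments.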

The second example was this:

$ ssh user@machine.local bash -lc "pwd;cd /tmp;pwd"
/home/user
/tmp

Following the logic above, this ends up as if you’d run this on the server:

sh -c 'bash -lc pwd; cd /tmp; pwd'

The third example was this:

$ ssh user@machine.local bash -lc "cd /tmp;cd /tmp;pwd"
/tmp

And this is as if you’d run:

sh -c 'bash -lc cd /tmp; cd /tmp; pwd'

Now, I wouldn’t have implemented the SSH client this way, because I agree that it’s confusing. But /usr/bin/ssh is used as a transport for other things so much that changing its behaviour now would be enormously disruptive, so it’s probably impossible to fix. (I have occasionally agitated on openssh-unix-dev@ for at least documenting this better, but haven’t made much headway yet; I need to get round to preparing a documentation patch.) Once you know about it you can use the proper quoting, though. In this case that would simply be:

ssh user@machine.local 'cd /tmp;pwd'

Or if you do need to specifically invoke bash -l there for some reason (I’m assuming that the original example was reduced from something more complicated), then you can minimise your confusion by passing the whole thing as a single string in the form you want the remote sh -c to see, in a way that ensures that the quotes are preserved and sent to the server rather than being removed by your local shell:

ssh user@machine.local 'bash -lc "cd /tmp;pwd"'

Shell parsing is hard.

on June 11, 2021 10:22 AM

June 05, 2021

Note: Though this testing was done on Google Cloud and I work at Google, this work and blog post represent my personal work and do not represent the views of my employer.

As a red teamer and security researcher, I occasionally find the need to crack some hashed passwords. It used to be that John the Ripper was the go-to tool for the job. With the advent of GPGPU technologies like CUDA and OpenCL, hashcat quickly eclipsed John for pure speed. Unfortunately, graphics cards are a bit hard to come by in 2021. I decided to take a look at the options for running hashcat on Google Cloud.

There are several steps involved in getting hashcat running with CUDA, and because I often only need to run the instance for a short period of time, I put together a script to spin up hashcat on a Google Cloud VM. It can either run the benchmark or spin up an instance with arbitrary flags. It starts the instance but does not stop it upon completion, so if you want to give it a try, make sure you shut down the instance when you’re done with it. (It leaves the hashcat job running in a tmux session for you to examine.)

At the moment, there are 6 available GPU accelerators on Google Cloud, spanning the range of architectures from Kepler to Ampere (see pricing here):

  • NVIDIA A100 (Ampere)
  • NVIDIA T4 (Turing)
  • NVIDIA V100 (Volta)
  • NVIDIA P4 (Pascal)
  • NVIDIA P100 (Pascal)
  • NVIDIA K80 (Kepler)

Performance Results

I chose a handful of common hashes as representative samples across the different architectures. These include MD5, SHA1, NTLM, sha512crypt, and WPA-PBKDF2. These represent some of the most common password cracking situations encountered by penetration testers. Unsurprisingly, overall performance is most directly related to the number of CUDA cores, followed by speed and architecture.

Relative Performance Graph

Speeds in the graph are normalized to the slowest model in each test (the K80 in all cases).

Note that the Ampere-based A100 is 11-15 times as fast as the slowest K80. (On some of the benchmarks, it can reach 55 times as fast, but these are less common.) There’s a wide range of hardware here, and depending on availability and GPU type, you can attach from 1 to 16 GPUs to a single instance, and hashcat can spread the load across all of the attached GPUs.

Full results of all of the tests, using the slowest hardware as a baseline for percentages:

Algorithm | nvidia-tesla-k80 | nvidia-tesla-p100 | nvidia-tesla-p4 | nvidia-tesla-v100 | nvidia-tesla-t4 | nvidia-tesla-a100
0 - MD5 | 4.3 GH/s (100.0%) | 27.1 GH/s (622.2%) | 16.6 GH/s (382.4%) | 55.8 GH/s (1283.7%) | 18.8 GH/s (432.9%) | 67.8 GH/s (1559.2%)
100 - SHA1 | 1.9 GH/s (100.0%) | 9.7 GH/s (497.9%) | 5.6 GH/s (286.6%) | 17.5 GH/s (905.4%) | 6.6 GH/s (342.8%) | 21.7 GH/s (1119.1%)
1400 - SHA2-256 | 845.7 MH/s (100.0%) | 3.3 GH/s (389.5%) | 2.0 GH/s (238.6%) | 7.7 GH/s (904.8%) | 2.8 GH/s (334.8%) | 9.4 GH/s (1116.7%)
1700 - SHA2-512 | 230.3 MH/s (100.0%) | 1.1 GH/s (463.0%) | 672.5 MH/s (292.0%) | 2.4 GH/s (1039.9%) | 789.9 MH/s (343.0%) | 3.1 GH/s (1353.0%)
22000 - WPA-PBKDF2-PMKID+EAPOL (Iterations: 4095) | 80.7 kH/s (100.0%) | 471.4 kH/s (584.2%) | 292.9 kH/s (363.0%) | 883.5 kH/s (1094.9%) | 318.3 kH/s (394.5%) | 1.1 MH/s (1354.3%)
1000 - NTLM | 7.8 GH/s (100.0%) | 49.9 GH/s (643.7%) | 29.9 GH/s (385.2%) | 101.6 GH/s (1310.6%) | 33.3 GH/s (429.7%) | 115.3 GH/s (1487.3%)
3000 - LM | 3.8 GH/s (100.0%) | 25.0 GH/s (661.9%) | 13.1 GH/s (347.8%) | 41.5 GH/s (1098.4%) | 19.4 GH/s (514.2%) | 65.1 GH/s (1722.0%)
5500 - NetNTLMv1 / NetNTLMv1+ESS | 5.0 GH/s (100.0%) | 26.6 GH/s (533.0%) | 16.1 GH/s (322.6%) | 54.9 GH/s (1100.9%) | 19.7 GH/s (395.6%) | 70.6 GH/s (1415.7%)
5600 - NetNTLMv2 | 322.1 MH/s (100.0%) | 1.8 GH/s (567.5%) | 1.1 GH/s (349.9%) | 3.8 GH/s (1179.7%) | 1.4 GH/s (439.4%) | 5.0 GH/s (1538.1%)
1500 - descrypt, DES (Unix), Traditional DES | 161.7 MH/s (100.0%) | 1.1 GH/s (681.5%) | 515.3 MH/s (318.7%) | 1.7 GH/s (1033.9%) | 815.9 MH/s (504.6%) | 2.6 GH/s (1606.8%)
500 - md5crypt, MD5 (Unix), Cisco-IOS $1$ (MD5) (Iterations: 1000) | 2.5 MH/s (100.0%) | 10.4 MH/s (416.4%) | 6.3 MH/s (251.1%) | 24.7 MH/s (989.4%) | 8.7 MH/s (347.6%) | 31.5 MH/s (1260.6%)
3200 - bcrypt $2*$, Blowfish (Unix) (Iterations: 32) | 2.5 kH/s (100.0%) | 22.9 kH/s (922.9%) | 13.4 kH/s (540.7%) | 78.4 kH/s (3155.9%) | 26.7 kH/s (1073.8%) | 135.4 kH/s (5450.9%)
1800 - sha512crypt $6$, SHA512 (Unix) (Iterations: 5000) | 37.9 kH/s (100.0%) | 174.6 kH/s (460.6%) | 91.6 kH/s (241.8%) | 369.6 kH/s (975.0%) | 103.5 kH/s (273.0%) | 535.4 kH/s (1412.4%)
7500 - Kerberos 5, etype 23, AS-REQ Pre-Auth | 43.1 MH/s (100.0%) | 383.9 MH/s (889.8%) | 186.7 MH/s (432.7%) | 1.0 GH/s (2427.2%) | 295.0 MH/s (683.8%) | 1.8 GH/s (4281.9%)
13100 - Kerberos 5, etype 23, TGS-REP | 32.3 MH/s (100.0%) | 348.8 MH/s (1080.2%) | 185.3 MH/s (573.9%) | 1.0 GH/s (3123.0%) | 291.7 MH/s (903.4%) | 1.8 GH/s (5563.8%)
15300 - DPAPI masterkey file v1 (Iterations: 23999) | 15.6 kH/s (100.0%) | 80.8 kH/s (519.0%) | 50.2 kH/s (322.3%) | 150.9 kH/s (968.9%) | 55.6 kH/s (356.7%) | 187.2 kH/s (1202.0%)
15900 - DPAPI masterkey file v2 (Iterations: 12899) | 8.1 kH/s (100.0%) | 36.7 kH/s (451.0%) | 22.1 kH/s (271.9%) | 79.9 kH/s (981.4%) | 31.3 kH/s (385.0%) | 109.2 kH/s (1341.5%)
7100 - macOS v10.8+ (PBKDF2-SHA512) (Iterations: 1023) | 104.1 kH/s (100.0%) | 442.6 kH/s (425.2%) | 272.5 kH/s (261.8%) | 994.6 kH/s (955.4%) | 392.5 kH/s (377.0%) | 1.4 MH/s (1304.0%)
11600 - 7-Zip (Iterations: 16384) | 91.9 kH/s (100.0%) | 380.5 kH/s (413.8%) | 217.0 kH/s (236.0%) | 757.8 kH/s (824.2%) | 266.6 kH/s (290.0%) | 1.1 MH/s (1218.6%)
12500 - RAR3-hp (Iterations: 262144) | 12.1 kH/s (100.0%) | 64.2 kH/s (528.8%) | 20.3 kH/s (167.6%) | 102.2 kH/s (842.3%) | 28.1 kH/s (231.7%) | 155.4 kH/s (1280.8%)
13000 - RAR5 (Iterations: 32799) | 10.2 kH/s (100.0%) | 39.6 kH/s (389.3%) | 24.5 kH/s (240.6%) | 93.2 kH/s (916.6%) | 30.2 kH/s (297.0%) | 118.7 kH/s (1167.8%)
6211 - TrueCrypt RIPEMD160 + XTS 512 bit (Iterations: 1999) | 66.8 kH/s (100.0%) | 292.4 kH/s (437.6%) | 177.3 kH/s (265.3%) | 669.9 kH/s (1002.5%) | 232.1 kH/s (347.3%) | 822.4 kH/s (1230.8%)
13400 - KeePass 1 (AES/Twofish) and KeePass 2 (AES) (Iterations: 24569) | 10.9 kH/s (100.0%) | 67.0 kH/s (617.1%) | 19.0 kH/s (174.8%) | 111.2 kH/s (1024.8%) | 27.3 kH/s (251.2%) | 139.0 kH/s (1281.0%)
6800 - LastPass + LastPass sniffed (Iterations: 499) | 651.9 kH/s (100.0%) | 2.5 MH/s (390.4%) | 1.5 MH/s (232.2%) | 6.0 MH/s (914.8%) | 2.0 MH/s (304.7%) | 7.6 MH/s (1160.0%)
11300 - Bitcoin/Litecoin wallet.dat (Iterations: 200459) | 1.3 kH/s (100.0%) | 5.0 kH/s (389.9%) | 3.1 kH/s (241.5%) | 11.4 kH/s (892.3%) | 4.1 kH/s (325.3%) | 14.4 kH/s (1129.2%)

Value Results

Believe it or not, speed doesn’t tell the whole story, unless you’re able to bill the cost directly to your customer – in that case, go straight for that 16-A100 instance. :)

You’re probably more interested in value however – that is, hashes per dollar. This is computed from the speed and the price per hour, giving a hashes-per-dollar figure. For each card, I computed the median relative performance across all of the hashes in the default hashcat benchmark. I then divided that performance by the price per hour, and normalized these values again (a quick sketch of this arithmetic follows the table below).

Relative Value

Relative value is the mean speed per cost, in terms of the K80.

Card | Performance | Price (per hour) | Value
nvidia-tesla-k80 | 100.0 | $0.45 | 1.00
nvidia-tesla-p100 | 519.0 | $1.46 | 1.60
nvidia-tesla-p4 | 286.6 | $0.60 | 2.15
nvidia-tesla-v100 | 1002.5 | $2.48 | 1.82
nvidia-tesla-t4 | 356.7 | $0.35 | 4.59
nvidia-tesla-a100 | 1341.5 | $2.93 | 2.06
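For the curious, here is the arithmetic behind the Value column as a small sketch, using the performance figures and per-hour prices from the table above:

#include <stdio.h>

struct gpu {
    const char *name;
    double perf;   /* median relative performance, K80 = 100.0 */
    double price;  /* USD per hour at the time of the original post */
};

int main(void)
{
    const struct gpu gpus[] = {
        { "nvidia-tesla-k80",  100.0,  0.45 },
        { "nvidia-tesla-p100", 519.0,  1.46 },
        { "nvidia-tesla-p4",   286.6,  0.60 },
        { "nvidia-tesla-v100", 1002.5, 2.48 },
        { "nvidia-tesla-t4",   356.7,  0.35 },
        { "nvidia-tesla-a100", 1341.5, 2.93 },
    };
    /* Normalize performance-per-dollar so that the K80 comes out at 1.00. */
    const double baseline = gpus[0].perf / gpus[0].price;

    for (size_t i = 0; i < sizeof(gpus) / sizeof(gpus[0]); i++)
        printf("%-18s value %.2f\n", gpus[i].name,
               (gpus[i].perf / gpus[i].price) / baseline);
    return 0;
}

Running it reproduces the Value column above (1.00, 1.60, 2.15, 1.82, 4.59, 2.06).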

Though the NVIDIA T4 is nowhere near the fastest, it is the most efficient in terms of cost, primarily due to its very low $0.35/hr pricing at the time of writing. If you have a particular hash to focus on, you may want to do the math for that hash type, but the relative performance seems to follow the same trend. It’s actually a great value.

So maybe the next time you’re on an engagement and need to crack hashes, you’ll be able to figure out if the cloud is right for you.

on June 05, 2021 07:00 AM

June 01, 2021

Where “win” means becoming the universal way to get apps on Linux.

In short, I don't think either current iteration will. But why?

I started writing this a while ago, but Disabling snap Autorefresh reminded me to finish it. I also do not mean this as a "hit piece" against my former employer.

Here is a quick status of where we are:

Use case | Snaps | Flatpak
Desktop app | ☑️ | ☑️
Service/Server app | ☑️ | 🚫
Embedded | ☑️ | 🚫
Command Line apps | ☑️ | 🚫
Full independence option | 🚫 | ☑️
Build a complete desktop | 🚫 | ☑️
Controlling updates | 🚫 | ☑️

Desktop apps

Both Flatpaks and Snaps are pretty good at desktop apps. They share some bits and have some differences. Flatpak might have a slight edge because it's focused only on Desktop apps, but for the most part it's a wash.

Service/Server / Embedded / Command Line apps

Flatpak doesn't target these at all. Full stop.

Snap wins these without competition from Flatpak but this does show a security difference. sudo snap install xyz will just install it - it won't ask you if you think it's a service, desktop app or some combination (or prompt you for permissions like Flatpak does).

For embedded use with Ubuntu Core, strict confinement is required, which is a plus (and which, as you read correctly, means something less than strict confinement is allowed everywhere else).

Aside: As Fedora SilverBlue and Endless OS both only let you install Flatpaks, they also come with the container based Toolbox to make it possible to run other apps.

Full independence option / Build a complete desktop

Snaps

You can not go and (re)build your own distro and use upstream snapd.

Snaps generally run against one LTS “core” snap that is older than your Ubuntu desktop release. For example, core18 is installed by default on Ubuntu 21.04. The embedded Ubuntu Core option is the only one using just a single version of the Ubuntu core code.

Flatpak

With Flatpak you can choose to use one of many public bases like the Freedesktop platform or the Gnome platform. You can also build your own Platform like Fedora Silverblue does. All of the default flatpaks that Silverblue comes with are derived from the “regular” Fedora of the same version. You can of course add other sources too. Example: the Gnome Calculator from Silverblue is built from the Fedora RPMs and depends on the org.fedoraproject.Platform built from that same version of Fedora.

Aside: I should note that to do that you need OSTree to make the Platforms.

Controlling updates

Flatpak itself does not do any updates automatically. It relies on your software management application (Gnome Software) to do it. It also lets apps check for their own updates and ask to update themselves.

Snaps are more complicated, but why? Let's look at the Ubuntu IoT and device services that Canonical sells:

Dedicated app store ...complete control of application versions, updates and controlled rollouts for $15,000 per year.

Enterprise app store ...control snap updates and upgrades. Ensure that all device traffic goes through an audited communications channel and determine the precise versions of snaps used inside the business.

Control of the update process is one of the ways Canonical is trying to make money. I don't believe anyone has ever told me explicitly that this is why Snap updates work the way they do; it just makes sense given the business considerations.

So who is going to "win"?

One of them might go away, but neither is set to become the universal way to get apps on Linux, at least not today.

It could change starting with something like:

  • Flatpak (or something like it) evolves to support command line or other apps.
  • A snap based Ubuntu desktop takes off and becomes the default Ubuntu.

Either one on its own isn't going to get it all the way there, but would be needed to prove what the technology can do. In both cases, the underlying confinement technology is being improved for all.

Comments

Maybe I missed something? Feel free to make a PR to add comments!

on June 01, 2021 08:00 PM

May 28, 2021

rout is out

James Hunt

I’ve just released the rout tool I mentioned in my last blog post about command-line parsing semantics.

rout is a simple tool, written in rust, that produces unicode utf-8 output in interesting ways. It uses the minimal command-line parsing crate ap. It also uses a fancy pest parser for interpreting escape sequences and range syntax.

Either grab the source, or install the crate:

$ cargo install rout

Full details (with lots of examples! ;) are on both sites:

on May 28, 2021 07:54 PM

It took a while, but now Launchpad finally allows users to edit their comments on questions, bug reports and merge proposal pages.

The first request for this feature dates back to 2007. Since then, Launchpad has grown a lot in terms of new features, and other priorities took precedence over that request, even though it remained more than valid. More recently, we managed to bump the priority of this feature, and now we have it: users are now allowed to edit their comments on Launchpad answers, bugs and merge proposals!

This has been available in the API for a few days already, but today we finally released the fresh new pencil icon in the top-right corner of your messages. Once you click it, the message is turned into a small form that allows you to edit your message content.

For messages that were edited before, it is possible to see old versions of that edited message by clicking the “last edit …” link, also at the top of the message.

In case you introduce sensitive information by mistake in your comment and need to remove it from the message history after editing it, you can always use the API to do so. We plan to add a remove button to the message’s revision history UI soon, to make this work easier.

The Launchpad team is proud of this new feature, and we hope that it will be useful for everyone! Let us know if you have any feedback!

on May 28, 2021 06:26 PM

May 26, 2021

Hello all,

As many of you might have heard, the freenode IRC network changed management a couple of days ago, in what I personally identify as a hostile takeover. As part of that, the Ubuntu IRC Council, supported by the Community Council, published a resolution suggesting the move to Libera Chat. The former IRC Council and their successors immediately started working on moving our channels, users, and tooling over to Libera Chat.

As of around 3:00 UTC today, freenode’s new management believed that our channel topics and messaging pointing users in the right direction were outside of their policy, and rather than consulting with the IRC Council, they performed yet another hostile takeover, this time of the Ubuntu namespaces, including flavors, as well as other spaces from projects who were also using freenode for communication.

In order to provide you with the best experience on IRC, Ubuntu is now officially moving to Libera Chat. You will be able to find the same channels, the same people, and the same tools that you are used to. In the event that you see something is not quite right, please, don’t hesitate to reach out to our Ubuntu IRC Team, on #ubuntu-irc.

While this is a bump on the road, we hope that it will give our community some fresh air to revitalize, and we will be able to rebuild what we had, but 10x better. I sincerely appreciate the IRC Council’s efforts in making this move a success.

Please join us at #ubuntu, on irc.libera.chat:6697 (TLS).

On behalf of the Ubuntu Community Council,

Jose Antonio Rey

on May 26, 2021 04:47 AM

May 24, 2021

Following the Ubuntu IRC Council resolution, Lubuntu will be moving all of the Lubuntu IRC channels to LiberaChat as well. Some of the channels have already moved at the time of this announcement and the others will follow shortly. We are also working on updating our links to reflect the change. Telegram to IRC bridge offline. As a result of […]

The post Lubuntu IRC channels are moving networks! first appeared on Lubuntu.

on May 24, 2021 02:21 AM

May 21, 2021

C has the useful feature of allowing adjacent string literals to be automatically concatenated. This is described in K&R "The C programming language", 2nd edition, page 194, section A2.6 "String Literals":

"Adjacent string literals are concatenated into a single string."

Unfortunately over the years I've seen several occasions where this useful feature has led to bugs not being detected, for example, the following code:
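(Illustrative sketch; apart from "Does Not Compute", the message strings here are hypothetical.)

#include <stdio.h>

static const char *error_msgs[] = {
        "No Error",
        "Out Of Memory",
        "Invalid Argument",
        "File Not Found",
        "Permission Denied",
        "Timed Out",
        "Does Not Compute"      /* missing comma: merges with the next literal */
        "Internal Error",
};

int main(void)
{
        /*
         * Eight strings were intended, but the missing comma leaves only
         * seven entries, so this read of error_msgs[7] goes out of bounds.
         */
        printf("%s\n", error_msgs[7]);
        return 0;
}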

 

A simple typo in the "Does Not Compute" error message ends up with the last two literal strings being "silently" concatenated, causing an array out of bounds read when error_msgs[7] is accessed.

This can also bite when a new literal string is added to the end of the array.  The previous string requires a comma to be added when a new string is added to the array, and sometimes this is overlooked.  An example of this is in ACPICA commit 81eb9c383e6dee0f1b6620e91e5c3dbb48234831  - fortunately static analysis detected this and it has been fixed with commit 3f8c79fc22fd64b739f51268654a6783a874520e

The concerning issue is that this useful string concatenation feature can produce hazardous outcomes with coding mistakes that are simple to make but hard to notice.

 

on May 21, 2021 10:58 AM

May 19, 2021

To all the people I have interacted with in freenode, and to all the contributors I have worked with over there:

I recently celebrated my 10-year anniversary of having an account in freenode. I have a lot of fond memories, met a lot of amazing people in that period of time.

Some time ago, the former head of freenode staff sold `freenode ltd` (a holding company) to a third party, Andrew Lee[1], under terms that have not been disclosed to the staff body. Mr Lee at the time had promised to never exercise any operational control over freenode.

In the past few weeks, this has changed[2][3], and the existence of a legal threat to freenode has become apparent. We cannot know the substance of this legal threat as it contains some kind of gag order preventing its broader discussion within staff.

As a result, Mr Lee now has operational control over the freenode IRC network. I cannot stand by such a (hostile?) corporate takeover of the freenode network, and I am resigning as a staff volunteer along with most other freenode staff. We simply do not feel that the network now remains independent after two heads of staff appear to have been compelled to make changes to our git repo for the website[4].

Where to now?

We are founding a new network with the same goals and ambitions: libera.chat.

You can connect to the new network at `irc.libera.chat`, ssl port 6697 (and the usual clearnet port).

We’re really sorry it’s had to come to this, and hope that you’re willing to work with us to make libera a success, independent from outside control.

What about Ubuntu?

Whether Ubuntu decides to stay on freenode or move to libera would be a decision of the Ubuntu IRC Council. Please refer to them with any questions you might have. While I am a part of the Community Council, the IRC Council operates independently, and I will personally leave the final decision to them.

Footnotes

[1]: https://find-and-update.company-information.service.gov.uk/company/10308021/officers

[2]: A blogpost has been removed without explanation: https://freenode.net/news/freenode-reorg (via the wayback machine)

[3]: The freenode testnet, for experimental deployment and testing of new server features was shutdown on Friday 30th April, for reasons that have not been disclosed to us.

[4]: Unexplained change to shells.com as our sponsor: web-7.0/pull/489, followed by a resignation: web-7.0/pull/493

on May 19, 2021 04:49 PM

May 15, 2021

Are you using Kubuntu 21.04 Hirsute Hippo, our current Stable release? Or are you already running our development builds of the upcoming 21.10 Impish Indri?

We currently have Plasma 5.21.90 (Plasma 5.22 Beta)  available in our Beta PPA for Kubuntu 21.04, and 21.10 development series.

However this is a beta release, and we should re-iterate the disclaimer from the upstream release announcement:

DISCLAIMER: This is beta software and is released for testing purposes. You are advised to NOT use Plasma 5.22 Beta in a production environment or as your daily desktop. If you do install Plasma 5.22 Beta, you must be prepared to encounter (and report to the creators) bugs that may interfere with your day-to-day use of your computer.

https://kde.org/announcements/plasma/5/5.21.90

If you are prepared to test, then…..

Add the beta PPA and then upgrade:

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt full-upgrade -y

Then reboot.

In case of issues, testers should be prepared to use ppa-purge to remove the PPA and revert/downgrade packages.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use launchpad.net to post testing feedback to the Kubuntu team as a bug, or give feedback on IRC [1], or mailing lists [2].
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the release announcement and changelog.

[Test Case]
* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.21?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.
* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing.
– Test the ‘fixed’ functionality or ‘new’ feature.

Testing may involve some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu and the KDE community as a whole.

Thanks!

Please stop by the Kubuntu-devel IRC channel if you need clarification of any of the steps to follow.

[1] – irc://irc.freenode.net/kubuntu-devel
[2] – https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on May 15, 2021 09:59 AM

May 11, 2021

Do you know a person, project or organisation doing great work in open tech in the UK? We want to hear about it. We are looking for nominations for people and projects working on open source software, hardware and data. We are looking for companies or organisations working in fintech with open source, and for those helping achieve the objectives of any of the United Nations Sustainable Development Goals. Nominations are open for projects, organisations and individuals that demonstrate outstanding contribution and impact for the diversity and inclusion ecosystem. This includes solving unique challenges, emphasising transparency of opportunities, mentorship, coaching and nurturing the creation of diverse, inclusive and neurodiverse communities. And we are looking for individuals you admire, whether under 25 or of any age.

Self nominations are welcome and encouraged. You can also nominate in more than one category.

Nominations may be submitted until 11.59pm on 13 June 2021.

Awards Event 11 November 2021.

Those categories again:

Hardware – sponsored by The Stack
Software – sponsored by GitLab
Data
Financial Services – sponsored by FINOS
Sustainability – sponsored by Centre for Net Zero
Belonging Network – sponsored by Osmii
Young Person (under 25) – sponsored by JetStack
Individual – sponsored by Open Source Connections

Read more and find the nomination form on the OpenUK website.

Winners of Awards 2020, First edition

Young Person • Josh Lowe
Individual • Liz Rice
Financial Services and Fintech in Open Source • Parity
Open Data • National Library of Wales
Open Hardware • LowRISK
Open Source Software • HospitalRun

on May 11, 2021 03:53 PM

May 10, 2021

The Big Iron Hippo

Elizabeth K. Joseph

It’s been about a year since I last wrote about an Ubuntu release on IBM Z (colloquially known as “mainframes” and nicknamed “Big Iron”). In my first year at IBM my focus really was Linux on Z, along with other open source software like KVM and how that provides support for common tools via libvirt to make management of VMs on IBM Z almost trivial for most Linux folks. Last year I was able to start digging a little into the more traditional systems for IBM Z: z/OS and z/VM. While I’m no expert, by far, I have obtained a glimpse into just how powerful these operating systems are, and it’s impressive.

This year, with this extra background, I’m coming back with a hyper focus on Linux, and that’s making me appreciate the advancements with every Linux kernel and distribution release. Engineers at IBM, SUSE, Red Hat, and Canonical have made an investment in IBM Z, and are backing it up with kernel and other support for IBM Z hardware.

So it’s always exciting to see the Ubuntu release blog post from Frank Heimes over at Canonical! And the one for Hirsute Hippo is no exception: The ‘Hippo’ is out in the wild – Ubuntu 21.04 got released!

Several updates to the kernel! A great, continued focus on virtualization and containers! I can already see that the next LTS, coming out in the spring of 2022, is going to be a really impressive one for Ubuntu on IBM Z and LinuxONE.

on May 10, 2021 08:07 PM

Here are some uploads for April.

2021-04-06: Upload package bundlewrap (4.7.1-1) to Debian unstable.

2021-04-06: Upload package calamares (3.2.39.2-1) to Debian experimental.

2021-04-06: Upload package flask-caching (1.10.1-1) to Debian unstable.

2021-04-06: Upload package xabacus (8.3.5-1) to Debian unstable.

2021-04-06: Upload package python-aniso8601 (9.0.1-1) to Debian experimental.

2021-04-07: Upload package gdisk (1.0.7-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-disconnect-wifi (28-1) to Debian unstable.

2021-04-07: Upload package gnome-shell-extension-draw-on-your-screen (11-1) to Debian unstable.

2021-04-12: Upload package s-tui (1.1.1-1) to Debian experimental.

2021-04-12: Upload package speedtest-cli (2.1.3-1) to Debian unstable.

2021-04-19: Sponsor package bitwise (0.42-1) to Debian unstable (E-mail request).

2021-04-19: Upload package speedtest-cli (2.1.3-2) to Debian unstable.

2021-04-23: Upload package speedtest-cli (2.0.2-1+deb10u2) to Debian buster (Closes: #986637)

on May 10, 2021 03:01 PM

Writing software is similar to translating from one language to another. Specifically, it is similar to translating from your native language to some other language. You are translating to that other language so that you can help those others do some task for you. You might not understand this other language very well, and some concepts might be difficult to express in the other language. You are doing your best though when translating, but as we know, some things can get lost in translation.

On software testing

When writing software, some things do get lost in translation. You know what your software should do, but you need to express your needs in the particular programming language that you are using. Even small pieces of software will have some problems, which are called software defects. There is a whole field in computer science called software testing, whose goal is to find such software defects early so that they get fixed before the software gets released and reaches the market. When you buy a software package, it has gone through intensive software testing. Because if a customer uses the software package and it then crashes or malfunctions, it reflects really poorly. They might even return the software and demand their money back!

In the field of software testing, you try to identify actions that a typical customer is likely to perform and that may crash the software. If you could, you would try to find all possible software defects and have them fixed. But in reality, identifying all software defects is not possible. This is a hard fact and a known issue in software testing; no matter how hard you try, there will still be some more software defects.

This post is about security though, not about software testing. What gives? Well, a software defect can make the software malfunction. This malfunctioning can make the software perform an action that was not intended by the software developers. It can make the software do what some attacker wants it to do. Farfetched? Not at all. This is what a big part of computer security works on.

Security fuzzing

When security researchers perform software testing with an aim of finding software defects, we say that they are performing security fuzzing, or just fuzzing. Therefore, fuzzing is similar to software testing, but with the focus on identifying ways to make the software malfunction in a really bad way.

Security researchers find security vulnerabilities, ways to break into a computer system. This means that fuzzing is the first half of the job to find security vulnerabilities. The second part is to analyse each software defect and try to figure out, if possible, a way to break into the system. In this post we are only focusing on the first part of the job.

Defects and vulnerabilities

Are all software defects potential candidates for a security vulnerability? Let’s take the example of a text editor. If you are using the text editor only to edit your own documents, and never open downloaded text documents, then there is no chance for a security vulnerability. An attacker would have no way to influence this text editor; no input to the text editor would be exposed to the attacker.

A text editor.

However, most computers are connected to the Internet. And most operating systems, whether Windows, OS/X or a Linux distribution, are pre-configured to open text documents with a text editor. If you are browsing the Internet, you may find an interesting text document and decide to download and open it on your computer. Or you may receive an email with a text document attached. In both cases, it is the document file that is fully under the control of an attacker, meaning the attacker can modify any aspect of that file. A Word document is a ZIP file that contains several individual files. There are opportunities to modify any of those individual files, zip the result back into a .doc file and try to open it. If you get a crash, you have successfully managed to fuzz the application, in a manual way. If you manage to crash the application simply by editing a document in the course of your own work, then you are a natural at security fuzzing. Just keep a copy of that exact crashing document, because it could be gold to a security researcher.

If you rename a .doc file and change the extension to .zip, you can open it with a ZIP file manager and see the individual files inside it.

Artificial intelligence

If there is a complex task that a person could do but it is tedious and expensive, then you can either use a computer and make it work just like a person would, or break down the task into a simpler but repetitive form so that it is suitable for a computer. The latter is quite enticing because computing power is way cheaper and more abundant than employing an expert.

Suppose you want to recognize apples from digital images. You can either employ an apple expert to identify if there is an apple in a photograph (any variety of apple). Or, get an expert to share the domain knowledge of apples and have them help in creating software that understands all shapes and colors of apples. Or, obtain several thousands of photos of different apples and train an AI system to detect apples in new images.

Employing a domain expert to manually identify the apples does not scale. Developing software using domain knowledge does not scale easily to, let’s say, other fruits. And developing this domain-specific software is also expensive compared to training an AI system to detect the specific objects.

Similarly with security fuzzing. A security expert working manually does not scale, and the process is expensive to perform repeatedly. Developing software that acts exactly like a security expert is also expensive, since the software would have to capture the whole domain knowledge of software security. The next best option is to break the problem into smaller tasks and use primarily cheap computer power.

Advanced Fuzzing League++

And that leads us to the Advanced Fuzzing League++ (afl++). It is security fuzzing software that requires lots of computer power: it runs the software that we are testing many times, with slightly different inputs each time, and checks whether any of the attempts has managed to lead to a software crash.
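As a toy illustration of that loop (this is my own minimal example, not something from the afl++ documentation), here is a target that crashes on one specific input, which afl++ will eventually stumble on by mutating whatever seed inputs you give it:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/*
 * Toy fuzz target: reads a file named on the command line and
 * deliberately crashes when the file starts with "FUZZ".
 *
 * Typical afl++ workflow (file names are just examples):
 *   afl-clang-fast -o target target.c      # instrumented build
 *   afl-fuzz -i seeds/ -o findings/ -- ./target @@
 */
int main(int argc, char **argv)
{
    char buf[16] = { 0 };
    FILE *f;

    if (argc < 2)
        return 1;
    f = fopen(argv[1], "rb");
    if (!f)
        return 1;
    fread(buf, 1, sizeof(buf) - 1, f);
    fclose(f);

    if (strncmp(buf, "FUZZ", 4) == 0)
        abort();   /* the "software defect" the fuzzer is hunting for */

    return 0;
}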

afl++ does security fuzzing, and that is just the first part of the security work. A security researcher will take the results of the fuzzing (i.e. the list of crash reports) and manually check whether any of them can be exploited so that an attacker can make the software let them in.

Discussion

Up to now, afl++ has been developed so that it can use as much computer power as possible. There are many ways to parallelise the work across multiple computers.

afl++ uses software instrumentation. When you have access to the source code, you can recompile it in a special way so that when afl++ does the fuzzing, it will know whether a new input causes execution to reach new, unexplored areas of the executable. This helps afl++ expand coverage across the whole program.

afl++ does not automatically recognize the different inputs to a software. You have to guide it whether the input is from the command-line, from the network, or elsewhere.

afl++ can be fine-tuned to perform even better. Running an executable repeatedly from scratch is not as performant as just running the same main function of the executable repeatedly.
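This is what afl++’s persistent mode addresses: when building with the afl++ instrumented compilers, you can wrap the interesting code in a loop so one process handles many inputs. A rough sketch, assuming an afl-clang-fast build (the parsing function here is just a stand-in):

#include <unistd.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in for whatever parsing code you actually want to fuzz. */
static void process_input(const char *data, ssize_t len)
{
    if (len >= 4 && memcmp(data, "FUZZ", 4) == 0)
        abort();
}

int main(void)
{
    static char buf[4096];

#ifdef __AFL_LOOP
    while (__AFL_LOOP(10000))   /* persistent mode: reuse one process */
#endif
    {
        ssize_t len = read(0, buf, sizeof(buf));
        if (len > 0)
            process_input(buf, len);
    }
    return 0;
}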

afl++ can be used whether or not you have the source code of the software.

afl++ can fuzz binaries from a different architecture than your fuzzing server. It uses Qemu for hardware virtualization and can also use CPU emulation through unicorn.

afl++ has captured the mind share on security fuzzing and there are more and more new efforts to expand support to different things. For example, there is support for Frida (dynamic instrumentation).

afl++ has a steep learning curve. Good introductory tutorials are hard to find.

on May 10, 2021 01:09 PM

May 04, 2021

NSF CAREER Award

Benjamin Mako Hill

In exciting professional news, it was recently announced that I got an National Science Foundation CAREER award! The CAREER is the US NSF’s most prestigious award for early-career faculty. In addition to the recognition, the award involves a bunch of money for me to put toward my research over the next 5 years. The Department of Communication at the University of Washington has put up a very nice web page announcing the thing. It’s all very exciting and a huge honor. I’m very humbled.

The grant will support a bunch of new research to develop and test a theory about the relationship between governance and online community lifecycles. If you’ve been reading this blog for a while, you’ll know that I’ve been involved in a bunch of research to describe how peer production communities tend to follow common patterns of growth and decline, as well as studies that show that many open communities become increasingly closed in ways that deter lots of the kinds of contributions that made the communities successful in the first place.

Over the last few years, I’ve worked with Aaron Shaw to develop the outlines of an explanation for why many communities become increasingly closed over time in ways that hurt their ability to integrate contributions from newcomers. Over the course of the work on the CAREER, I’ll be continuing that project with Aaron, and I’ll also be working to test that explanation empirically and to develop new strategies for what online communities can do as a result.

In addition to supporting research, the grant will support a bunch of new outreach and community building within the Community Data Science Collective. In particular, I’m planning to use the grant to do a better job of building relationships with community participants, community managers, and others on the platforms we study. I’m also hoping to use the resources to help the CDSC do a better job of sharing our stuff out in ways that are useful, as well as doing a better job of listening and learning from the communities that our research seeks to inform.

There are many to thank. The proposed work is the direct result of the work I did at the Center for Advanced Studies in the Behavioral Sciences at Stanford, where I got to spend the 2018-2019 academic year in Claude Shannon’s old office, talking through these ideas with an incredible range of other scholars over lunch every day. It’s also the product of years of conversations with Aaron Shaw and Yochai Benkler. The proposal itself reflects the excellent work of the whole CDSC, who did the work that made the award possible and provided me with detailed feedback on the proposal itself.

on May 04, 2021 02:29 AM