September 18, 2020

As you may have noticed, the Ubuntu Community Council has been vacant for a while. Happily, a decision has recently been made to repopulate it; hence this official call for nominations.

We will be filling all seven seats this term, with terms lasting two years. To be eligible, a nominee must be an Ubuntu Member. Ideally, they should have a vast understanding of the Ubuntu community, be well-organized, and be a natural leader.

The work of the Community Council, as it stands, is to uphold the Code of Conduct throughout the community, to ensure that all the other leadership boards and councils are running smoothly, and to look after the general health of the community, which includes not only supporting contributors but also stepping in for dispute resolution as needed.

Historically, there have been two meetings per month, so nominees should be willing to commit to at least that much time. Additionally, other communication, most often by email, will happen as needs arise. The input of the entire Council is essential for swift and appropriate action, so participation in these conversations is expected.

As you might notice from Mark Shuttleworth’s post, there is a greater vision for the structure of the Ubuntu community, so this term could be an exciting time, perhaps with vast and sweeping changes. Nominees would therefore be wise to keep an open mind about what is to come.

To nominate someone (including yourself), send the name and Launchpad ID of the nominee to community-council [AT] Nominations will be accepted for a period of two weeks until 29 September 2020 11:59 UTC.

Once the nominations are collected, Mark Shuttleworth will shortlist them and an election will take place using the Condorcet Internet Voting Service. All Ubuntu Members are eligible to vote in this election.

If you have any other questions, feel free to post something in the Ubuntu Discourse #community-council category so all may benefit from the answer.

Thanks in advance to all who participate, and for your desire to make Ubuntu better!

on September 18, 2020 07:32 PM

If you’re a snap developer, you know that snap development is terribly easy. Or rather complex and difficult. Depending on your application code and requirements, it can take a lot of effort putting together the snapcraft.yaml file from which you will build your snap. One of our goals is to make snap development practically easier and more predictable for everyone. To that end, we created a framework of snap extensions, designed to make the snap journey simpler and more fun.

In a nutshell, extensions abstract away a range of common code declarations you would normally put in your snapcraft.yaml file. They help developers avoid repetitive tasks, reduce the knowledge barrier needed to successfully build snaps, offer a common template for application builds, and most importantly, save time and effort. But what if you want – or need – to know what is behind the abstraction?

Expand your horizons, expand your extensions

Let’s examine a real-life example. Just a few weeks ago, Jonathan Riddell and I ran a workshop on building snaps during KDE Akademy. We demonstrated with KBlocks, whose snapcraft.yaml file references the kde-neon extension, which makes the latest Qt5 and KDE Frameworks libraries available to KBlocks at runtime. The relevant declaration is as follows:

adopt-info: kblocks
apps:
  kblocks:
    common-id: org.kde.kblocks.desktop
    extensions:
      - kde-neon

But we need to see what happens behind the scenes.

You can expand an extension declaration in any snapcraft.yaml file by running:

snapcraft expand-extensions

This command will look for the snapcraft.yaml file in the current directory or in the snap subdirectory, and print the expanded version of the YAML to standard output. You can redirect the output into a separate file and then compare the two to see exactly what the extension adds.
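As a sketch of that workflow (the file names are just examples, and the two here-docs below merely simulate an original and an expanded YAML so the compare step can be demonstrated on its own; with a real project you would run snapcraft expand-extensions > expanded.yaml instead):

```shell
# Simulate an original snapcraft.yaml fragment and its expanded
# counterpart. With a real project, replace the second here-doc with:
#   snapcraft expand-extensions > expanded.yaml
cat > original.yaml <<'EOF'
apps:
  kblocks:
    extensions:
      - kde-neon
EOF
cat > expanded.yaml <<'EOF'
apps:
  kblocks:
    plugs:
      - desktop
      - wayland
EOF
# diff exits with status 1 when the files differ, so mask the status
# in case this runs under `set -e`.
out=$(diff -u original.yaml expanded.yaml || true)
printf '%s\n' "$out"
rm -f original.yaml expanded.yaml
```

The unified diff makes it obvious which lines the extension removed (the `extensions:` stanza) and which it injected (plugs, environment, and so on).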

The diff is greener on the other side

The expand-extensions command does quite a bit. First, we can see that the declaration of the extension for the kblocks app in the YAML has been removed. Then, we can also see that the expanded file has several additional plugs declared, namely desktop, desktop-legacy, wayland, and x11. Another important element is the desktop-launch command, which contains a number of common environment configurations to make sure the application runs correctly.

Original YAML on the left, the expanded version without the extension on the right. For brevity, only part of the complete snapcraft.yaml file is shown.

The biggest difference is an entirely new block of code at the bottom of the YAML – not visible in the screenshot above, but you can just grab the KBlocks snapcraft.yaml file we used in the workshop and try for yourself:

  • It compiles the kde-neon-extension part using a pre-existing source available in the snapcraft build environment.
  • It defines several plugs – icon and sound themes as well as the KDE Frameworks libraries available in the existing gtk-common-themes and kde-frameworks-5-core18 content snaps. This way, you do not need to worry about or manually satisfy the various components required to make your application behave and look as intended. All of these dependencies are satisfied automagically.
  • It defines the runtime environment that ensures the snap will function correctly.

Pieced back together, the expanded part and plug definitions look roughly like this (the plug names follow the kde-neon extension’s conventions):

parts:
  kde-neon-extension:
    build-packages:
      - g++
    plugin: make
    source: $SNAPCRAFT_EXTENSIONS_DIR/desktop
    source-subdir: kde-neon

plugs:
  icon-themes:
    interface: content
    default-provider: gtk-common-themes
    target: $SNAP/data-dir/icons
  kf5-frameworks:
    interface: content
    content: kde-frameworks-5-core18-all
    default-provider: kde-frameworks-5-core18
    target: $SNAP/kf5
  sound-themes:
    interface: content
    default-provider: gtk-common-themes
    target: $SNAP/data-dir/sounds

As you can see from the example above, extensions not only save time, they also ensure your snaps use a consistent structure, containing all the required assets to look and run correctly. You also end up with smaller snaps, because the common components and libraries are already included in the content snaps. This can be quite useful, especially if you develop multiple applications that share common code, as is the case with the KDE software.

What’s next?

Now, you may have a use case that only requires a portion of the code shown above, or you may simply want to understand how extensions work behind the scenes. You can expand existing YAML files and then use the relevant snippets as you see fit. Overall, the use of extensions, in full or in part, should help you make snaps with more consistent theming, smaller size, and more predictable behavior, as you baseline against a well-tested set of packages used across a large number of applications.


Extensions and the associated expand-extensions command allow you to make the best use of snapcraft.yaml files, whether to learn new things or to adapt existing code for your own needs. If you already have snaps published in the Snap Store, you may want to improve their code. Indeed, we encourage developers to use extensions, as they help avoid various papercut issues in the look and feel of applications, and reduce the development burden by letting you maintain a smaller, more compact, more precise code base.

If you have any questions or suggestions on this topic, please join our forum for a discussion. In particular, if you can think of practical use cases that would warrant the creation of new extensions, we’d definitely like to hear from you.

Photo by Charlie Seaman on Unsplash.

on September 18, 2020 11:11 AM

September 17, 2020

Ep 108 – Sim podemos

Podcast Ubuntu Portugal

In back-to-school week, one more episode of the best podcast from Portugal about Ubuntu, free software and other stuff. Raspberry Pi, lunchboxes and PineTabs are all part of this fantastic menu.

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think it’s worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want.

If you’re interested in other bundles not listed in the notes, use the link and you’ll also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço, the Senhor Podcast.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on September 17, 2020 09:45 PM

S13E26 – The evil eye

Ubuntu Podcast from the UK LoCo

This week we’ve been playing with arcade boards and finishing DIY in the kitchen. We discuss if old technology is more fun than new technology, bring you a command line love and go over all your wonderful feedback.

It’s Season 13 Episode 26 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
  • We discuss whether old technology is inherently more fun than new technology
  • We share a Command Line Lurve:
opusenc in.wav --downmix-mono --bitrate 6 --cvbr --framesize 60 out.opus

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 17, 2020 02:00 PM

September 16, 2020

The web team here at Canonical run two-week iterations. Here are some of the highlights of our completed work from this iteration.

Web squad

Our Web Squad develops and maintains most of Canonical’s promotional sites.

CloudNative Days Tokyo 2020


This year has seen a lot of physical events move to a virtual format. However, most virtual booths bring low engagement with the audience, limit the ability to do demos and showcase interactive content, and have very low conversion rates.

Visit the CloudNative Days Tokyo 2020

New homepage for

We released a refreshed homepage, providing much more outreach to different audiences.

Visit the new homepage


EdgeX page on

A new page with information about the exciting EdgeX ecosystem, allowing you to sign up for our new EdgeX newsletter.

Visit the EdgeX page 



Brand

The Brand team develops our design strategy and creates the look and feel for the company across many touchpoints, from web and documents to exhibitions, logos and video.

Brand Hierarchy

We collated all of the work we have done so far, analysed the content and decided on the outcomes and next steps to complete the system and roll it out across all areas of the company.


Charmed Kubeflow

Illustrations and web design for the updated Kubeflow page.


Document Hierarchy testing

We took the templates developed in the last iteration and inserted real content to make sure that the styles and typography are robust enough to roll out.


Livechat branding

Initial colour and design work to brand the Eye catcher and Livechat instance on



MAAS

The MAAS squad develops the UI for the MAAS project.

MAAS new CLI iteration 2

Last week we tested the CLI prototype with six MAAS CLI users to collect feedback on how to improve the current CLI experience.

The first thing we tested was the concept of the initial help screen, or what our testers mostly referred to as the “cheat sheet”.

When a user types “maas” or “maas --help”, they will see a cheat sheet that serves as a guideline for basic operations. In the first prototype, we introduced the concept of a primary object (the machine), so by default MAAS interprets an action as applying to machines when no object is indicated.


We received a lot of positive feedback on this concept, and our testers wanted to see which other objects are available in the CLI. So, to aid learnability, we added “Hints” to this page, indicating the current version of MAAS as well as other useful information.

One use case frequently mentioned during the tests was how to operate bulk actions across multiple MAASes or data centres.


Breaking down MAAS objects: in the current CLI, we have both singular and plural forms of objects to distinguish collection services vs individual services. In this version, we stripped out all plural objects and broke them down into different categories for readability. We’ve also added a link to the documentation page for further reading.

Highlight: 5 out of 6 users mentioned that they don’t like reading the documentation and would not bother to, because when they are working with the CLI they feel they have to start their search over again on the documentation page; it feels like a double search to get to the answer. So, to reduce that hassle, we attached the right documentation link to each relevant command’s help page.

In this prototype, we also introduced the concept of the primary profile, which is reflected in the current prompt. A user may log in to multiple profiles and set one of them as their primary profile. When a user wants to act on multiple profiles, such as deploying 10 identical machines across 3 MAAS profiles, they just need to apply a profile flag to their deployment action.


The user will always see their current profile when they take an action. This helps prevent mistakes and keeps users aware of their current environment.


To enhance learnability, we also tried to break the help pages down to be more meaningful. This is an example of the new help page. It shows a brief description of the action, its options, different ways to customise the output, examples, and related commands. For instance, “maas list” lists multiple or individual node descriptions in a compact format as an overview, whereas the related command “maas show” lets a user drill down into detailed information about a particular machine.

After this user test, our team agreed to make the default CLI experience compatible with our scripting users, because we have a separate MAAS UI that can serve the same purpose. However, if a user prefers to do things in MAAS using the CLI and expects a beginner’s experience, they can use the “--interactive” or “-i” flag to get an interactive prompt.


When a user passes -i to the CLI, they will be presented with an interactive prompt that asks for the information required for the deployment, such as OS, release, kernel, whether to set the machine up as a KVM host, and customised cloud-init user-data.

By default, “maas deploy” will throw an error, because a user needs to supply information about the machine, or the characteristics of a collection of machines, they wish to deploy.


However, if an error occurs, the prompt will provide guidelines and suggestions including documentation to help our users perform an action correctly.

Since the default experience is tailored to scripting users, a user will get their prompt back immediately when they perform an action. They will also be informed that the machine(s) they deployed are running in the background, and they may use any of the listed suggestions to check on the status of the deployment.


If a user wishes to have their prompt block while an action is performed, they can use the --wait flag to see the waiting state of the CLI. This shows a broken-down view of the running task, a loading indication, and the elapsed time. Time matters to MAAS users: if something takes longer than usual, many would rather abort the process and reboot.


What’s next?

  1. We are still working to define a better way to help our users debug when an error occurs. So the error state is another important piece that we are focusing on. Our current goal is to enhance the learnability aspects of our CLI and help our users recover from their errors or mistakes. 
  2. Another interesting feature is partial matching of machine names. Currently, MAAS operates on machine IDs, which are not easy to fetch. We will allow users to operate on machine names instead, supplying either the full FQDN or just the hostname. The current prototype provides suggestions for misspelt names, and our next task is to allow partial string matching.

If you are reading about this and would like to add feedback, we are more than happy to find out how we can improve the CLI experience. Your feedback is a gift.


JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

Cross model relations

We posted our UX work on cross-model relations (CMR) to Discourse last iteration; thanks for the positive feedback from the community. The final visuals for the MVP have been completed: cross-model relations information is displayed in the Relations view, the Application panel and a dedicated Relation details panel.

Relations view
Apps panel with relation and offer details
Relation details panel

Charm Actions UX Exploration 

The team is exploring ways to provide actions that can be performed on a charm or its units, with better visual affordance and representation. We started by investigating the existing feature set of the Juju CLI, talking to charm writers and Juju users, and organising feedback and comments from Discourse. A workshop with prepared design proposals and ideas will be held next week.


Vanilla

The Vanilla squad designs and maintains the design system and the Vanilla framework library. They ensure a consistent style throughout our web assets.


Work continues on a unified notifications pattern for use across our websites and web apps. Our UX team has been working on defining problems, auditing our use of notifications and benchmarking against the best-in-class notification experiences.

We’re hoping to deliver a notification experience that’s useful, lightweight and doesn’t interrupt user flow as they accomplish their important tasks in our software. 

Accessibility audit and fixes

We’ve completed a component-wide accessibility audit, and we’ve started applying enhancements to many components and example pages, including navigation, breadcrumbs, the switch component, buttons and others.

Modular table React component

We’ve been continuing the work on creating a new modular table React component. After initial proof of concept, we created a basic component in our React components library and will continue to extend it with features used by the tables in our products.


Snapcraft and Charm Hub

The Snapcraft team works closely with the Store team to develop and maintain the Snap Store site and the upcoming Charm Hub site.



The snap team has published a forum category. You can come and join us there, giving feedback or engaging in design conversations with us directly.

Weekly active devices by architecture

The Snapcraft publisher pages can now show weekly active devices by architecture. A publisher can go to their snap’s metrics page and filter by architecture. Please provide us with feedback in the forum.


Your month in snaps email

The system that was providing the “Your month in snaps” emails has now been updated to run as a cron job in the Webteam’s Kubernetes. The repository for this cronjob can be found on GitHub.

Brand store admin

The brand store admin pages are being redesigned to become more useful, provide deeper insight into how the snaps published in them are used, and become more easily manageable. With the redesign they will also be moved to a new location.

Charm Hub 

Charm Hub Developer Experience

We have been iterating on the charm detail pages to accommodate content that helps charmers build new charms by reusing Python libraries from other charms, and to help consumers make integrations between charms in a more intuitive way. Along with the developer-experience improvements, we have made visual improvements to the rest of the detail pages.

Store page

Responsive filters and sorting have been added to the store page for a better experience when browsing charms and bundles.

With ♥ from Canonical web team.

on September 16, 2020 08:58 PM

Full Circle Weekly News #182

Full Circle Magazine

Ubuntu Beginning the Switch to NFTables in Groovy Gorilla
IP Fire 2.25 Core Update 148 Released with Location-based Firewall
Lenovo to Ship Fedora on its Thinkpads
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on September 16, 2020 05:47 PM

September 15, 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, 237.25 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

August was a regular LTS month once again, even though it was only our 2nd month with Stretch LTS.
At the end of August some of us participated in DebConf 20 online where we held our monthly team meeting. A video is available.
As of now this video is also the only public resource about the LTS survey we held in July, though a written summary is expected to be released soon.

The security tracker currently lists 56 packages with a known CVE and the dla-needed.txt file has 55 packages needing an update.

Thanks to our sponsors

Sponsors that recently joined are in bold.


on September 15, 2020 10:01 AM

Middle of September 2020 Notes

Stephen Michael Kellat

"A person is praised for his insight, but a warped mind leads to contempt." – Proverbs 12:8 (Common English Bible)

It has been a while since I have written anything that might appear on Planet Ubuntu. Specifically the last time was June 26th. That's not necessarily a good thing.

I have been busy writing. What have I been writing? I knocked out a new novelette in Visual Studio Code. The print version was typeset with LuaLaTeX using the novel class. It is a bit of a sci-fi police procedural. It is up on Amazon for people to acquire, though I note that Amazon's print-on-demand costs have gone up a wee bit since the start of the planet-wide coronavirus crisis.

I also have taken time to test the Groovy Gorilla ISOs for Xubuntu. I encourage everybody out there to visit the testing tracker to test disc images for Xubuntu and other flavours as we head towards the release of 20.10 next month. Every release needs as much testing as possible.

Based upon an article from The Register, it appears that the Community Council is being brought back to life. Nominations are being sought per a post on the main Discourse instance, but readers are reminded that you need to be a current member, either directly or indirectly, of the 609 Ubuntu Members shown on Launchpad. Those 609 persons are the electors for the Community Council, and the Community Council is drawn from that group. The size and composition of the Ubuntu Members group on Launchpad can change based upon published procedures and the initiative of individuals to be part of such changes.

I will highlight an article at Yahoo Finance concerning financial distress among the fifty states. Here in Ohio we are seemingly in the middle of the pack. In Ashtabula County we have plenty of good opportunities in the age of coronavirus, especially with our low transmission rates and very good access to medical facilities. With some investment in broadband backhaul, an encampment could be built for coders who do not want to stick with city living. There is enough empty commercial real estate available to provide opportunities for film and television production if the wildfires and coronavirus issues out in California are not brought under control any time soon.

As a closing note, a federal trial judge ruled that the current coronavirus response actions in Pennsylvania happen to be unconstitutional. A similar lawsuit is pending before a trial judge here in Ohio about coronavirus response actions in this particular state. This year has been abnormal in so many ways and this legal news is just another facet of the abnormality.

on September 15, 2020 01:49 AM

Disk usage

So, you wake up one day and find that one of your programs starts complaining about “No space left on device”:

The next thing (obviously, duh?) is to see what happened, so you fire up df -h /tmp, right?

$ df -h /tmp
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/zkvm1-root  6.2G  4.6G  1.3G  79% /

Well, yes, but no, ok? ok, ok!

Wait, what? There’s space there! How can it be? In all my years of experience (15+!), I’ve never seen such a thing!

Gods must be crazy!? or is it a 2020 thing?

I disagree with you

$ touch /tmp/test
touch: cannot touch ‘/tmp/test’: No space left on device

Wait, what? Not even a small empty file? OK...

After shamelessly googling/duckducking/searching, I ended up at a likely-looking answer, but alas, that was not my problem. Although… perhaps too many files? Let’s check with df -i this time:

$ df -i /tmp
Filesystem             Inodes  IUsed IFree IUse% Mounted on
/dev/mapper/zkvm1-root 417792 417792     0  100% /

Of course!

Because I’m super smart (I’m not), I now know where my problem is: too many files! Time to start fixing this…
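Before deleting anything blindly, it can help to find out which directory actually holds all those inodes. A minimal self-contained sketch (the paths are throwaway examples simulating a stuffed mail queue; du --inodes needs GNU coreutils 8.22 or newer):

```shell
# Create a throwaway directory full of small files, simulating a
# /var/spool/clientmqueue that has eaten all the inodes.
tmp=$(mktemp -d)
mkdir -p "$tmp/clientmqueue"
i=0
while [ "$i" -lt 100 ]; do
    : > "$tmp/clientmqueue/msg$i"
    i=$((i + 1))
done
# GNU du can count inodes per subtree instead of bytes; the biggest
# numbers at the bottom point at the inode hogs.
du --inodes "$tmp" | sort -n
nfiles=$(find "$tmp" -type f | wc -l | tr -d ' ')
echo "files created: $nfiles"
rm -rf "$tmp"
```

On a real system you would point du --inodes at / (with -x to stay on one filesystem) and read the largest entries.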

After a few minutes of deleting files, moving things around, and bind-mounting things, I landed on the actual root cause:

Tons of messages were waiting in /var/spool/clientmqueue to be processed. I decided to delete some; after all, I don’t care about this system’s mail… so find /var/spool/clientmqueue -type f -delete does the job, and allows me to have tab completion again! YAY!

However, because deleting files blindly is never a good solution, I went back to the link from above; the solution was quite simple:

$ systemctl enable --now sendmail

Smart idea!

After a while, the root user started to receive system mail, and I could delete the messages afterwards :)

In the end, a very simple solution (in my case!), rather than formatting, transferring all the data to a second drive, or playing with inode sizes and such…

Filesystem             Inodes IUsed  IFree IUse% Mounted on
/dev/mapper/zkvm1-root 417792 92955 324837   23% /

Et voilà, ma chérie! It's alive!

This is a very long post, just to say:

ext4’s “No space left on device” can mean two different things: you have run out of disk blocks, or you have run out of inodes to store your files.
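A quick way to check both failure modes at once (a sketch; /tmp is just an example mount point, and on some filesystems the inode column may show “-”):

```shell
# Report both block usage and inode usage for a filesystem, since
# "No space left on device" can be caused by either one running out.
path=/tmp
# -P forces the portable single-line output format; -i switches to inodes.
block_use=$(df -P "$path" | awk 'NR==2 {print $5}')
inode_use=$(df -Pi "$path" | awk 'NR==2 {print $5}')
echo "blocks: $block_use  inodes: $inode_use"
```

If the first number is far below 100% but the second is at 100%, you are in the too-many-files situation described above.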

on September 15, 2020 12:00 AM

September 14, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 648 for the week of September 6 – 12, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 14, 2020 10:57 PM

September 13, 2020

Wootbook / Tongfang laptop

Jonathan Carter

Old laptop

I’ve been meaning to get a new laptop for a while now. My ThinkPad X250 is now 5 years old and even though it’s still adequate in many ways, I tend to run out of memory especially when running a few virtual machines. It only has one memory slot, which I maxed out at 16GB shortly after I got it. Memory has been a problem in considering a new machine. Most new laptops have soldered RAM and local configurations tend to ship with 8GB RAM. Getting a new machine with only a slightly better CPU and even just the same amount of RAM as what I have in the X250 seems a bit wasteful. I was eyeing the Lenovo X13 because it’s a super portable that can take up to 32GB of RAM, and it ships with an AMD Ryzen 4000 series chip which has great performance. With Lenovo’s discount for Debian Developers it became even more attractive. Unfortunately that’s in North America only (at least for now) so that didn’t work out this time.

Enter Tongfang

I’ve been reading a bunch of positive reviews about the Tuxedo Pulse 14 and KDE Slimbook 14. Both look like great AMD laptops, support up to 64GB of RAM, and clearly run Linux well. I also noticed that they look quite similar, and after some quick searches it turns out that both are made by Tongfang and that the model number is PF4NU1F.

I also learned that a local retailer (Wootware) sells them as the Wootbook. I’ve seen one of these before although it was an Intel-based one, but it looked like a nice machine and I was already curious about it back then. After struggling for a while to find a local laptop with a Ryzen CPU and that’s nice and compact and that breaks the 16GB memory barrier, finding this one that jumped all the way to 64GB sealed the deal for me.

This is the specs for the configuration I got:

  • Ryzen 7 4800H 2.9GHz Octa Core CPU (4MB L2 cache, 8MB L3 cache, 7nm process).
  • 64GB RAM (2x 32GB DDR4 2666MHz modules)
  • 1TB nvme disk
  • 14″ 1920×1080 (16:9 aspect ratio) matte display.
  • Real ethernet port (gigabit)
  • Intel Wifi 6 AX200 wireless ethernet
  • Magnesium alloy chassis

This configuration cost R18 796 (€947 / $1122). That’s significantly cheaper than anything else I can get that even starts to approach these specs. So this is a cheap laptop, but you wouldn’t think so by using it.

I used the Debian netinstall image to install, and installation was just another uneventful and boring Debian installation (yay!). Unfortunately it needs the firmware-iwlwifi and firmware-amd-graphics packages for the binary blobs that drive the wifi card and GPU. At least it works flawlessly and you don’t need an additional non-free display driver (as is the case with NVidia GPUs). I haven’t tested the graphics extensively yet, but desktop graphics performance is very snappy. This GPU also does fancy stuff like VP8/VP9 encoding/decoding, so I’m curious to see how well it does next time I have to encode some videos. The wifi upgrade was nice for copying files over. My old laptop maxed out at 300Mbps; this one connects to my home network at between 800 and 1000Mbps. At this speed I don’t bother connecting via cable at home.

I read on Twitter that Tuxedo Computers thinks that it’s possible to bring Coreboot to this device. That would be yet another plus for this machine.

I’ll try to answer some of my own questions about this device that I had before, that other people in the Debian community might also have if they’re interested in this device. Since many of us are familiar with the ThinkPad X200 series of laptops, I’ll compare it a bit to my X250, and also a little to the X13 that I was considering before. Initially, I was a bit hesitant about the 14″ form factor, since I really like the portability of the 12.5″ ThinkPad. But because the screen bezel is a lot smaller, the Wootbook (which just rolls off the tongue a lot better than “the PF4NU1F”) is just slightly wider than the X250. It weighs in at 1.1kg instead of the 1.38kg of the X250. It’s also thinner, so even though it has a larger display, it actually feels a lot more portable. Here’s a picture of my X250 on top of the Wootbook; you can see a few mm of Wootbook sticking out to the right.

Card Reader

One thing that I overlooked when ordering this laptop was that it doesn’t have an SD card reader. I see that some variations have them, like on this Slimbook review. It’s not a deal-breaker for me, I have a USB card reader that’s very light and that I’ll just keep in my backpack. But if you’re ordering one of these machines and have some choice, it might be something to look out for if it’s something you care about.


On to the keyboard. This keyboard isn’t quite as nice to type on as on the ThinkPad, but it’s not bad at all. I type on many different laptop keyboards and I would rank this keyboard very comfortably in the above-average range. I’ve been typing on it a lot over the last 3 days (including this blog post) and it started feeling natural very quickly, and I’m not distracted by it as much as I thought I would be transitioning from the ThinkPad or my mechanical desktop keyboard. In terms of layout, it’s nice having an actual “Insert” button again. These are things normal users don’t care about, but since I use mc (where insert selects files) this is a welcome return :). I also like that it doesn’t have a Print Screen button at the bottom of my keyboard between alt and ctrl like the ThinkPad has. Unfortunately, it doesn’t have dedicated pgup/pgdn buttons. I use those a lot in apps to switch between tabs. At least the Fn button and the ctrl buttons are next to each other, so pressing those together with up and down to switch tabs isn’t that horrible, but if I don’t get used to it in another day or two I might do some remapping. The touchpad has an extra sensor-button on the top left corner that’s used on Windows to temporarily disable the touchpad. I captured its keyscan codes and it presses left control + keyscan code 93. The airplane mode, volume and brightness buttons work fine.

I do miss the ThinkPad trackpoint. It’s great especially in confined spaces: your hands don’t have to move far from the keyboard for quick pointer operations, and it’s nice for doing something quick and accurate. I painted a bit in Krita last night, and agree with other reviewers that the touchpad could do with just a bit more resolution. I was initially disturbed when I noticed that my physical touchpad buttons were gone, but you get right-click by tapping with two fingers, and middle-click by tapping with three fingers. Not quite as efficient as having the real buttons, but it actually works ok. For the most part, this keyboard and touchpad are completely adequate. Only time will tell whether the keyboard still works fine a few years from now, but I really have no serious complaints about it.


The X250 had a brightness of 172 nits. That’s not very bright; I think the X250 has about the dimmest display in the ThinkPad X200 range. This hasn’t been a problem for me until recently. My eyes are very photo-sensitive, so most of the time I use it at reduced brightness anyway, but since I’ve been working from home a lot recently, it’s nice to sometimes sit outside and work, especially now that it’s spring time and we have some nice days. At full brightness, I can’t see much on my X250 outside. The Wootbook is significantly brighter (even at less than 50% brightness), although I couldn’t find the exact specification for its brightness online.


The Wootbook has 3x USB type A ports and 1x USB type C port. That’s already quite luxurious for a compact laptop. As I mentioned in the specs above, it also has a full-sized ethernet socket. On the new X13 (the new ThinkPad machine I was considering), you only get 2x USB type A ports and if you want ethernet, you have to buy an additional adapter that’s quite expensive especially considering that it’s just a cable adapter (I don’t think it contains any electronics).

It has one HDMI port. Initially I was a bit concerned at the lack of DisplayPort (which my X250 has), but with an adapter it’s possible to convert the USB-C port to DisplayPort, and it seems like it’s possible to connect up to 3 external displays without using something weird like display over regular USB 3.

Overall remarks

When maxing out the CPU, the fan is louder than on a ThinkPad; I definitely noticed it while compiling the zfs-dkms module. On the plus side, that happened incredibly fast. Comparing the Wootbook to my X250, its biggest downfall is really its pointing device. It doesn’t have a trackpoint, and the touchpad is ok and completely usable, but not great. I use my laptop on a desk most of the time, so using an external mouse will mostly solve that.

If money were no object, I would definitely choose a maxed out ThinkPad for its superior keyboard/mouse, but the X13 configured with 32GB of RAM and 128GB of SSD retails for just about double of what I paid for this machine. It doesn’t seem like you can really buy the perfect laptop no matter how much money you want to spend, there’s some compromise no matter what you end up choosing, but this machine packs quite a punch, especially for its price, and so far I’m very happy with my purchase and the incredible performance it provides.

I’m also very glad that Wootware went with the gray/black colours, I prefer that by far to the white and silver variants. It’s also the first laptop I’ve had since 2006 that didn’t come with Windows on it.

The Wootbook is also comfortable/sturdy enough to carry with one hand while open. The ThinkPads are great like this, while with many other brands this just feels unsafe. I don’t feel as confident carrying it by its display because it’s very thin (I know, I shouldn’t be doing that with the ThinkPads either, but I’ve been doing that for years without a problem :) ).

There’s also a post on Reddit that tracks where you can buy these machines from various vendors all over the world.

on September 13, 2020 08:44 PM

September 10, 2020

Ep 107 – Viagem no tempo

Podcast Ubuntu Portugal

We gave Tiago Carrondo the day off, but to make it up to the listeners, David Negreira stepped in. We talked about home networks, offices, Rute Correia’s Interruptor, KDE Plasma and workflows, the new Ubuntu-certified HPs, and the many cool new features of Groovy Gorilla.

You know the drill: listen, subscribe and share!



You can support the podcast using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option to pay as much as you want.

If you’re interested in other bundles not listed in the notes, use the link and you’ll be supporting us as well.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo, and edited by Alexandre Carrapiço (Senhor Podcast).

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode and the image used are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International licence (CC BY-NC-ND 4.0), the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on September 10, 2020 09:55 PM

Unav 3 is here!


The new uNav 3 is here! A simple, easy & beautiful GPS navigator for Ubuntu Touch! 100% libre. It doesn't track you; it respects you. Powered by OpenStreetMap. Online & offline (powered by OSM Scout Server) GPS navigation. Enjoy it on your UBports device!
on September 10, 2020 09:10 PM

S13E25 – 666

Ubuntu Podcast from the UK LoCo

This week we have been watching The Mandalorian. We discuss a new look for UKUI, HP Z series computers with Ubuntu pre-installed, elementary OS on Pinebook, Active Directory integration in Ubuntu Desktop, and making apps for GNOME. We also round up some picks from the tech news.

It’s Season 13 Episode 25 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 10, 2020 02:00 PM

September 07, 2020

sxmo on pinephone

Serge Hallyn

If you are looking for a new phone that either respects your privacy, leaves you in control, or just has a different form factor from the now ubiquitous 6″ slab, there are quite a few projects in various states of readiness:


  • vollaphone
  • oneplus
  • pinephone
  • librem 5
  • fairphone

Different form factors:

Earlier this year I bought a pinephone, braveheart edition. I’ve tried several OSes on it. Just yesterday, I tried:

  • sailfish: looked great, but it would not recognize sim, and crashed when launching browser.
  • ubports (ubuntu touch): looked good, texting worked, but crashed when launching app store and would not ring on incoming calls.
  • mobian: nice set of default apps, but again would not ring on incoming calls.

So I’m back to running what I’ve had on it for a month or two – sxmo, the suckless mobile operating system. It’s an interesting, different take on interacting with the phone, and I quite like it. More importantly, for now it’s the most reliable as a communication device. With it, I can

  • make and receive calls and texts.
  • send texts using vi :).
  • easily send/receive mail using mbsync, mutt, and msmtp.
  • easily customize using scripts – editing existing ones, and adding new ones to the menu system.
  • use a cozy, known setup (dwm, st, tmux, sshd)
  • change call and text ringtone based on the caller – few other phones I’ve had could do that, and not one did it well.
  • have a good browsing experience.
  • use both wifi and 4G data. I’ve not hotspotted, but can see no reason why that will be a problem.

The most limiting thing about this phone is the battery. It drains very quickly, charges slowly, and if I leave the battery in while turned off, it continues to discharge until, after a day, it doesn’t want to turn back on. An external battery charger helps enormously with this. There is also an apparent hardware misfeature which will prevent the modem from waking the cpu during deep sleep – this will presumably be fixed in later hardware versions; remember, mine is the braveheart.

on September 07, 2020 03:01 PM

September 06, 2020

DebConf 20 Online

Jonathan Carter

Last month, I attended DebConf 20 Online. It was the first DebConf to be held entirely online, but it’s the 7th DebConf I’ve attended from home.

My first one was DebConf7. Initially I mostly started watching the videos because I wanted to learn more about packaging. I had just figured out how to create binary packages by hand, and had read through the new maintainers guide, but a lot of it was still a mystery. By the end of DebConf7 my grasp of source packages was still a bit thin, but other than that, I ended up learning a lot more about Debian during DebConf7 than I had hoped for. Over the years, the quality of online participation for each DebConf has varied a lot.

I think having a completely online DebConf, where everyone was remote, helped raise awareness about how important it is to make the remote experience work well, and I hope that it will make people who run sessions at physical events in the future consider those who are following remotely a bit more.

During some BoF sessions, it was clear that some teams haven’t talked to each other face to face in a while, and I heard at least 3 teams say “This was nice, we should do more regular video calls!”. Our usual communication methods of e-mail lists and IRC serve us quite well, for the most part, but sometimes having an actual conversation with the whole team present at the same time can do wonders for dealing with many kinds of issues that are just always hard to deal with in text-based mediums.

There were three main languages used in this DebConf. We’ve had more than one language at a DebConf before, but as far as I know it’s the first time that we had talks across three languages (English, Malayalam and Spanish).

It was also impressive how the DebConf team managed to send out DebConf t-shirts all around the world and in time before the conference! To my knowledge only 2 people didn’t get theirs in time due to customs.

I already posted about the new loop that we worked on for this DebConf. An unintended effect was that we ended up having lots of shout-outs, which gave this online DebConf a much warmer, more personal feel than it would otherwise have had. I’m definitely planning to keep improving on that in the future, for online and in-person events. There was also some other new stuff from the video team during this DebConf; we’ll try to co-ordinate a blog post about that once the dust has settled.

Thanks to everyone for making this DebConf special, even though it was virtual!

on September 06, 2020 07:50 PM

September 05, 2020

Akademy Kicks off

Jonathan Riddell

Viewers on Planet Ubuntu can see the videos on my original post.

Akademy 2020 launched in style with this video starring moi and many other good-looking contributors.

We’re online now, streaming onto YouTube at room 1 and room 2 or register for the event to get involved.

I gave the first KDE talk of the conference, talking about the KDE is All About the Apps goal.

And after the Consistency and Wayland talks we had a panel session.

Talks are going on for the next three hours this European early evening.  And start again tomorrow (Sunday).


on September 05, 2020 04:36 PM

September 04, 2020

In the spring of 2020, the GNOME project ran their Community Engagement Challenge in which teams proposed ideas that would “engage beginning coders with the free and open-source software community [and] connect the next generation of coders to the FOSS community and keep them involved for years to come.” I have a few thoughts on this topic, and so does Alan Pope, and so we got chatting and put together a proposal for a programming environment for making simple apps in a way that new developers could easily grasp. We were quite pleased with it as a concept, but: it didn’t get selected for further development. Oh well, never mind. But the ideas still seem good to us, so I think it’s worth publishing the proposal anyway so that someone else has the chance to be inspired by it, or decide they want it to happen. Here:

Cabin: Creating simple apps for Linux, by Stuart Langridge and Alan Pope

I’d be interested in your thoughts.

on September 04, 2020 09:30 AM

For the past few months, I’ve been running a handful of SSH Honeypots on some cloud providers, including Google Cloud, DigitalOcean, and NameCheap. As opposed to more complicated honeypots looking at attacker behavior, I decided to do something simple and was only interested in where they were coming from, what tools might be in use, and what credentials they are attempting to use to authenticate. My dataset includes 929,554 attempted logins over a period of a little more than 3 months.

If you’re looking for a big surprise, I’ll go ahead and let you down easy: my analysis hasn’t located any new botnets or clusters of attackers. But it’s been a fascinating project nonetheless.

Honeypot Design

With a mere 200ish lines of Go, I implemented a honeypot server using the library as the underlying implementation. I advertised a portable OpenSSH version as the server version string (sent to clients on connection). I then logged each connection to a SQLite database, including the timestamp, IP address, client version, and credentials used to (attempt to) authenticate.
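The Go server itself isn’t reproduced here, but the logging half described above can be sketched in a few lines of Python with the standard-library sqlite3 module. The table and column names below are my own invention for illustration, not the author’s actual schema:

```python
import sqlite3
from datetime import datetime, timezone

# Hypothetical schema mirroring the fields described in the post:
# timestamp, source IP, client version string, and the credentials tried.
conn = sqlite3.connect(":memory:")  # use a file path for a persistent log
conn.execute("""
    CREATE TABLE IF NOT EXISTS attempts (
        ts       TEXT,
        ip       TEXT,
        client   TEXT,
        username TEXT,
        password TEXT
    )
""")

def log_attempt(ip, client, username, password):
    """Record one authentication attempt with a UTC timestamp."""
    conn.execute(
        "INSERT INTO attempts VALUES (?, ?, ?, ?, ?)",
        (datetime.now(timezone.utc).isoformat(), ip, client, username, password),
    )
    conn.commit()

log_attempt("203.0.113.5", "SSH-2.0-PuTTY", "root", "123456")
```

In the real honeypot, something equivalent would run once per connection inside the SSH server’s password-authentication callback.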

Analysis of Credentials

In a surprise to absolutely nobody, root is by far the most commonly tried username for login sessions. I suspect there must be many attackers trying lists of passwords with just root as the username, as 78% of attempted logins were with username root. None of the remainder of the top 10 are particularly surprising, although usuario was not one I expected to see. (It is Spanish for user.)

Blank passwords are the most common attempted passwords, followed by other obvious choices, like 123456 and password. Just off the top 10 list was a surprising choice of password: J5cmmu=Kyf0-br8CsW. Interestingly, a Google search for this password only finds other people with experience running credential honeypots. It doesn’t appear in any of the password wordlists I have, including SecLists and others. If anyone knows what this is a password for, I’d love to know.

There were a number of other interesting passwords such as 7ujMko0admin, used for a bunch of networked DVRs, and also known to be used by malware attacking IoT devices. There are other passwords that don’t look obvious to a US-centric view of the world, like:

  • baikal – a lake in Siberia
  • prueba – Spanish for test
  • caonima – a Mandarin profanity written in Pinyin
  • meiyoumima – Mandarin for “no password”
  • woaini – Mandarin for “I love you”
  • poiuyt – the name for an optical illusion also known as the “devil’s tuning fork”. Edit: multiple redditors pointed out this is the beginning of the top row of the keyboard from right to left.

There are also dozens and dozens of keyboard walks, like 1q2w3e, 1qaz@WSX, and !QAZ2wsx. There are many more that took me much longer to realize they were keyboard walks, such as 4rfv$RFV and qpwoei.
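A rough way to flag the simplest of these (my own sketch, not part of the original analysis) is to check whether the lowercased password is a contiguous run along a single QWERTY row, read in either direction. Interleaved walks like 1q2w3e or column walks like 1qaz@WSX would need extra sequences:

```python
# Rows of a US QWERTY keyboard. Column walks (like "1qaz") and shifted
# variants (like "!QAZ2wsx") would need their own sequences added here.
QWERTY_ROWS = ["1234567890", "qwertyuiop", "asdfghjkl", "zxcvbnm"]

def is_row_walk(password, min_len=4):
    """True if password (lowercased) is a contiguous run of one keyboard
    row, left-to-right or right-to-left."""
    p = password.lower()
    if len(p) < min_len:
        return False
    return any(p in row or p in row[::-1] for row in QWERTY_ROWS)
```

This catches qwerty and poiuyt (a reversed-row run) but deliberately stays dumb; real wordlist tooling enumerates many more walk patterns.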

It has actually fascinated me to look at some of the less obvious passwords and discern their background. Many are inexplicable, but I assume they are from hardcoded passwords in devices or something along those lines. Or perhaps someone let their cat walk across the keyboard to generate it. I’ve certainly had that experience.

Overall, the top 10 usernames and top 10 passwords (not necessarily together) are:

Username Count Password Count
root 729108 <blank> 40556
admin 23302 123456 14542
user 8420 admin 7757
test 7547 123 7355
oracle 6211 1234 7099
ftpuser 4012 root 6999
ubuntu 3657 password 6118
guest 3606 test 5671
postgres 3455 12345 5223
usuario 2876 guest 4423

There were a total of 128,588 unique pairings of username and password attempted, though only 38,112 were attempted 5 or more times. You can download the full list of pairs with counts here, but I’ve omitted those attempted less than 5 times in case a legitimate user typo’d an IP or otherwise was mistaken. The top 25 pairings are:

username password count
root   37580
root root 4213
user user 2794
root 123456 2569
test test 2532
admin admin 2531
root admin 2185
guest guest 2143
root password 2128
oracle oracle 1869
ubuntu ubuntu 1811
root 1234 1681
root 123 1658
postgres postgres 1594
support support 1535
jenkins jenkins 1360
admin password 1241
root 12345 1177
pi raspberry 1160
root 12345678 1126
root 123456789 1069
ubnt ubnt 1069
admin 1234 1012
root 1234567890 967
ec2-user ec2-user 963

Again, no real surprises here. ubnt is a little higher than I would have thought (for Ubiquiti networking gear), but I suppose there’s a fair bit of their gear on the internet. It’s interesting to see the mix of “lazy admin” and “default credentials” here. It’s mildly interesting to me that every prefix of 1234567890 that is 3 or more digits long appears, except the 7-digit one. I guess 7-digit passwords are less common?
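With each attempt logged to SQLite as described in the honeypot design section, pairing counts like the table above come from a single GROUP BY query. A sketch with invented sample rows (column names are my assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (username TEXT, password TEXT)")
conn.executemany(
    "INSERT INTO attempts VALUES (?, ?)",
    [("root", ""), ("root", ""), ("root", "root"), ("user", "user")],
)

# Count each (username, password) pairing, most common first.
top_pairs = conn.execute("""
    SELECT username, password, COUNT(*) AS n
    FROM attempts
    GROUP BY username, password
    ORDER BY n DESC
""").fetchall()
```

Adding `HAVING n >= 5` would reproduce the cutoff used for the published list.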

Timing Information

Though I imagine these kinds of untargeted scans are long-term processes continually running, I decided to check and see what the timing looked like anyway. Neither the day-of-week analysis nor the hour-of-day analysis looks like there’s any significant variance.
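Bucketing logged timestamps by hour of day (or day of week) is a one-liner with SQLite’s strftime, assuming ISO-format timestamps in a ts column (the schema is my assumption):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE attempts (ts TEXT)")
conn.executemany("INSERT INTO attempts VALUES (?)", [
    ("2020-08-01T03:15:00",),
    ("2020-08-01T03:59:59",),
    ("2020-08-02T17:00:00",),
])

# %H buckets by hour of day; %w would bucket by day of week (0 = Sunday).
by_hour = dict(conn.execute("""
    SELECT strftime('%H', ts), COUNT(*) FROM attempts GROUP BY 1
""").fetchall())
```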

Day of Week Hour of Day

Looking at the number of login requests over the time period where I’ve been running the honeypots shows the traffic to be intermittent. While I didn’t expect the number to be constant, the variance is much higher than I expected. I imagine a larger sample size and more nodes would probably make the results more even.

Day of Study

Analysis of Sources

So where are all of these requests coming from? I want to start by noting that none of my analysis is an attempt to attribute the actors making the requests – that’s just not possible with this kind of data. There are two ways to look at the source of requests: in terms of the network, and in terms of the (assumed) geography. My analysis relied on the IP-to-ASN and IP-to-country data provided by

Looking at the country-level data, networks from China lead the pack by a long shot (62% of all login attempts), followed by the US.


Country Count
CN 577789
US 87589
TW 48645
FR 39072
RU 30929
NL 29920
JP 28033
DE 15408
IN 13921
LT 6623

Again, I’m not claiming that these countries mean anything other than location of the autonomous system (AS) that originates the requests. I also did not do individual IP geolocation, so the results should be taken with a small grain of salt.
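IP-to-country and IP-to-ASN datasets typically ship as sorted numeric ranges, so the per-IP lookup reduces to a binary search. A sketch with a two-entry table whose ranges are invented for illustration:

```python
import bisect
import ipaddress

# (range_start, range_end, country) rows, sorted by range_start.
# These two entries are made up for the example.
RANGES = [
    (int(ipaddress.ip_address("1.0.0.0")), int(ipaddress.ip_address("1.0.0.255")), "AU"),
    (int(ipaddress.ip_address("8.8.8.0")), int(ipaddress.ip_address("8.8.8.255")), "US"),
]
STARTS = [r[0] for r in RANGES]

def country_for(ip):
    """Binary-search the sorted ranges for the one containing ip."""
    n = int(ipaddress.ip_address(ip))
    i = bisect.bisect_right(STARTS, n) - 1
    if i >= 0 and RANGES[i][0] <= n <= RANGES[i][1]:
        return RANGES[i][2]
    return None
```

The same structure works for ASN lookups; only the third column changes.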

So what networks are sourcing this traffic? I have the full AS counts and data, but the top networks are:

AS Name Country ASN Count
CHINANET-BACKBONE No.31,Jin-rong Street CN 4134 202024
CHINANET-JS-AS-AP AS Number for CHINANET jiangsu province backbone CN 23650 186274
CHINA169-BACKBONE CNCGROUP China169 Backbone CN 4837 122192
HINET Data Communication Business Group TW 3462 48492
OVH FR 16276 30865
VECTANT ARTERIA Networks Corporation JP 2519 27481
DIGITALOCEAN-ASN - DigitalOcean, LLC US 14061 26965
MICROSOFT-CORP-MSN-AS-BLOCK - Microsoft Corporation US 8075 20370
AS38994 NL 38994 14482
XMGBNET Golden-Bridge Netcom communication Co.,LTD. CN 45058 12418
CNNIC-ALIBABA-CN-NET-AP Hangzhou Alibaba Advertising Co.,Ltd. CN 37963 12045
CNNIC-TENCENT-NET-AP Shenzhen Tencent Computer Systems Company Limited CN 45090 10804
CNIX-AP China Networks Inter-Exchange CN 4847 10000
PONYNET - FranTech Solutions US 53667 9317
ITTI US 44685 7960
CHINA169-BJ China Unicom Beijing Province Network CN 4808 7835
AS12876 FR 12876 7262
AS209605 LT 209605 6586
CONTABO DE 51167 6261

AS Graph

Chinanet is no surprise given the high ratio of China in general. OVH is a low-cost host known to have liberal AUP, so is popular for both malicious and research purposes. DigitalOcean and Microsoft, of course, are popular cloud providers. Surprisingly, AWS only sourced about 600 connections, unless they have a large number of IPs on a non-Amazon ASN.

Overall, traffic came from 27,448 unique IPv4 addresses. Of those, more than 11 thousand sent only a single request. At the other end of the spectrum, the top IP source sent 64,969 login requests.

Most hosts sent relatively few requests; the large numbers are outliers:

IP Count Graph

Surely, by now a thought has crossed your mind: how many of these requests are coming from Tor? Surely the Tor network is a wretched hive of scum and villainy, and the source of much malicious traffic, right?

Tor Graph

Not at all. Only 219 of the unique source IPs were identified as Tor exit nodes, representing only 0.8% of the sources. On a per-request basis, an even smaller percentage of requests is seen from Tor exit nodes.

Client Software

Remember – this is self-reported by the client application, and just like I can spoof the server version string, so can clients. But I still thought it would be interesting to take a brief look at those.

client count
SSH-2.0-PuTTY 309797
SSH-2.0-PUTTY 182465
SSH-2.0-libssh2_1.4.3 135502
SSH-2.0-Go 125254
SSH-2.0-libssh-0.6.3 62117
SSH-2.0-libssh2_1.7.0 23799
SSH-2.0-libssh2_1.9.0 21627
SSH-2.0-OpenSSH_7.3 9954
SSH-2.0-OpenSSH_7.4p1 8949
SSH-2.0-libssh2_1.8.0 5284
SSH-2.0-JSCH-0.1.45 3469
SSH-2.0-PuTTY_Release_0.70 2080
SSH-2.0-PuTTY_Release_0.63 1813
SSH-2.0-OpenSSH_5.3 1212
SSH-2.0-paramiko_1.8.1 1140
SSH-2.0-PuTTY_Release_0.62 1130
SSH-2.0-OpenSSH_4.3 795
SSH-2.0-PuTTY_Release_0.66 694
SSH-2.0-OpenSSH_7.9p1 Raspbian-10+deb10u2 690
SSH-2.0-libssh_0.11 660

You know, I didn’t expect that. PuTTY as the top client string. (Also not sure what to make of the case difference.) I wonder if people are building the PuTTY SSH library into a tool for scanning or wrapping the binary in some kind of script.

Go, paramiko, and libssh are less surprising, as they’re libraries designed for integration. It’s hard to know if the OpenSSH requests are linked into a scanning tool or just wrapped versions of the SSH client. At some point in the future, I might dive more into this and try to figure out which software uses which libraries (at least for the publicly-known tools).
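Collapsing the version strings into case-insensitive client families (which merges the PuTTY/PUTTY split and folds version suffixes together) takes only a few lines; the sample strings below are taken from the table above:

```python
from collections import Counter

# Sample of client version strings as logged (from the table above).
clients = ["SSH-2.0-PuTTY", "SSH-2.0-PUTTY", "SSH-2.0-libssh2_1.4.3",
           "SSH-2.0-libssh2_1.7.0", "SSH-2.0-Go"]

def family(version_string):
    """Collapse a client string to a lowercase family name, dropping the
    SSH-2.0- prefix and any version suffix after '_', '-', or a space."""
    name = version_string.lower().replace("ssh-2.0-", "", 1)
    return name.split("_")[0].split("-")[0].split(" ")[0]

families = Counter(family(c) for c in clients)
```

With this folding, the two PuTTY spellings count as one family of 492,262 requests, comfortably the top client.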


I was hoping to find something earth-shattering in this research. Instead, I found things that were much as expected – common usernames and passwords, widespread scanning, large numbers of requests. One thing’s for sure though: connect it to the internet and someone’s going to pwn it.

on September 04, 2020 07:00 AM

September 03, 2020

Akademy 2020 Starts Tomorrow

Jonathan Riddell

KDE’s annual conference starts tomorrow with a day of tutorials.  There’s two days of talks at the weekend and the rest of the week with meetings and BoFs.

Register now.

Tomorrow European morning you can learn about QML, Debugging or speed up dev workflows.  In the evening a choice of QML, Multithreading and Implicit Bias training.

Saturday morning the talks start with a Keynote at 09:00UTC and then I’m up talking about the All About the Apps Goal.  There’s an overview of the Wayland and Consistency goals too plus we have a panel to discuss them.

Saturday early evening I’m looking forward to some talks about Qt 6 updates and “Integrating Hollywood Open Source with KDE Applications” sounds intriguing.

On Sunday European morning I’m scared but excited to learn more elite C++ from Ivan, but I hear Linux is being rewritten in Rust so that’s worth learning about next.  And it doesn’t get much more exciting than the Wag Company tails.

In the afternoon those of us who care about licences will enjoy Open Source Compliance and an early win for Kubuntu was switching to System Settings so it’ll be good to get an update Behind the Scene.

On Monday join us for some tutorials on getting your apps to the users, with talks on Snaps, Flatpak, neon and AppImage.

Monday, Tuesday and Wednesday have escape room puzzles.  You need to register for these in advance separately, so sign up now.

There’s a pub quiz on Thursday.

It’s going to be a fun week, and no need to travel so sign up now!


on September 03, 2020 02:20 PM

Full Circle Weekly News #181

Full Circle Magazine

Rolling Rhino Turns Ubuntu 20.04 into a Rolling Release
Boothole, A Linux Security Vulnerability
Debian 10.5 Out

MX Linux 19.2 KDE Out

Kali Linux 2020.3 Out

KDE Neon, Based on Ubuntu 20.04, Out

Kernel 5.8 Out

Kernel 5.9 rc1 Out

Gnome 3.36.5 Out

LibreOffice 7.0 Out

Firefox 79 Out

KDE 20.08 Apps Out

Radeon Software for Linux 20.30 Out

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

on September 03, 2020 01:17 PM

September 02, 2020

Previously: v5.5.

Linux v5.6 was released back in March. Here’s my quick summary of various features that caught my attention:

The widely used WireGuard VPN has been out-of-tree for a very long time. After 3 1/2 years since its initial upstream RFC, Ard Biesheuvel and Jason Donenfeld finished the work getting all the crypto prerequisites sorted out for the v5.5 kernel. For this release, Jason has gotten WireGuard itself landed. It was a twisty road, and I’m grateful to everyone involved for sticking it out and navigating the compromises and alternative solutions.

openat2() syscall and RESOLVE_* flags
Aleksa Sarai has added a number of important path resolution “scoping” options to the kernel’s open() handling, covering things like not walking above a specific point in a path hierarchy (RESOLVE_BENEATH), disabling the resolution of various “magic links” (RESOLVE_NO_MAGICLINKS) in procfs (e.g. /proc/$pid/exe) and other pseudo-filesystems, and treating a given lookup as happening relative to a different root directory (as if it were in a chroot, RESOLVE_IN_ROOT). As part of this, it became clear that there wasn’t a way to correctly extend the existing openat() syscall, so he added openat2() (which is a good example of the efforts being made to codify “Extensible Syscall” arguments). The RESOLVE_* set of flags also cover prior behaviors like RESOLVE_NO_XDEV and RESOLVE_NO_SYMLINKS.
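As an aside not in the original post: on a v5.6+ kernel you can exercise openat2() even before your libc grows a wrapper, by issuing the raw syscall (437 in the architecture-unified numbering) and packing struct open_how by hand. A Python ctypes sketch:

```python
import ctypes
import os
import struct

# RESOLVE_* flag values from linux/openat2.h (v5.6+).
RESOLVE_NO_XDEV       = 0x01
RESOLVE_NO_MAGICLINKS = 0x02
RESOLVE_NO_SYMLINKS   = 0x04
RESOLVE_BENEATH       = 0x08
RESOLVE_IN_ROOT       = 0x10

SYS_openat2 = 437  # same number on all architectures (unified numbering)

libc = ctypes.CDLL(None, use_errno=True)

def openat2(dirfd, path, flags=os.O_RDONLY, resolve=0):
    """Raw openat2(2): struct open_how is three u64s (flags, mode, resolve)."""
    how = struct.pack("@QQQ", flags, 0, resolve)
    fd = libc.syscall(SYS_openat2, dirfd, path.encode(), how, len(how))
    if fd < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return fd
```

With RESOLVE_BENEATH set, resolving a path like ../../etc/passwd fails with EXDEV instead of escaping the starting directory; on pre-5.6 kernels the call fails with ENOSYS, so treat this strictly as a sketch of the interface.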

pidfd_getfd() syscall
In the continuing growth of the much-needed pidfd APIs, Sargun Dhillon has added the pidfd_getfd() syscall, which is a way to gain access to file descriptors of a process in a race-free way (or when /proc is not mounted). Before, it wasn’t always possible to make sure that opening file descriptors via /proc/$pid/fd/$N was actually going to be associated with the correct PID. Much more detail about this has been written up at LWN.

openat() via io_uring
With my “attack surface reduction” hat on, I remain personally suspicious of the io_uring() family of APIs, but I can’t deny their utility for certain kinds of workloads. Being able to pipeline reads and writes without the overhead of actually making syscalls is pretty great for performance. Jens Axboe has added the IORING_OP_OPENAT command so that existing io_urings can open files to be added on the fly to the mapping of available read/write targets of a given io_uring. While LSMs are still happily able to intercept these actions, I remain wary of the growing “syscall multiplexer” that io_uring is becoming. I am, of course, glad to see that it has a comprehensive (if “out of tree”) test suite as part of liburing.

removal of blocking random pool
After making algorithmic changes to obviate separate entropy pools for random numbers, Andy Lutomirski removed the blocking random pool. This simplifies the kernel pRNG code significantly without compromising the userspace interfaces designed to fetch “cryptographically secure” random numbers. To quote Andy, “This series should not break any existing programs. /dev/urandom is unchanged. /dev/random will still block just after booting, but it will block less than it used to.” See LWN for more details on the history and discussion of the series.
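The unchanged userspace contract is easy to see from the getrandom(2) interface, which Python exposes as os.getrandom; after early boot it returns cryptographically secure bytes without blocking:

```python
import os

# getrandom(2) draws from the kernel CRNG (the same source as /dev/urandom).
# GRND_NONBLOCK asks the call to fail rather than block if the CRNG is not
# yet initialized, which only matters in very early boot.
data = os.getrandom(16, os.GRND_NONBLOCK)
```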

arm64 support for on-chip RNG
Mark Brown added support for the future ARMv8.5’s RNG (SYS_RNDR_EL0), which is, from the kernel’s perspective, similar to x86’s RDRAND instruction. This will provide a bootloader-independent way to add entropy to the kernel’s pRNG for early boot randomness (e.g. stack canary values, memory ASLR offsets, etc). Until folks are running on ARMv8.5 systems, they can continue to depend on the bootloader for randomness (via the UEFI RNG interface) on arm64.

arm64 E0PD
Mark Brown added support for the future ARMv8.5’s E0PD feature (TCR_E0PD1), which causes all memory accesses from userspace into kernel space to fault in constant time. This is an attempt to remove any possible timing side-channel signals when probing kernel memory layout from userspace, as an alternative way to protect against Meltdown-style attacks. The expectation is that E0PD would be used instead of the more expensive Kernel Page Table Isolation (KPTI) features on arm64.

powerpc32 VMAP_STACK
Christophe Leroy added VMAP_STACK support to powerpc32, joining x86, arm64, and s390. This helps protect against the various classes of attacks that depend on exhausting the kernel stack in order to collide with neighboring kernel stacks. (Another common target, the sensitive thread_info, had already been moved away from the bottom of the stack by Christophe Leroy in Linux v5.1.)

generic Page Table dumping
Related to RISC-V’s work to add page table dumping (via /sys/kernel/debug/kernel_page_tables), Steven Price extracted the existing implementations from multiple architectures and created a common page table dumping framework (and then refactored all the other architectures to use it). I’m delighted to have this because I still remember when not having a working page table dumper for ARM delayed me for a while when trying to implement upstream kernel memory protections there. Anything that makes it easier for architectures to get their kernel memory protection working correctly makes me happy.

That’s it for now; let me know if there’s anything you think I missed. Next up: Linux v5.7.

© 2020, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 License.
CC BY-SA 4.0

on September 02, 2020 11:22 PM

NordVPN is one of many VPN services. I was asked to have a look at how to make it work in a LXD container, and as a result I am writing this post. I am not advertising this service, nor do I use affiliate links, etc. Up to now, NordVPN has refused to fix their official Linux client to work in a container.

Installing the official client

Let’s install the official client in a LXD container. Create a container and get a shell into it. Then, download the Deb package and install it. The initial Deb package is really small; it just has instructions to set up the NordVPN repository on your system. After you install this package, you run apt update to refresh the package list, and then you can install the actual nordvpn package.

$ lxc launch ubuntu:18.04 nordvpn
Creating nordvpn
Starting nordvpn
$ lxc ubuntu nordvpn
ubuntu@nordvpn:~$ wget
ubuntu@nordvpn:~$ sudo apt install -f ./nordvpn-release_1.0.0_all.deb
ubuntu@nordvpn:~$ sudo apt update
ubuntu@nordvpn:~$ sudo apt install -y nordvpn
NordVPN for Linux successfully installed!
To get started, type 'nordvpn login' and enter your NordVPN account details. Then type 'nordvpn connect' and you’re all set! If you need help using the app, use the command 'nordvpn --help'.
ubuntu@nordvpn:~$ nordvpn
Welcome to NordVPN Linux client app!
Version 3.7.3
Usage: nordvpn [global options] command [command options] [arguments…]

Running the official NordVPN client

Let’s attempt to run the official NordVPN client. We log in, and then we connect. It does not work! Something is wrong.

ubuntu@nordvpn:~$ nordvpn login --username --password mypassword
Welcome to NordVPN! You can now connect to VPN by using 'nordvpn connect'.
ubuntu@nordvpn:~$ nordvpn connect
Connecting to Germany #500 (
transport is closing
ubuntu@nordvpn:~$ nordvpn connect
Whoops! Cannot reach System Daemon.

We look into /var/log/syslog. Here are the offending lines. The official client crashes due to some array bounds error.

Jun 16 19:59:31 nordvpn nordvpnd[1423]: debug: Tue Jun 16 19:59:31 2020 MANAGEMENT: Connected to management server at /var/run/nordvpn-openvpn.sock
Jun 16 19:59:31 nordvpn nordvpnd[1423]: 2020/06/16 19:59:31 [INFO] Tue Jun 16 19:59:31 2020 MANAGEMENT: Connected to management server at /var/run/nordvpn-openvpn.sock
Jun 16 19:59:31 nordvpn nordvpnd[1423]: panic: runtime error: index out of range [1] with length 1
Jun 16 19:59:31 nordvpn nordvpnd[1423]: goroutine 117 [running]:
Jun 16 19:59:31 nordvpn nordvpnd[1423]: nordvpn/daemon.ruleParsing(…)
Jun 16 19:59:31 nordvpn nordvpnd[1423]: #011/builds/nordvpn/apps-source/linux-app/src/daemon/vpn_ipv6.go:117
Jun 16 19:59:31 nordvpn nordvpnd[1423]: nordvpn/daemon.(Ipv6).Disable(0xc000390844, 0x0, 0x0) Jun 16 19:59:31 nordvpn nordvpnd[1423]: #011/builds/nordvpn/apps-source/linux-app/src/daemon/vpn_ipv6.go:43 +0x6c7 Jun 16 19:59:31 nordvpn nordvpnd[1423]: nordvpn/daemon.(OpenVPN).Start(0xc0004466e0, 0x2b89520, 0xc00042dec0, 0x18, 0xc00042dee0, 0x18, 0x0, 0x0, 0x0, 0x0, …)
Jun 16 19:59:31 nordvpn nordvpnd[1423]: #011/builds/nordvpn/apps-source/linux-app/src/daemon/vpn_openvpn.go:159 +0xd11
Jun 16 19:59:31 nordvpn nordvpnd[1423]: created by nordvpn/daemon.Connect
Jun 16 19:59:31 nordvpn nordvpnd[1423]: #011/builds/nordvpn/apps-source/linux-app/src/daemon/rpc.go:288 +0x882
Jun 16 19:59:31 nordvpn systemd[1]: nordvpnd.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Jun 16 19:59:31 nordvpn systemd[1]: nordvpnd.service: Failed with result 'exit-code'.
Jun 16 19:59:36 nordvpn systemd[1]: nordvpnd.service: Service hold-off time over, scheduling restart.

The same error appears whether you run the client with sudo -H or run it in a privileged container. Something is wrong in this official NordVPN client.

So, what do we do now? Apparently, their client is based on OpenVPN, so let’s use the OpenVPN client directly.

Using the OpenVPN client

We exit from the container, remove it, and create a new one, this time without the official client.

ubuntu@nordvpn:~$ logout
$ lxc stop nordvpn
$ lxc delete nordvpn
$ lxc launch ubuntu:18.04 nordvpn
Creating nordvpn
Starting nordvpn
$ lxc ubuntu nordvpn

We then update the package list and install the openvpn package.

ubuntu@nordvpn:~$ sudo apt update
ubuntu@nordvpn:~$ sudo apt install -y openvpn

OpenVPN requires configuration files for the VPN servers. NordVPN has this list online at Let’s download it. It’s a 21MB file.

ubuntu@nordvpn:~$ wget

We unzip into the /etc/openvpn/client directory. The ZIP file contains folders, so we use the -j option to instruct unzip to ignore the folder structure and place all files directly in the given directory.

ubuntu@nordvpn:~$ sudo unzip -d /etc/openvpn/client/ -j

We are ready to connect to some VPN server. But which one? We can select any. This page,, will auto-detect the closest VPN server. Let’s assume the result is We do not connect yet; first work through the following section to avoid DNS leakage, and then we make the connection in the section after that.

Avoiding DNS leakage

By default, OpenVPN does not set the DNS server and keeps the existing DNS configuration. The result of this is DNS leakage; that is, name resolution does not happen through the VPN but through the local network.

What needs to happen is to add the appropriate script to OpenVPN to configure the DNS when the VPN is established, and again when the VPN is torn down.

We use Ubuntu 18.04 LTS in the container, so we configure systemd-resolved for this. Here are the commands: we install the helper package, then edit the .ovpn file to use the helper.

ubuntu@nordvpn:~$ sudo apt install openvpn-systemd-resolved

Then, edit the OpenVPN configuration file, in our case, /etc/openvpn/client/ and add the following lines,

script-security 2
up /etc/openvpn/update-systemd-resolved
down /etc/openvpn/update-systemd-resolved

We are now ready to connect. Note that if you change servers, you need to edit the corresponding configuration file manually as above.

Making the connection

We are ready to make the connection.

ubuntu@nordvpn:~$ sudo openvpn /etc/openvpn/client/
Tue Jun 16 21:17:17 2020 OpenVPN 2.4.4 x86_64-pc-linux-gnu [SSL (OpenSSL)] [LZO] [LZ4] [EPOLL] [PKCS11] [MH/PKTINFO] [AEAD] built on May 14 2019
Tue Jun 16 21:17:17 2020 library versions: OpenSSL 1.1.1 11 Sep 2018, LZO 2.08
Enter Auth Username:
Enter Auth Password: ***********
<14>Jun 16 21:17:42 update-systemd-resolved: Adding DNS Routed Domain .
<14>Jun 16 21:17:42 update-systemd-resolved: Adding IPv4 DNS Server
<14>Jun 16 21:17:42 update-systemd-resolved: Adding IPv4 DNS Server
Tue Jun 16 21:17:42 2020 Initialization Sequence Completed

Looks good. The VPN circuit is active until you hit Ctrl+C here to interrupt OpenVPN. You need to open a new terminal to the LXD container to use the VPN. We can see that the proper new DNS entries have been added to systemd-resolved.

$ lxc ubuntu nordvpn
ubuntu@nordvpn:~$ systemd-resolve --status
Link 7 (tun0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers:
          DNS Domain: ~.


The official NordVPN client does not work in a LXD container, and it appears that it’s just a bug that they know about and do not intend to fix. The developers did not envision the client to run in a container, nor did they test it. We work around this issue by installing the OpenVPN client and using it to connect to one of their VPN servers.

on September 02, 2020 02:57 PM

August 28, 2020

The Kubuntu Council and Community would like to thank Linode for once again renewing their sponsorship of Kubuntu by providing us with another year’s usage of a VPS instance.

This is, and will continue to be, an invaluable resource, helping us make Kubuntu releases easier and better.

Specifically, the VPS allows us to:

  • Run remote build nodes for our Kubuntu CI Jenkins server, which helps us keep up with rapid upstream changes. Our CI is currently down for reworking of the Jenkins tooling, but it will come back in the next few months.
  • Host remote packaging containers for our developers and packagers. This not only provides a clean environment, but the super fast upload speeds allow us to make short work of the many hundreds of PPA and distribution package uploads required.
  • Run autopkgtest (QA) tests so we can fix failures prior to upload.
  • Generate QA web pages for our builds, so problems can be identified and solved.

Specs: Linode 32GB: 8 CPU, 640GB Storage, 32GB RAM, Download 40 Gbps, Upload 7000 Mbps.

on August 28, 2020 03:38 PM

A Debian LTS logo Like each month, albeit a bit later due to vacation, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 249.25 work hours have been dispatched among 14 paid contributors. Their reports are available:
  • Abhijith PA did 18.0h (out of 14h assigned and 6h from June), and gave back 2h to the pool.
  • Adrian Bunk did 16.0h (out of 25.25h assigned), thus carrying over 9.25h to August.
  • Ben Hutchings did 5h (out of 20h assigned), and gave back the remaining 15h.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 60h (out of 5.75h assigned and 54.25h from June).
  • Holger Levsen spent 10h (out of 10h assigned) for managing LTS and ELTS contributors.
  • Markus Koschany did 15h (out of 25.25h assigned), thus carrying over 10.25h to August.
  • Mike Gabriel did nothing (out of 8h assigned), thus is carrying over 8h for August.
  • Ola Lundqvist did 3h (out of 12h assigned and 7h from June), thus carrying over 16h to August.
  • Roberto C. Sánchez did 26.5h (out of 25.25h assigned and 1.25h from June).
  • Sylvain Beucler did 25.25h (out of 25.25h assigned).
  • Thorsten Alteholz did 25.25h (out of 25.25h assigned).
  • Utkarsh Gupta did 25.25h (out of 25.25h assigned).

Evolution of the situation

July was our first month of Stretch LTS! Given this is our fourth LTS release we anticipated a smooth transition and it seems everything indeed went very well. Many thanks to the members of the Debian ftpmaster-, security, release- and publicity- teams who helped us make this happen!
Stretch LTS began on July 18th 2020, after the 13th and final Stretch point release, and is currently scheduled to end on June 30th 2022.

Last month, we asked you to participate in a survey and we got 1764 submissions, which is pretty awesome. Thank you very much for participating! Right now we are still busy crunching the results, but we already shared some early analysis during the DebConf LTS BoF this week.

The security tracker currently lists 54 packages with a known CVE and the dla-needed.txt file has 52 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on August 28, 2020 07:57 AM

August 23, 2020

  1. Nickodemus with Carol C & The Spy from Cairo - Do You Do You
  2. Claudia - Deixa Eu Dizer (iZem ReShape)
  3. Afterclapp - BRZL
  4. Twerking Class Heroes - Hustlin´
  5. Jeanette - Porque Te Vas (2 Many GT’s Para Los DJs Edit)
  6. Twerking Class Heroes - Vanakkam
  7. Gang Do Eletro - Pith Bull
  8. Daniel Haaksman - Toma Que Toma (Waldo Squash Remix)
  9. Banda Uó - Cremosa
  10. Sofi Tukker - Purple Hat
  11. Baiuca - Mangüeiro (feat. Aliboria)
  12. PNAU - Save Disco (feat. Kira Divine & Theo Hutchcraft)
  13. 10cc Dreadlock Holiday (Chuggz Edit)
  14. Suhov aka BéTé - Öli Öli (Shades of Budabeats)
  15. Nickodemus - Inmortales (Body Move) feat. Fémina (The Spy from Cairo Remix)
  16. Santi - Minotauro
  17. Peter Power - White Rabbit (edit)
  18. Baden Powel - Canto de Lemanja (Billy Caso’s Coco Edit)
  19. Guts - Brand New Revolution
  20. Bruxas - Plantas Falsas
  21. Nickodemus feat Ismael Kouyate - N’Dini (Tal M Klein Remix)
  22. Yemi Alade - Koffi Anan
  23. Daniël Leseman - Ease The Pain (Extended Mix)
  24. Andri - Night Rider
  25. Siriusmo - Wow
  26. Walter Murphy & The Big Apple Band - A Fifth Of Beethoven (Soulwax Remix)
  27. Pleasurekraft - Carny
  28. Pizeta - Nina Papa (Andy Kohlmann Remix)
  29. Format:B - Gospel (Super Flu’s Antichrist Remix)
  30. David Walters - Mama
  31. nicholas ryan gant - Gypsy Woman (Kaytronik Remix Extended Version)
  32. Royal Highness - Lo Ke Tu Quiera Ft. Toy Selectah
  33. Tony Quattro - Zulu Carnival
  34. London Afrobeat Collective - Prime Minister (Captain Planet Remix)
  35. Adome Nyueto - Yta Jourias (Sopp’s Party Edit)
on August 23, 2020 08:16 AM

August 22, 2020

rsync command

Rolando Blanco

rsync, or remote sync, is a Linux/Unix command-line utility used to synchronize and copy files and directories, either locally or remotely.

Rsync may be used to mirror, back up, or migrate data across folders, disks, and networks. One notable feature of the rsync command is that it uses the “delta-transfer algorithm.”

What does this mean? The delta-transfer algorithm works by updating the destination directory only with the parts of the source that have changed. If a change is made or a new file is created in the source directory, only that particular change is copied to the destination directory when you run the rsync command.
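To give a flavour of how such a delta-transfer scheme can find unchanged blocks cheaply, here is a toy Python sketch of a rolling weak checksum in the spirit of the one rsync uses. This is an illustration only, not rsync’s actual implementation:

```python
def weak_checksum(block):
    # Adler-style weak checksum over one block of bytes.
    a = sum(block) % 65536
    b = sum((len(block) - i) * byte for i, byte in enumerate(block)) % 65536
    return (b << 16) | a

def roll(old_sum, out_byte, in_byte, blocksize):
    # Slide the window one byte: drop out_byte, add in_byte, in O(1),
    # instead of recomputing the checksum over the whole block.
    a = old_sum & 0xFFFF
    b = old_sum >> 16
    a = (a - out_byte + in_byte) % 65536
    b = (b - blocksize * out_byte + a) % 65536
    return (b << 16) | a

data = b"the quick brown fox jumps over the lazy dog" * 4
n = 16
s = weak_checksum(data[:n])
for i in range(1, len(data) - n):
    s = roll(s, data[i - 1], data[i - 1 + n], n)
    assert s == weak_checksum(data[i:i + n])  # O(1) roll matches recompute
```

The receiver computes such checksums for every block it already has; the sender slides a window over the new file and only transmits bytes for regions whose checksums find no match.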

Please remember that rsync uses SSH to transport and sync files and directories between the local machine and a remote machine.

Installing rsync in Linux

The rsync command comes pre-installed in most GNU/Linux operating systems.

However, if it is not installed, you can install rsync by running the commands below in a terminal.

On CentOS & RHEL

yum install rsync -y

On Ubuntu and other Debian based distributions

sudo apt install rsync -y

rsync basic syntax

rsync [options] [source] [destination]

Some of the standard options/parameters used with Rsync command:

-v, --verbose: verbose output
-r: copies data recursively
-z: compresses file data during the transfer
-h: gives output in a human-readable format
-a: archive mode (equivalent to -rlptgoD); preserves symlinks, permissions, timestamps, and ownership while synchronizing
--progress: shows the progress of the rsync tasks currently running.

You can see all the options available for the rsync command using the “--help” option.

rsync --help

Some rsync examples

1. Copy/sync files locally with -v (verbose) option

That is the most basic rsync command. In this example, I will copy files from the ‘source‘ directory (~/Desktop/SourceDir) to the ‘destination‘ directory (/opt/DestDir). I will include the -v (verbose) option so that rsync reports what is going on.

rsync -v ~/Desktop/SourceDir /opt/DestDir
rsync -v (verbose) command

Please take note that with rsync, if the destination directory doesn’t exist, it will automatically be created.

2. Sync/copy files and directories recursively

With the above command, if there was a directory present in the source folder, it would be skipped, since rsync does not recurse by default.

Use the -r (recursive) option.

rsync -r ~/Desktop/SourceDir /opt/DestDir

3. Sync/copy files between the local machine and remote machine

There are several things that you need to know about the remote server/machine: the IP address, the username, and that user’s password.

The basic syntax that we will use is:

rsync [options] [local source files] [remote-username]@[ip-address]:/[destination folder]

4. Sync/copy files and directories from a remote server to your local PC

Just like the previous command, we will need to know the IP address (or hostname) of the remote server. In this example, we will sync files in the ‘source‘ folder in the server’s home directory to our local PC.

rsync -rv remoteuser@remotehost:/home/remoteuser/source  /home/localuser/Desktop/destination

5. Use rsync over SSH with the -e option

To ensure the security of files and folders, we will use the rsync over Secure Shell Protocol (SSH).

Additionally, when providing the root/user password, SSH will provide encryption services, ensuring that credentials and data are safe in transit.

To use SSH, we will add the -e option that specifies the protocol that we want to use.

 rsync -vre ssh source/* remoteuser@[remote ip/hostname]:/home/destination

6. Show progress with rsync command

Sometimes, when you copy multiple files, it is helpful to see the progress. Fortunately, rsync has an option for this: ‘--progress‘.

sudo rsync -rv --progress source/* /opt/destination

7. Use rsync with the ‘--include’ option

Maybe you will have this situation: you only want to sync particular files. With the rsync command, you can use the ‘--include‘ option to carry out the task. For example, suppose you need to synchronize only files starting with the letter ‘I’.

sudo rsync -vr --include 'I*' source/ /opt/destination/

8. rsync with the ‘--exclude’ option to ignore particular files

With the rsync ‘--exclude‘ option, you can exclude files that you don’t want to sync/copy.

In this example, we want to ignore all files starting with the ‘I’ letter.

sudo rsync -vr --exclude 'I*' source/ /opt/destination/

Alternatively, you can use both options in one command. See the example below.

sudo rsync -vr --include 'I*' --exclude '*' source/ /opt/destination/

We are excluding all files apart from those starting with the letter ‘I’. Note that the order matters: rsync acts on the first filter rule that matches a file, so the --include must come before the --exclude.

9. Use rsync with the ‘--delete’ option

rsync comes with the ‘--delete‘ option: if a file is present in the destination directory but not in the source, rsync will delete it.

rsync -vr --delete /opt/source/ remoteuser@remotehost:/home/remoteuser/backup

10. Set the maximum size of files to transfer with rsync

If you are concerned about storage space or bandwidth for remote file synchronization, you can use the ‘--max-size’ option with the rsync command.

This option enables you to set the maximum size of a file that can be copied. For example, ‘--max-size=300k‘ will only transfer files equal to or smaller than 300 kilobytes.

rsync -rv --max-size='300k' /opt/source/ remoteuser@remotehost:/home/remoteuser/Backup

11. Delete source files automatically after a successful transfer

Take a situation where you have a remote backup server and a backup directory on your PC. You back up data to the backup directory on your PC before syncing it with the backup server. After every synchronization, you need to delete the data in the local backup directory.

Fortunately, you can do this automatically with the ‘--remove-source-files‘ option, which deletes each source file once it has been transferred successfully.

rsync -vr --remove-source-files /home/localuser/backup/ remoteuser@remotehost:/home/remoteuser/Backup

By running an ‘ls’ command on the source folder, we can confirm that the files were indeed deleted.

12. Perform a ‘--dry-run’ with rsync

If you are not sure what an rsync command will do, the ‘--dry-run‘ option makes rsync output what would be performed, without actually doing it.

You can then check that this output is what you expect before going on to remove the ‘--dry-run‘ option.

rsync -vr --dry-run source/* remoteuser@remotehost:/home/remoteuser/backup

13. Set the bandwidth limit required to transfer files.

Suppose you are on a shared network or running several programs; it can be useful to set a bandwidth limit for syncing/copying files remotely. We can do this with the rsync ‘--bwlimit‘ option.

This rate is calculated in kilobytes per second. Therefore, ‘--bwlimit=1100‘ means that at most 1100 KB can be transferred per second.

rsync -vr --bwlimit=1100 source/* remoteuser@remotehost:/home/remoteuser/destination

14. Sync whole files with rsync

It is important to know that, by default, rsync only synchronizes the modified blocks and bytes of files that have changed.

Therefore, if you had synced a text file before and later added some text to the source file, only the inserted text will be copied when you sync again.

If you need to sync the entire file, you will need to use the ‘-W’ option.

rsync -vrW source/* remoteuser@remotehost:/home/remoteuser/destination

15. Do not sync/copy modified files in the destination directory

Sometimes you may have made modifications to files present in the destination folder. If you run an rsync command, these modifications will be overwritten by the versions in the source. To avoid that, use the ‘-u’ (--update) option, which skips files that are newer in the destination.

rsync -vu source/* remoteuser@remotehost:/home/remoteuser/destination

16. Use rsync with ‘-i’ option to view the difference in files between source and destination

If you wish to know what new changes will be made to the destination directory, use the ‘-i’ option, which will show the difference in files between the source and destination directory.

rsync -avzi source/ destination/

d: the item is a directory
f: the item is a file
t: the timestamp has changed
s: the size of the file has changed

17. Use rsync to copy directory structure only

Rsync can be used to sync only the directory structure. If you are not interested in the files themselves, use the filter parameters -f"+ */" -f"- *" before the source directory.

rsync -av -f"+ */" -f"- *" /home/localuser/Desktop/source/ /opt/destination/

18. Add date stamp to directory name

You can easily add a date to a directory name. That will add a date stamp to all synchronizations you do with rsync.

To do so, we will append $(date +\%Y-\%m-\%d) to the destination directory.

sudo rsync -rv source/ /etc/destination-$(date +\%Y-\%m-\%d)

19. Copy a single file locally

To sync/copy a single file with rsync, you will need to specify the file path followed by the destination directory path.

rsync -v /home/localuser/source/filetocopy.txt ~/destination/

20. Copying multiple files remotely

To copy multiple files simultaneously (best suited for a few files), provide the paths to all of them, separated by spaces.

rsync -vr /home/user/Desktop/source/file1.txt /home/user/Desktop/source/file2.txt /home/user/Desktop/source/file3.txt  remoteuser@remotehost:/home/remoteuser/destination

on August 22, 2020 10:55 AM

August 20, 2020

On 2020-08-13, we deployed an update that caused users whose full names contain non-ASCII characters (which is of course very common) to be unable to log into Launchpad. We heard about this serious regression from users on 2020-08-17, and rolled out a fix on 2020-08-18. We’re sorry about this; it doesn’t meet the standards of both inclusion and quality that we set for ourselves. This post aims to explain what happened, technical details of why it happened, and the steps we’ve taken to avoid it happening again.

Launchpad still runs on Python 2. This is a problem, and we’ve been gradually chipping away at it for the last couple of years. With about three-quarters of a million lines of Python code in the main tree and over 200 dependencies, it’s a big job – but we’re well underway!

Some of those dependencies have been difficult problems in their own right. The one at issue here was python-openid, which we use as part of our login workflow, but which hasn’t been actively maintained for over ten years. Fortunately, in this case we didn’t have to port it ourselves, because there were already a couple of forks featuring Python 3 support while preserving more or less the same interface: we chose python-openid2 on the grounds that it had done a good job of maintaining both Python 2 and 3 support in the same codebase, which we needed in order to arrange a practical transition, and that it was in itself well-maintained. We worked with upstream to fix a couple of issues discovered by the Launchpad test suite that blocked us migrating to it (notably PR #41, although that was fixed as PR #43 instead), and switched Launchpad over once python-openid2 3.2 was released. So far, so good.

One of the major reasons for much of the disruption in the Python 3 transition was to provide a clean separation between the concept of a sequence of bytes and a text string, which was often a problem for code that needed to handle Unicode: it’s all too common in Python 2 to have code that works on the ASCII domain (which can be represented either as str or unicode) but that fails on Unicode strings outside that subset. Launchpad is less prone to that than many Python 2 applications because the ORM we use (Storm) has always been relatively strict about the boundary between bytes and text; nevertheless, having a stricter data model here is a good thing for us in the long term. It might seem ironic that we ran into exactly such a bug as part of porting to Python 3; but then, we aren’t using the new interpreter yet.

Launchpad uses the OpenID Simple Registration Extension in its login workflow. It specifically requests the user’s full name from Canonical’s OpenID provider (, which we generally call “SSO”): this means that if the user has an SSO account but not yet a Launchpad account, we can create a Launchpad account for them without them needing to enter their name again. That full name is encoded as a UTF-8 string, which in turn is URL-encoded using the usual %xx mechanism. This means that if, say, your name is Gráinne Ní Mháille, it will show up in the OpenID response’s query string as openid.sreg.fullname=Gr%C3%A1inne+N%C3%AD+Mh%C3%A1ille.
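On Python 3, where urlencode handles text cleanly, you can see exactly this encoding (a small illustration of my own):

```python
from urllib.parse import urlencode

# Python 3's urlencode percent-encodes text values as UTF-8 by default,
# and its quote_plus quoting turns spaces into '+'.
query = urlencode({'openid.sreg.fullname': 'Gráinne Ní Mháille'})
print(query)  # openid.sreg.fullname=Gr%C3%A1inne+N%C3%AD+Mh%C3%A1ille
```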

python-openid2 uses its openid.urinorm module to normalise parts of the response, decoding and re-encoding it to make sure comparisons work as expected; this is built on top of the URL handling code in Python’s standard library. Now, unlike Python 3, Python 2’s urlencode has undocumented restrictions on values in the query argument: if the doseq argument is False (the default), then it converts values using str(v), while if it’s True then it converts Unicode values using v.encode("ASCII", "replace") (potentially losing information!). In this case, doseq is False, and the input given to it is always text (unicode on Python 2): this works fine if the input is within the ASCII subset, but if it’s not:

>>> urlencode({u'openid.sreg.fullname': u'Gráinne Ní Mháille'})
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/lib/python2.7/", line 1350, in urlencode
    v = quote_plus(str(v))
UnicodeEncodeError: 'ascii' codec can't encode character u'\xe1' in position 2: ordinal not in range(128)

The fix is that on Python 2 one must always pass values to urlencode as bytes rather than text:

>>> urlencode({u'openid.sreg.fullname': u'Gráinne Ní Mháille'.encode('UTF-8')})

We’ve sent PR #47 to python-openid2 to implement this. We’ve also made a temporary local fork of python-openid2 containing this patch and deployed it to Launchpad production.

One thing to be clear about here: though the root cause was a bug in python-openid2, it’s our responsibility to make sure it works correctly when integrated into Launchpad.

We missed this bug because of a gap in testing: although we did test the full login workflow, we only did so with a test user whose full name was entirely ASCII. We’ve closed this gap now, so we’ll catch it if a dependency regresses in future.

on August 20, 2020 10:01 AM

August 19, 2020

  1. Baobab - Aduna Jarul Naawo
  2. Alexander - Truth
  3. Dal - Fontanel
  4. Baz Luhrmann - Everybody’s Free (To Wear Sunscreen)
  5. The Silver Thunders - Fresales eternos
  6. Camila Costa - Ponto das caboclas
  7. Louis Armstrong - The Creator Has A Masterplan
  8. Burhou - Please Delete
  9. Jimi Jules - Running Away
  10. Valentin Stip - Angst
  11. Lukas Endhardt - Rond De Jambe (Bootleg)
  12. Paul Anka - Put Your Head On My Shoulder
  13. The Peddlers - On A Clear Day You Can See Forever
  14. The Kinks - Sunny Afternoon
  15. Peder - timetakesthetimetimetakes
  16. Fleetwood Mac - Albatross
  17. Tropical Hi-Fi - Tahiti Blue (feat. Mike Cooper)
  18. The Dirty Diary - Dead Jazz
  19. Jeff Bridges - Lost in space
  20. Paoli - Milonga Para Javier
  21. Red Axes - Papa Sooma
  22. Soft Hair - Lying Has To Stop
  23. Rock Steady Freddy - Bohemian Rhapsody (Black Messiah Dub)
  24. King Jammy & Scientist - the death of mr spock (tokyo tower version)
  25. Bendaly family vs. Center of the Universe - Do you love me
on August 19, 2020 06:17 AM

August 18, 2020

One of the projects I’m working on involves creating a little device which you talk to from your phone. So, I thought, I’ll do this properly. No “cloud service” that you don’t need; no native app that you don’t need; you’ll just send data from your phone to it, locally, and if the owners go bust it won’t brick all your devices. I think a lot of people want their devices to live on beyond the company that sold them, and they want their devices to be under their own control, and they want to be able to do all this from any device of their choosing; their phone, their laptop, whatever. An awful lot of devices don’t do some or all of that, and perhaps we can do better. That is, here’s the summary of that as a sort of guiding principle, which we’re going to try to do:

You should be able to communicate a few hundred KB of data to the device, locally, without needing a cloud service, by using a web app rather than a native app, from an Android phone.

Here’s why that doesn’t work. Android and Chrome, I am very disappointed in you.

Bluetooth LE

The first reaction here is to use Bluetooth LE. This is what it’s for; it’s easy to use, phones support it, Chrome on Android has Web Bluetooth, everything’s gravy, right?

No, sadly. Because of the “a few hundred KB of data” requirement. This is, honestly, not a lot of data; a few hundred kilobytes at most. However… that’s too much for poor old Bluetooth LE. An excellent article from AIM Consulting goes into this in a little detail and there’s a much more detailed article from Novelbits, but transferring tens or hundreds of KB of data over BLE just isn’t practical. Maybe you can get speeds of a few hundred kilobits per second in theory, but in practice it’s nothing like that; I was getting speeds of twenty bytes per second, which is utterly unhelpful. Sure, maybe it can be more efficient than that, but it’s just never going to be fast enough: nobody’s going to want to send a 40KB image and wait three minutes for it to do so. BLE’s good for small amounts of data; not for even medium amounts.

WiFi to your local AP

The next idea, therefore, is to connect the device to the wifi router in your house. This is how most IoT devices work; you teach them about your wifi network and they connect to it. But… how do you teach them that? Normally, you put them in some sort of “setup” mode and the device creates its own wifi network, and then you connect your phone to that, teach it about your wifi network, and then it stops its own AP and connects to yours instead. This is maybe OK if the device never moves from your house and it only has one wifi network to connect to; it’s terrible if it’s something that moves around to different places. But you still need to connect to its private AP first to do that setup, and so let’s talk about that.

WiFi to the device

The device creates its own WiFi network; it becomes a wifi router. You then connect your phone to it, and then you can talk to it. The device can even be a web server, so you can load the controlling web app from the device itself. This is ideal; exactly what I planned.

Except it doesn’t work, and as far as I can tell it’s Android’s fault. Bah humbug.

You see, the device’s wifi network obviously doesn’t have a route to the internet. So, when you connect your phone to it, Android says “hey! there’s no route to the internet here! this wifi network sucks and clearly you don’t want to be connected to it!” and, after ten seconds or so, disconnects you. Boom. You have no chance to use the web app on the device to configure the device, because Android (10, at least) disconnects you from the device’s wifi network before you can do so.

Now, there is the concept of a “captive portal”. This is the thing you get in hotels and airports and so on, where you have to fill in some details or pay some money or something to be able to use the wifi; what happens is that all web accesses get redirected to the captive portal page where you do or pay whatever’s necessary and then the network suddenly becomes able to access the internet. Android will helpfully detect these networks and show you that captive portal login page so you can sign in. Can we have our device be a captive portal?

No. Well, we can, but it doesn’t help.

You see, Android shows you the captive portal login page in a special cut-down “browser”. This captive portal browser (Apple calls it a CNA, for Captive Network Assistant, so I shall too… but we’re not talking about iOS here, which is an entirely different kettle of fish for a different article), this CNA isn’t really a browser. Obviously, our IoT device can’t provide a route to the internet; it’s not that it has one but won’t let you see it, like a hotel; it doesn’t have one at all. So you can’t fill anything into the CNA that will make that happen. If you try to switch back to the real browser in order to access the website being served from the device, Android says “aha, you closed the CNA and there’s still no route to the internet!” and disconnects you from the device wifi. That doesn’t work.

You can’t open a page in the real browser from the CNA, either. You used to be able to do some shenanigans with a link pointing to an intent:// URL but that doesn’t work any more.

Maybe we can run the whole web app inside the CNA? I mean, it’s a web browser, right? Not an ideal user experience, but it might be OK.

Nope. The CNA is a browser, but half of the features are turned off. There are a bunch of JavaScript APIs you don’t have access to, but the key thing for our purposes is that <input type="file"> elements don’t work; you can’t open a file picker to allow someone to choose a file to upload to the device. So that’s a non-starter too.

So, what do we do?

Unfortunately, it seems that the plan:

communicate a few hundred KB of data to the device locally, from an Android phone, using a web app rather than a native app, and without needing a cloud service

isn’t possible. It could be, but it isn’t; there are roadblocks in the way. So building the sort of IoT device which ought to exist isn’t actually possible, thanks very much Android. Thandroid. We have to compromise on one of the key points.

If you’re only communicating small amounts of data, then you can use Bluetooth LE for this. Sadly, this is not something you can really choose to compromise on; if your device plan only needs small volumes, great, but if it needs more then likely you can’t say “we just won’t send that data”. So that’s a no-go.

You can use a cloud service. That is: you teach the device about the local wifi network and then it talks to your cloud servers, and so does your phone; all data is round-tripped through those cloud servers. This is stupid: if the cloud servers go away, the device is a brick. Yes, lots of companies do this, but part of the reason they do it is that they want to be able to control whether you can access a device you’ve bought by running the connection via their own servers, so they can charge you subscription money for it. If you’re not doing that, then the servers are a constant ongoing cost and you can’t ever shut them down. And it’s a poor model, and aggressively consumer-hostile, to require someone to continue paying you to use a thing they purchased. Not doing that. Communication should be local; the device is in my house, I’m in my house, why the hell should talking to it require going via a server on the other side of the world?

You can use a native app. Native apps can avoid the whole “this wifi network has no internet access so I will disconnect you from it for your own good” approach by calling various native APIs in the connectivity manager. A web app can’t do this. So you’re somewhat forced into using a native app even though you really shouldn’t have to.

Or you can use something other than Android; iOS, it seems, has a workaround although it’s a bit dodgy.

None of these are good answers. Currently I’m looking at building native apps, which I really don’t think I should have to do; this is exactly the sort of thing that the web should be good at, and is available on every platform and to everyone, and I can’t use the web for it because a bunch of decisions have been taken to prevent that. There are good reasons for those decisions, certainly; I want my phone to be helpful when I’m on some stupid hotel wifi with a signin. But it’s also breaking a perfectly legitimate use case and forcing me to use native apps rather than the web.

Unless I’m wrong? If I am… this is where you tell me how to do it. Something with a pleasant user experience, that non-technical people can easily do. If it doesn’t match that, I ain’t doin’ it, just to warn you. But if you know how this can be done to meet my list of criteria, I’m happy to listen.

on August 18, 2020 12:13 PM

August 13, 2020

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.5 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]
on August 13, 2020 08:15 PM

August 07, 2020

The Kubuntu Team is happy to announce that Kubuntu 20.04.1 LTS “point release” is available today, featuring the beautiful KDE Plasma 5.18 LTS: simple by default, powerful when needed.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Kubuntu 20.04 LTS.

More details can be found in the release notes:

In order to download Kubuntu 20.04.1 LTS, visit:

Download Kubuntu

Users of Kubuntu 18.04 LTS will soon be offered an automatic upgrade to 20.04.1 LTS via Update Manager/Discover. For further information about upgrading, see:

As always, upgrades to the latest version of Kubuntu are entirely free of charge.

We recommend that all users read the 20.04.1 LTS release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#kubuntu on

on August 07, 2020 08:47 PM

August 06, 2020

Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 20.04.1 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu […]
on August 06, 2020 05:37 PM

July 29, 2020

I visited Valencia this July and took the opportunity to visit the offices of Slimbook, the Spanish manufacturer specializing in devices with Linux preinstalled. There I spent a couple of hours reviewing their Pro X model with an AMD Ryzen CPU. Build and design: I use a Dell Latitude E5470, and on picking up the Pro X the first thing that catches your attention is how light it is (1.1 kg). There are not many laptops on the market that can boast of weighing 1.1 kg, as I analyzed in another post.
on July 29, 2020 03:39 PM

July 26, 2020

I can already hear some readers saying that backups are an IT problem, and not a security problem. The reality, of course, is that they’re both. Information security is commonly thought of in terms of the CIA Triad – that is, Confidentiality, Integrity, and Availability, and it’s important to remember those concepts when dealing with backups.

We need look no farther than the troubles Garmin is having in dealing with a ransomware attack to find evidence that backups are critical. It’s unclear whether Garmin lacked adequate backups, had their backups ransomware’d, or is struggling to restore from backups. (It’s possible that they never considered an issue of this scale and simply aren’t resourced to restore this quickly, but given that the outage remains a complete outage after 4 days, I’d bet on one of those 3 conditions.)

So what does a security professional need to know about backups? Every organization is different, so I’m not going to try to provide a formula or tutorial for how to do backups, but rather discuss the security concepts in dealing with backups.

Before I got into security, I was both a Site Reliability Engineer (SRE) and a Systems Administrator, so I’ve had my opportunities to think about backups from a number of different directions. I’ll try to incorporate both sides of that here.


I want to deal with availability first, because that’s really what backups are for. Backups are your last line of defense in ensuring the availability of data and services. In theory, when the service is down, you should be able to restore from backups and get going (with the possibility of some data loss in between the time of the backup and the restoration).

Availability Threat: Disaster

Anyone who’s had to deal with backups has probably given some thoughts to the various disasters that can strike their primary operations. There are numerous disasters that can take out an entire datacenter, including fire, earthquake, tornadoes, flooding, and more. Just as a general rule, assume a datacenter will disappear, so you need a full copy of your data somewhere else as well as the ability to restore operations from that location.

This also means you can’t rely on anything in that datacenter for your restoration. We’ll talk about encryption under confidentiality, but suffice it to say that you need your backup configs, metadata (what backup is stored where), encryption keys, and more in a way you can access them if you lose that site. A lot of this would be great to store completely offline, such as in a safe in your office (assuming it’s sufficiently far from the datacenter to be unaffected).

Availability Threat: Malware

While replicating your data across two sites would likely protect against natural disasters, it won’t be enough to protect against malware. Whether ransomware or malware that just wants to destroy your data, network connectivity would place both sets of data at risk if you don’t take precautions.

One option is using backup software that provides versioning controlled by the provider. For small business or SOHO use, providers like BackBlaze and SpiderOak offer this. Another choice is using a cloud provider for storage and enabling a provider-enforced policy like Retention Policies on GCP.

Alternatively, using a “pull” backup configuration (where backups are “pulled” from the system by a backup system) can help with this as well. By having the backup system pull, malware on the serving system cannot access anything but the currently online data. You still need to ensure you retain older versions to avoid just backing up the ransomware’d data.

At the end of the day, what you want is to ensure that an infected system cannot delete, modify, or replace its own backups. Remember that anything a legitimate user or service on the machine can do can also be done by malware.

Another consideration is how the backup service is administered. If, for example, your backups are stored on servers joined to your Windows domain and a domain administrator or domain controller is compromised, then the malware can also hop to the backup server and encrypt/destroy the backups. If your backups are exposed as a writable share to any compromised machine, then, again, the malware can have its way with your backups.

Of course, offline backups can mitigate most of the risks as well. Placing backups onto tapes or hard drives that are physically disconnected is a great way to avoid exposing those backups to malware, but it also adds significant complexity to your backup scheme.

Availability Threat: Insider

You may also want to consider a malicious insider when designing your backup strategy. While many of the steps that protect against malware will help against an insider, considering who has access to your backup strategy and what unilateral access they have is important.

Using a 3rd party service with an enforced retention period can help, as can layers of backups administered by different individuals. Offline backups also make it harder for an individual to quickly destroy data.

Ensuring that the backup administrator is also not in a good position to destroy your live data can also help protect against their ability to have too much impact on your organization.


It’s critical to protect your data. Since many backup approaches involve entrusting your data to a 3rd party (whether it’s a cloud provider, an archival storage company, or a colocated data center), encryption is commonly employed to ensure confidentiality of the data stored. (Naturally, the key should not be stored with the 3rd party.)

Fun anecdote: at a previous employer, we had our backup tapes stored offsite by a 3rd party backup provider. The tapes were picked up and delivered in a locked box, and we were told that only we possessed the key to the box. I became “suspicious” when we added a new person to our authorized list (those who are allowed to request backups back from the vendor) and the person’s ID card was delivered inside our locked box. (Needless to say, you can’t trust statements like that from a vendor – not to mention that a plastic box is not a security boundary.)

All the data you back up should be encrypted with a key your organization controls, and you should have access to that key even if your network is completely trashed. I recommend storing it in a safe, preferably on a smartcard or other secure element. (Ideally in a couple of locations to hedge your bets.)

A fun bit about encrypted backups: if you use proper encryption, destroying the key is equivalent to destroying all the backups encrypted with that key. Some organizations do this as a way of expiring old data. You can have the data spread across all kinds of tapes, but once the key is destroyed, you will never be recovering that data. (On the other hand, if a malicious actor destroys your key, you will also never be recovering that data.)


Your backups need to be integrity protected – that is, protected against tampering or modification. This protects both against accidental modification (i.e., corruption from bad media, physical damage, etc.) and against tampering. While encryption makes it harder for an adversary to modify data in a controlled fashion, it is still possible. (This is a property of encryption known as malleability.)

Ideally, backups should be cryptographically signed. This prevents both accidental and malicious modification to the underlying data. A common approach is to build a manifest of cryptographic hashes (i.e., SHA-256) of each file and then sign that. The individual hashes can be computed in parallel and even on multiple hosts, then the finished manifest can be signed. (Possibly on a different host.)

These hashes can also be used to verify the backups as written to ensure against damage during the writing of backups. Only the signing machines need access to the private key (which should ideally be stored in a hardware-backed key storage mechanism like a smart card or TPM).
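A minimal sketch of that manifest approach (file names are illustrative; the gpg step is shown as a comment since key management is site-specific):

```shell
# Build a SHA-256 manifest for a backup set, then verify it at restore time.
set -e
backup_dir=$(mktemp -d)
manifest=$(mktemp)
printf 'database dump\n' > "$backup_dir/db.sql"
printf 'app config\n'    > "$backup_dir/app.conf"

# Hash every file into a sorted manifest. The hashing can run in parallel
# or across hosts; only the finished manifest needs signing, e.g.:
#   gpg --detach-sign "$manifest"    (on the trusted signing host)
( cd "$backup_dir" && find . -type f -print0 | sort -z | xargs -0 sha256sum ) > "$manifest"

# At restore time: verify the manifest signature first, then the data.
( cd "$backup_dir" && sha256sum -c "$manifest" )
```

If any file was corrupted or tampered with, sha256sum -c reports a FAILED line and exits non-zero, so the restore can stop before bad data goes live.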

Backup Strategy Testing

No matter what strategy you end up designing (which should be a joint function between the core IT group and the security team), the strategy needs to be evaluated and tested. Restoration needs to be tested, and threats need to be considered.

Practicing Restoration

This is likely to be far more a function of IT/production teams than of the security team, but you have to test restoration. I’ve seen too many backup plans without a tested restoration plan that wouldn’t work in practice.

Fails I’ve seen or heard of:

  • Relying on encryption to protect the backup, but then not having a copy of the encryption key at the time of restoration.
  • Using tapes for backups, but not having metadata of what was backed up on what tape. (Tapes are slow, imagine searching for the data you need.)

Tabletop Scenarios

When designing a backup strategy, I suggest doing a series of tabletop exercises to evaluate risks. Having a subset of the team play “red team” and attempt to destroy the data or access confidential data or apply ransomware to the network and the rest of the team evaluating controls to prevent this is a great way to discover gaps in your thought process.

Likewise, explicitly threat modeling ransomware into your backup strategy is critical, as we’ve seen increased use of this tactic by cybercriminals. Even though defenses to prevent ransomware getting on your network in the first place would be ideal, real security involves defense in depth, and having workable backups is a key mitigation for the risks posed by ransomware.

on July 26, 2020 07:00 AM

July 24, 2020

Firefox Beta via Flatpak

Bryan Quigley

What I've tried.

  1. Firefox beta as a snap. (Definitely easy to install. But not as quick, and harder to use for managing files - it makes its own Downloads directory, etc.)
  2. Firefox (stock) with custom AppArmor confinement. (Fun to do once, but the future is clearly using portals for file access, etc)
  3. Firefox beta as a Flatpak.

I've now been running Firefox as a Flatpak for over 4 months and have not had any blocking issues.

Getting it installed

Flatpak is already installed on at least Fedora Silverblue (which comes with Firefox with some Fedora-specific optimizations) and Endless OS.

Follow Quick Setup. This walks you through installing the Flatpak package as well as the Flathub repo. Now you could easily install Firefox with just 'flatpak install firefox' if you want the Stable Firefox.

To get the beta you need to add the Flathub Beta repo. You can just run:

sudo flatpak remote-add flathub-beta https://

Then, to install Firefox from it, run (you can also choose to install as a user, without sudo, by adding the --user flag):

sudo flatpak install flathub-beta firefox

Once you run the above command it will ask you which Firefox to install, install any dependencies, tell you the permissions it will use, and finally install.

Looking for matches…
Similar refs found for ‘firefox’ in remote ‘flathub-beta’ (system):

   3) app/org.mozilla.firefox/x86_64/beta

Which do you want to use (0 to abort)? [0-3]: 3
Required runtime for org.mozilla.firefox/x86_64/beta (runtime/org.freedesktop.Platform/x86_64/19.08) found in remote flathub
Do you want to install it? [Y/n]: y

org.mozilla.firefox permissions:
    ipc                          network       pcsc       pulseaudio       x11       devices       file access [1]       dbus access [2]
    system dbus access [3]

    [1] xdg-download
    [2] org.a11y.Bus, org.freedesktop.FileManager1, org.freedesktop.Notifications, org.freedesktop.ScreenSaver, org.gnome.SessionManager, org.gtk.vfs.*,
    [3] org.freedesktop.NetworkManager

        ID                                             Branch            Op            Remote                  Download
 1. [—] org.freedesktop.Platform.GL.default            19.08             i             flathub                    56.1 MB / 89.1 MB
 2. [ ] org.freedesktop.Platform.Locale                19.08             i             flathub                 < 318.3 MB (partial)
 3. [ ] org.freedesktop.Platform.openh264              2.0               i             flathub                   < 1.5 MB
 4. [ ] org.gtk.Gtk3theme.Arc-Darker                   3.22              i             flathub                 < 145.9 kB
 5. [ ] org.freedesktop.Platform                       19.08             i             flathub                 < 238.5 MB
 6. [ ] org.mozilla.firefox.Locale                     beta              i             flathub-beta             < 48.3 MB (partial)
 7. [ ] org.mozilla.firefox                            beta              i             flathub-beta             < 79.1 MB

The first 5 dependencies downloaded are required by most applications and are shared, so the actual size of Firefox is more like 130MB.


  • You can't browse local files via file:/// URLs in the browser (except for ~/Downloads). All local files need to be opened via the Open File dialog, which automatically adds the needed permissions.
  • You can enable Wayland as well with 'sudo flatpak override --env=GDK_BACKEND=wayland org.mozilla.firefox' (Wayland doesn't work with the NVIDIA driver and GNOME Shell in my setup, though)

What Works?

Everything I want which includes in no particular order:

  • Netflix (some older versions had issues with DRM, IIUC)
  • WebGL (with my Nvidia card and proprietary driver. Flatpak installs the necessary bits to get it working based on your video card)
  • It's speedy; it starts as quickly as I would normally expect
  • Using the file browser for ANY file on my system. You can upload your private SSH keys if you really need to, but you need to explicitly select the file (and I'm not sure how you unshare it).
  • Opening apps directly via Firefox (aka I download a PDF and I want it to open in Evince - this does use portals for confinement).
  • Offline mode

What could use work?

  • Some flatpak commands can figure out what just "Firefox" means, while others want the full org.mozilla.firefox
  • If you want to run Firefox from the command line, you need to run it as org.mozilla.firefox. This is the same for all Flatpaks, although you can make an alias.
  • It would be more convenient if Beta releases were part of the main Flathub (or advertised more)
  • If you change your Downloads directory in Firefox, you have to update the permissions in Flatpak as well, or downloads won't work. If you use Save As… it will work fine, though.
  • The flatpak permission-* commands let you see what permissions are defined, but resetting or removing doesn't seem to actually work.
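For the command-line point above, the alias is a one-liner in your shell profile (the short name is just a personal choice):

```shell
# e.g. in ~/.bashrc: let plain 'firefox' launch the Flatpak build
alias firefox='flatpak run org.mozilla.firefox'
```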

If you think you found a Flatpak specific Mozilla bug, the first place to look is Mozilla Bug #1278719 as many bugs are reported against this one bug for tracking purposes.


Add a comment by making a Pull Request to this post.

on July 24, 2020 09:20 PM

July 22, 2020

Wrong About Signal

Bryan Quigley

Updated: Riot was renamed to Element. XMPP info added in a comment. And Signal still doesn't let you unregister.

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed it to be quick and easy for users to get set up. Using existing phone numbers makes it easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: where profile data lives - your devices]

After PIN

If you lose your phone you can use your PIN to recover some parts of your profile and other information. I am unsure whether the Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components of this setup, and it's clear that Signal wanted to make this as secure as possible. They wanted to make this a distributed setup so they don't even need to be the only ones hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: where profile data lives - your devices and Signal servers]

PIN Reuse

By mandating user chosen PINs, my guess is the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get mandated to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure, but their system to unregister when you uninstall has been down since June 28th at least (last tried on July 22nd). Without that, when you uninstall Signal it means:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive


In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. There is a possibility that the push for mandatory cloud-based PINs, despite all of the pushback, is that Signal knows of active attacks that these PINs would protect against. It likely would be related to using phone numbers.

I'm trying out Element, which uses the open Matrix network. I'm not actively encouraging others to join me, but just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments


kousu posted

In the XMPP world, Conversations has been leading the charge to modernize XMPP, with an index of popular public groups and a server validator. XMPP is mobile-battery friendly, and supports server-side logs wrapped in strong, multi-device encryption (in contrast to Signal, your keys never leave your devices!). Video calling even works now. It can interact with IRC and Riot (though the Riot bridge is less developed). There is a beautiful Windows client, a beautiful Linux client and a beautiful terminal client, two good Android clients, a beautiful web client which even supports video calling (and two others). It is easy to get an account from one of the many public servers. You can also set up your own with a little bit of reading. Snikket is building a one-click Slack-like personal-group server, with file-sharing, welcome channels and shared contacts, or you can integrate it with NextCloud. XMPP has solved a lot of problems over its long history, and might just outlast all the centralized services.

Bryan Reply

I totally forgot about XMPP, thanks for sharing!

on July 22, 2020 08:18 PM

Major Backports Update

Ubuntu Studio

For those of you using the Ubuntu Studio Backports Repository, we recently had a major update of some tools. If you’ve been using the Backports PPA, you may have noticed some breakage when updating via normal means. To update if you have the Backports PPA enabled, make sure to do... Continue reading
on July 22, 2020 07:38 PM

Up and down the hillside

Søren Bredlund Caspersen

We just got home from a week of holidays in Norway, with lots of spectacular scenery and fresh air.

Energy consumption up and down the mountain.

The cabin was located about 900 meters above sea level. The first 600 meters of climbing from Oslo, over the course of a few hours, went by almost unnoticed. The last ~300 meters were, for us from flat Denmark, a bit more unusual.

Notice how the energy consumption of our Tesla Model 3 rose significantly during the last approx. 600 meters of climbing to the cabin, and how the trip downhill actually charged the battery instead of using energy (the green area on the graph).

on July 22, 2020 05:03 PM

July 21, 2020

A very common problem in GStreamer, especially when working with live network streams, is that the source might just fail at some point. Your own network might have problems, the source of the stream might have problems, …

Without any special handling of such situations, the default behaviour in GStreamer is to simply report an error and let the application worry about handling it. The application might for example want to restart the stream, or it might simply want to show an error to the user, or it might want to show a fallback stream instead, telling the user that the stream is currently not available and then seamlessly switch back to the stream once it comes back.

Implementing all of the aforementioned is quite some effort, especially to do it in a robust way. To make it easier for applications I implemented a new plugin called fallbackswitch that contains two elements to automate this.

It is part of the GStreamer Rust plugins and is also included in the recent 0.6.0 release, which can also be found on the Rust package (“crate”) repository.


For using the plugin you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.14 or newer. The former you can get via rustup, for example, if you don’t have it yet; the latter either from your Linux distribution or by using the macOS, Windows, etc. binaries that are provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the utils/fallbackswitch directory and copying the resulting shared library (.so, .dll, or .dylib, depending on platform) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.
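Spelled out, the build-and-install steps might look like this (the library file name and checkout layout are assumptions based on the gst-plugins-rs repository; adjust for your platform):

```shell
# From a checkout of gst-plugins-rs:
cd utils/fallbackswitch
cargo build --release

# Copy the built plugin where GStreamer looks for user plugins.
mkdir -p ~/.local/share/gstreamer-1.0/plugins
cp ../../target/release/libgstfallbackswitch.so \
   ~/.local/share/gstreamer-1.0/plugins/

# Check that GStreamer now finds the new elements.
gst-inspect-1.0 fallbackswitch
```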


The first of the two elements is fallbackswitch. It acts as a filter that can be placed into any kind of live stream. It consumes one main stream (which must be live) and outputs this stream as-is if everything works well. Based on the timeout property it detects if this main stream didn’t have any activity for the configured amount of time, or everything arrived too late for that long, and then seamlessly switches to a fallback stream. The fallback stream is the second input of the element and does not have to be live (but it can be).

Switching between the main stream and the fallback stream works not only for raw audio and video streams but also for compressed formats. The element takes constraints like keyframes into account when switching and, if necessary and possible, also requests new keyframes from the sources.

For example, to play the Sintel trailer over the network and display a test pattern if it doesn’t produce any data, the following pipeline can be constructed:

gst-launch-1.0 souphttpsrc location= ! \
    decodebin ! identity sync=true ! fallbackswitch name=s ! videoconvert ! autovideosink \
    videotestsrc ! s.fallback_sink

Note the identity sync=true in the main stream here, as we have to convert it into an actual live stream first.

Now when running the above command and disconnecting from the network, the video should freeze at some point and after 5 seconds a test pattern should be displayed.

However, when using fallbackswitch the application still has to take care of handling actual errors from the main source and possibly restarting it. Waiting a bit longer after disconnecting the network with the above command will result in an error being reported, which then stops the pipeline.

To make that part easier there is the second element.


The second element is fallbacksrc, and as the name suggests it is an actual source element. When using it, the main source can be configured via a URI or by providing a custom source element. Internally it then takes care of buffering the source, converting non-live streams into live streams and transparently restarting the source on errors. The various timeouts for this can be configured via properties.

Unlike fallbackswitch, it also handles audio and video at the same time and demuxes/decodes the streams.

Currently the only fallback streams that can be configured are still images for video. For audio the element will always output silence for now, and if no fallback image is configured for video it outputs black instead. In the future I would like to add support for arbitrary fallback streams, which hopefully shouldn’t be too hard. The basic infrastructure for it is already there.

To adapt our previous example to use it, displaying a JPEG image whenever the source does not produce any new data, the following can be done:

gst-launch-1.0 fallbacksrc uri= \
    fallback-uri=file:///path/to/some/jpg ! videoconvert ! autovideosink

Now when disconnecting the network, after a while (longer than before, because fallbacksrc does additional buffering for non-live network streams) the fallback image should be shown. Unlike before, waiting longer will not lead to an error, and reconnecting the network causes the video to reappear. However, as this is not an actual live stream, playback would currently start again from the beginning. Seeking back to the previous position would be another potential feature that could be added in the future.
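The exact names and defaults of those timeout and buffering properties are easiest to look up with gst-inspect-1.0 once the plugin is installed:

```shell
# List all properties, pads and capabilities of the fallbacksrc element.
gst-inspect-1.0 fallbacksrc

# The same works for the filter element:
gst-inspect-1.0 fallbackswitch
```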

Overall, these two elements should make it easier for applications to handle errors in live network sources. While they are still relatively minimal feature-wise, they should be usable in various real scenarios and are already used in production.

As usual, if you run into any problems or are missing some features, please create an issue in the GStreamer bug tracker.

on July 21, 2020 01:12 PM