August 11, 2020

GameMode in Debian

Jonathan Carter

What is GameMode, what does it do?

About two years ago, I ran into some bugs running a game on Debian, so I installed Windows 10 on a spare computer and ran it there. I learned that when you launch a game in Windows 10, it automatically disables notifications and the screensaver, reduces power-saving measures, and gives the game maximum priority. I thought “Oh, that’s actually quite nice, but we probably won’t see that kind of integration on Linux any time soon”. The very next week, I read the initial announcement of GameMode, a tool from Feral Interactive that does a bunch of tricks to maximise performance for games running on Linux.

When GameMode is invoked it:

  • Sets the CPU frequency governor from ‘powersave’ to ‘performance’
  • Raises the I/O priority of the game process
  • Optionally adjusts the nice value of the game process
  • Inhibits the screensaver
  • Tweaks the kernel scheduler to enable soft real-time capabilities (handled by the MuQSS kernel scheduler, if available in your kernel)
  • Sets GPU performance mode (NVIDIA and AMD)
  • Attempts GPU overclocking (on supported NVIDIA cards)
  • Runs custom pre/post run scripts. You might want to run a script to disable your Ethereum mining or suspend VMs when you start a game, and resume it all once you quit.

How GameMode is invoked

Some newer games (proprietary games like “Rise of the Tomb Raider”, “Total War Saga: Thrones of Britannia”, “Total War: WARHAMMER II”, “DiRT 4” and “Total War: Three Kingdoms”) will automatically invoke GameMode if it’s installed. For games that don’t, you can manually invoke it using the gamemoderun command.
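For example, to run a game (or any other program) under GameMode from a terminal, just prefix the command with gamemoderun (the binary name below is only a placeholder). In Steam, the same wrapper can be set in a game’s launch options as “gamemoderun %command%”.

gamemoderun ./my-game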

Lutris is a tool that makes it easy to install and run games on Linux, and it also integrates with GameMode. (Lutris is currently being packaged for Debian; hopefully it will land in time for Bullseye.)

Screenshot of Lutris, a tool that makes it easy to install your non-Linux games, which also integrates with GameMode.

GameMode in Debian

The latest GameMode is packaged in Debian (Stephan Lachnit and I maintain it in the Debian Games Team) and it’s also available for Debian 10 (Buster) via buster-backports. All you need to do to get up and running with GameMode is to install the ‘gamemode’ package.
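Installing it is a one-liner with apt (on Debian 10, add -t buster-backports to pull it from backports instead, assuming that repository is enabled):

sudo apt install gamemode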

GameMode in Debian supports both 64-bit and 32-bit mode, so running it with older games (and many proprietary games) still works. Some distributions (like Arch Linux) have dropped 32-bit support, so 32-bit games there lose any kind of GameMode integration even if you can get them running via other wrappers.

We also include a binary called ‘gamemode-simulate-game’ (installed under /usr/games/). This is a minimalistic program that will invoke GameMode automatically for 10 seconds and then exit without an error if it was successful. Its source code might be useful if you’d like to add GameMode support to your game, or patch a game in Debian to automatically invoke it.
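A quick way to verify your setup is to start the simulator and, while it runs, look at the CPU frequency governor it is supposed to switch (on most systems it should read ‘performance’ while the simulator is active and revert afterwards):

/usr/games/gamemode-simulate-game &
cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor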

In Debian we install GameMode’s example config file to /etc/gamemode.ini, where a user can customise system-wide preferences; alternatively, they can place a copy of it in ~/.gamemode.ini with their personal preferences. In this config file, you can also choose to explicitly allow or deny games.
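A minimal sketch of what such a config can look like (the key names follow the upstream example config; check the comments in /etc/gamemode.ini for the options available in your version, and treat the values below as placeholders):

[general]
renice=10
inhibit_screensaver=1

[filter]
;whitelist=my-game

[custom]
start=notify-send "GameMode started"
end=notify-send "GameMode ended"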

GameMode might also be useful for many pieces of software that aren’t games. I haven’t done any benchmarks on such software yet, but it might be great for users who use CAD programs or use a combination of their CPU/GPU to crunch a large amount of data.

I’ve also packaged an extension for GNOME called gamemode-extension. The Debian package is called ‘gnome-shell-extension-gamemode’. You’ll need to enable it using gnome-tweaks after installation; it will then display a green controller icon in your notification area whenever GameMode is active. It’s only in testing/bullseye since it relies on a newer gnome-shell than what’s available in buster.

Running gamemode-simulate-game, with the shell extension showing that it’s activated in the top left corner.
on August 11, 2020 03:35 PM

The Kubernetes 1.19 release candidate is now available for download and experimentation ahead of general availability later this month. You can try it now with MicroK8s.

To get the latest Kubernetes on your machine, install MicroK8s and get a lightweight, zero-ops K8s cluster in no time:

sudo snap install microk8s --channel=1.19/candidate --classic

Or install from https://snapcraft.io/microk8s and select the 1.19/candidate channel.
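Once installed, you can confirm the cluster is up and check the version with the bundled kubectl (standard MicroK8s commands):

microk8s status --wait-ready
microk8s kubectl get nodes
microk8s kubectl version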

You can install MicroK8s on Ubuntu and all major Linux distributions or on Windows and macOS using native installers.

For any questions or support requests on Kubernetes and MicroK8s,  contact us or find our Kubernetes team on Discourse or Slack (#microk8s).

on August 11, 2020 02:34 PM

August 10, 2020

Welcome to the Ubuntu Weekly Newsletter, Issue 643 for the week of August 2 – 8, 2020. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on August 10, 2020 10:50 PM

August 07, 2020

The Kubuntu Team is happy to announce that Kubuntu 20.04.1 LTS “point release” is available today, featuring the beautiful KDE Plasma 5.18 LTS: simple by default, powerful when needed.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Kubuntu 20.04 LTS.

More details can be found in the release notes: https://wiki.ubuntu.com/FocalFossa/ReleaseNotes/Kubuntu

In order to download Kubuntu 20.04.1 LTS, visit:

Download Kubuntu

Users of Kubuntu 18.04 LTS will soon be offered an automatic upgrade to 20.04.1 LTS via Update Manager/Discover. For further information about upgrading, see:

https://help.ubuntu.com/community/FocalUpgrades

As always, upgrades to the latest version of Kubuntu are entirely free of charge.

We recommend that all users read the 20.04.1 LTS release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/FocalFossa/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#kubuntu on irc.freenode.net
https://lists.ubuntu.com/mailman/listinfo/kubuntu-users
https://www.kubuntuforums.net/
https://www.reddit.com/r/Kubuntu/

on August 07, 2020 08:47 PM

WSLConf is the first and only community conference dedicated to Windows Subsystem for Linux (WSL).

Canonical proudly sponsored the first WSLConf in March 2020. Since then Ubuntu 20.04 on WSL 2 has arrived and support for AI/ML workloads is available in Windows Insiders.

With WSL growing, WSLConf community organizers did not want to wait another year to gather the community together. WSLConf is returning in September 2020 in virtual form, as microWSLConf.

microWSLConf will feature a virtual hallway track for unscheduled conversations by attendees and breakout sessions for affinity groups.

The schedule for microWSLConf has been broken into two shorter sessions expanding the geographic reach of the virtual conference to Europe, Africa, and Asia.

Session One

NYC: September 9 9pm-12am
SEA: September 10 6pm-9pm
LON: September 10 2am-5am
HKG: September 10 9am-12pm
Session Two

NYC: September 10 10am-1pm
SEA: September 10 7am-10am
LON: September 10 3pm-6pm
HKG: September 10 10pm-12am

microWSLConf is still accepting presentation proposals through 15 August. Submit a presentation proposal at wslconf.dev.

Planned speakers to date include:

Hayden Barnes
Engineering Manager for Ubuntu on WSL at Canonical
Microsoft MVP

Nuno do Carmo
Analyst, Ferring Pharmaceuticals
CNCF Ambassador, Docker Captain, and Microsoft MVP

Carlos Ramirez
CEO, Whitewater Foundry, publishers of Pengwin and Raft

Mario Hewardt
Principal Program Manager, Microsoft, for Sysinternals for Linux

Kohei Ota
Solutions Architect, Hewlett Packard Enterprise

Jérôme Laban
Chief Technology Officer, Uno Platform

Carlos Lopez
Business System Analyst, Conduent, and SQL Server expert
Microsoft MVP

Join the WSLConf Telegram chat.

All the streamed content from the first WSLConf is available on the Ubuntu YouTube page:



on August 07, 2020 04:16 PM

August 06, 2020

The Ubuntu team is pleased to announce the release of Ubuntu 20.04.1 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 20.04 LTS.

Kubuntu 20.04.1 LTS, Ubuntu Budgie 20.04.1 LTS, Ubuntu MATE 20.04.1 LTS, Lubuntu 20.04.1 LTS, Ubuntu Kylin 20.04.1 LTS, Ubuntu Studio 20.04.1 LTS, and Xubuntu 20.04.1 LTS are also now available. More details can be found in their individual release notes:

https://wiki.ubuntu.com/FocalFossa/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Core. All the remaining flavours will be supported for 3 years. Additional security support is available with ESM (Extended Security Maintenance).

To get Ubuntu 20.04.1 LTS

In order to download Ubuntu 20.04.1 LTS, visit:

https://ubuntu.com/download

Users of Ubuntu 18.04 LTS will soon be offered an automatic upgrade to 20.04.1 LTS via Update Manager. For further information about upgrading, see:

https://help.ubuntu.com/community/FocalUpgrades
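On servers, or if you prefer the command line, the same upgrade can be started manually with the standard Ubuntu tooling once the upgrade is offered (make sure the system is fully up to date first):

sudo apt update && sudo apt upgrade
sudo do-release-upgrade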

As always, upgrades to the latest version of Ubuntu are entirely free of charge.

We recommend that all users read the 20.04.1 LTS release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:

https://wiki.ubuntu.com/FocalFossa/ReleaseNotes

If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net
https://lists.ubuntu.com/mailman/listinfo/ubuntu-users
https://ubuntuforums.org
https://askubuntu.com

Help Shape Ubuntu

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:

https://discourse.ubuntu.com/contribute

About Ubuntu

Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:

https://ubuntu.com/support

More Information

You can learn more about Ubuntu and about this release on our website listed below:

https://ubuntu.com/

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

https://lists.ubuntu.com/mailman/listinfo/ubuntu-announce


Originally posted to the ubuntu-announce mailing list on Thu Aug 6 17:31:55 UTC 2020 by Łukasz ‘sil2100’ Zemczak, on behalf of the Ubuntu Release Team

on August 06, 2020 09:46 PM

Ep 102 – Jornalista Pintor

Podcast Ubuntu Portugal

Diogo’s whereabouts are uncertain and Carrondo is in another time zone, but the PUP must go on! This week we bring you news from the end of July and the much-talked-about COVID contact-tracing app… Good health to everyone, and enjoy one more chapter of this adventure.

You know the drill: listen, subscribe and share!

  • https://9to5linux.com/meet-ubuntued-20-04-an-educational-ubuntu-flavor-for-kids-schools-and-universities
  • https://discourse.ubuntu.com/t/ubuntu-education-ubuntued/17063
  • https://9to5linux.com/meet-ubuntu-retro-remix-an-ubuntu-distro-to-turn-your-raspberry-pi-into-a-retro-gaming-pc
  • https://ubuntuunity.org/
  • https://rastreamento.pt/
  • https://ansol.org/STAYAWAY-COVID
  • https://www.humblebundle.com/books/raspberry-pi-raspberry-pi-press-books?partner=pup

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço (Senhor Podcast).

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on August 06, 2020 09:45 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 20.04.1 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight Qt Desktop Environment (LXQt). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock-solid Ubuntu […]
on August 06, 2020 05:37 PM

S13E20 – Bananas on board

Ubuntu Podcast from the UK LoCo

This week we’ve been building Monster Joysticks and playing Red Alert. We discuss the proliferation of Ubuntu Remixes, bring you some GUI love and go over all your wonderful feedback.

It’s Season 13 Episode 20 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on August 06, 2020 02:00 PM

August 01, 2020

Repairing smartphones

Behind the Circle

Phone repair

Dropped your smartphone? Then there is a good chance it is damaged. Is the screen of your smartphone damaged? Then you need to find a company that can repair it. Naturally, you will look for a company that can repair the smartphone quickly and at a modest price, and that uses only parts of excellent quality for the repair. Residents of Nijmegen and the surrounding area can have their smartphone repaired by Phone shop Nijmegen.

 

Preventing damage

A repair company that fixes smartphones may find it terrible that a smartphone screen gets damaged because someone dropped the phone. Those companies do more than just repair the smartphone: they give customers who have had their screen repaired a tempered glass protector as a gift. That way, future damage can probably be prevented.

 

You will be well looked after

Found a shop that can repair your smartphone? Such a shop doesn’t just repair smartphones and tablets; you can also buy all kinds of accessories there. Popular accessories are batteries, screen protectors and phone cases. Need a different accessory? The specialists working in the shop will gladly help you with that too. Buying a new smartphone? You’ll get expert advice. Having your smartphone repaired? Repairs can often be done on the spot, so customers can watch their smartphone being repaired, or run an errand or go for a walk in the meantime. They usually don’t have to wait long for their repaired smartphone: most repairs take no more than 30 minutes.

 

There are shops with a wide range of products

A shop that sells smartphones often has a large assortment. There are shops with more than 5000 phone, tablet and smartphone accessories. Naturally, you will also find accessories for the newest devices there; sometimes those accessories are available even before the newest devices themselves. Rather not buy a new device? You can also buy a used or refurbished smartphone.

 

The post Het repareren van smartphones appeared first on behindthecircle.

on August 01, 2020 09:38 AM

July 31, 2020

Here are my uploads for the month of July, which is just a part of my free software activities, I’ll try to catch up on the rest in upcoming posts. I haven’t indulged in online conferences much over the last few months, but this month I attended the virtual editions of Guadec 2020 and HOPE 2020. HOPE isn’t something I knew about before and I enjoyed it a lot, you can find their videos on archive.org.

Debian Uploads

2020-07-05: Sponsor backport gamemode-1.5.1-5 for Debian buster-backports.

2020-07-06: Sponsor package piper (0.5.1-1) for Debian unstable (mentors.debian.net request).

2020-07-14: Upload package speedtest-cli (2.0.2-1+deb10u1) to Debian buster (Closes: #940165, #965116).

2020-07-15: Upload package calamares (3.2.27-1) to Debian unstable.

2020-07-15: Merge MR#1 for gnome-shell-extension-dash-to-panel.

2020-07-15: Upload package gnome-shell-extension-dash-to-panel (38-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-disconnect-wifi (25-1) to Debian unstable.

2020-07-15: Upload package gnome-shell-extension-draw-on-your-screen (6.1-1) to Debian unstable.

2020-07-15: Upload package xabacus (8.2.8-1) to Debian unstable.

2020-07-15: Upload package s-tui (1.0.2-1) to Debian unstable.

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u2) to Debian buster (Closes: #934503, #934504).

2020-07-15: Upload package calamares-settings-debian (10.0.2-1+deb10u3) to Debian buster (Closes: #959541, #965117).

2020-07-15: Upload package calamares-settings-debian (11.0.2-1) to Debian unstable.

2020-07-19: Upload package bluefish (2.2.11+svn-r8872-1) to Debian unstable (Closes: #593413, #593427, #692284, #730543, #857330, #892502, #951143).

2020-07-19: Upload package bundlewrap (4.0.0-1) to Debian unstable.

2020-07-20: Upload package bluefish (2.2.11+svn-r8872-1) to Debian unstable (Closes: #965332).

2020-07-22: Upload package calamares (3.2.27-1~bpo10+1) to Debian buster-backports.

2020-07-24: Upload package bluefish (2.2.11_svn-r8872-3) to Debian unstable (Closes: #965944).

on July 31, 2020 05:01 PM

Full Circle Magazine #159

Full Circle Magazine

This month:
* Command & Conquer
* How-To : Python, Podcast Production, and Rawtherapee
* Graphics : Inkscape
* Graphics : Krita for Old Photos
* Linux Loopback
* Everyday Ubuntu
* Ubports Touch
* Review : Ubuntu Unity 20.04
* Ubuntu Games : Mable And The Wood
plus: News, My Opinion, The Daily Waddle, Q&A, and more.

Get it while it’s hot: https://fullcirclemagazine.org/issue-159/

on July 31, 2020 04:07 PM

S13E19 – Three manholes

Ubuntu Podcast from the UK LoCo

This week we’ve been going retro; making an Ubuntu Retro Remix and playing ET:Legacy. We discuss the new release of digiKam, Intel GPU driver tweaks, Ubuntu Web Remix, Thunderbird 78 and Mir 2.0! We also round up our picks from the tech news.

It’s Season 13 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on July 31, 2020 11:00 AM

July 30, 2020

Ep 101 – O Carteiro Cantor

Podcast Ubuntu Portugal

In the last episode of July we talked about competing podcasts, festivals that may create even more competition, and sites that aren’t competing with anyone because they can’t manage to sell their products. All this and much more in this chapter of the PUP.

You know the drill: listen, subscribe and share!

  • https://www.youtube.com/watch?v=UDM_gRVBv94
  • https://tosdr.org/
  • https://www.rncbc.org/
  • https://linux.fe.up.pt/
  • https://www.ajustmentn.site
  • https://twitter.com/Portcasts/status/1287402830509244418
  • https://help.ubuntu.com/stable/ubuntu-help/screen-shot-record.html
  • https://www.humblebundle.com/books/programming-for-makers-make-co-books?partner=pup

Support

You can support the podcast by using the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts of it depending on whether you pay 1 or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little extra, since you have the option of paying as much as you like.

If you are interested in other bundles not listed in the show notes, use the link https://www.humblebundle.com/?partner=PUP and you will also be supporting us.

Attribution and licences

This episode was produced by Diogo Constantino and Tiago Carrondo and edited by Alexandre Carrapiço (Senhor Podcast).

The intro music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode and the image used in it are licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on July 30, 2020 09:45 PM

July 29, 2020

Mitsubishi Airco

Behind the Circle

Air conditioning

When the mercury steadily creeps up during the warm summer months, you will notice that you have less energy and sleep worse, which in turn means you cannot start your new day well rested. Fans can only do so much to cool a room, but if you can regulate the temperature in a room yourself, that is of course much easier. A top brand in the air-conditioning segment is Mitsubishi Airco, for when you are looking for a high-quality climate control system.

 

The advantages of a good air-conditioning system

Good climate control is designed for easy use, good air circulation and high air quality. The innovation of top brands is that they also contribute to higher air quality in the workplace. In commercial buildings, for example, the air quality can be substandard because of inadequate or even poor ventilation. This in turn can affect visitors to your building, but especially employees who are exposed to it for longer periods. It is therefore important to maintain a healthy and pleasant climate. A good air-conditioning system not only cools the room efficiently, but also contributes to good circulation and ventilation. An air-conditioning system consists of an indoor and an outdoor unit, connected to each other by pipes. Since the unit draws in outside air, cools it and circulates it through a room, such a system also contributes to the ventilation of a room.

 

Having a Mitsubishi Airco installed to fit your needs

For every building or room, the situation is never quite the same. When installing a climate control system, a company therefore needs to look at how your wishes match up with reality. There are several factors that influence the effectiveness of the air conditioning and the system you will consequently need. When a building has many windows, the temperature will fluctuate. In addition, a room can feel too warm or too humid, which is bad for your health. A good air-conditioning system therefore also ensures an optimal moisture balance.

The post Mitsubishi Airco appeared first on behindthecircle.

on July 29, 2020 03:39 PM
I visited Valencia this July and took the opportunity to visit the offices of Slimbook, the Spanish manufacturer specialising in devices with Linux preinstalled. I spent a couple of hours there reviewing their Pro X model with an AMD Ryzen CPU. Construction and design: I use a Dell Latitude E5470, and when picking up the Pro X the first thing that stands out is how light it is (1.1 kg). There aren’t many laptops on the market that can boast a weight of 1.1 kg, as I analysed in another post.
on July 29, 2020 03:39 PM

July 26, 2020

Ubuntu 20.04 ships the latest version of Nautilus, and file management on the desktop is handled through a GNOME extension. While it correctly emulates managing files on the desktop, it has some important shortcomings, such as not being able to drag from Nautilus to the desktop, missing keyboard shortcuts, and so on. In this video I show all the steps to replace Nautilus with Nemo and get a fully functional desktop:
on July 26, 2020 08:18 AM

I can already hear some readers saying that backups are an IT problem, and not a security problem. The reality, of course, is that they’re both. Information security is commonly thought of in terms of the CIA Triad – that is, Confidentiality, Integrity, and Availability, and it’s important to remember those concepts when dealing with backups.

We need look no farther than the troubles Garmin is having in dealing with a ransomware attack to find evidence that backups are critical. It’s unclear whether Garmin lacked adequate backups, had their backups ransomware’d, or is struggling to restore from backups. (It’s possible that they never considered an issue of this scale and simply aren’t resourced to restore this quickly, but given that the outage remains a complete outage after 4 days, I’d bet on one of those 3 conditions.)

So what does a security professional need to know about backups? Every organization is different, so I’m not going to try to provide a formula or tutorial for how to do backups, but rather discuss the security concepts in dealing with backups.

Before I got into security, I was both a Site Reliability Engineer (SRE) and a Systems Administrator, so I’ve had my opportunities to think about backups from a number of different directions. I’ll try to incorporate both sides of that here.

Availability

I want to deal with availability first, because that’s really what backups are for. Backups are your last line of defense in ensuring the availability of data and services. In theory, when the service is down, you should be able to restore from backups and get going (with the possibility of some data loss in between the time of the backup and the restoration).

Availability Threat: Disaster

Anyone who’s had to deal with backups has probably given some thoughts to the various disasters that can strike their primary operations. There are numerous disasters that can take out an entire datacenter, including fire, earthquake, tornadoes, flooding, and more. Just as a general rule, assume a datacenter will disappear, so you need a full copy of your data somewhere else as well as the ability to restore operations from that location.

This also means you can’t rely on anything in that datacenter for your restoration. We’ll talk about encryption under confidentiality, but suffice it to say that you need your backup configs, metadata (what backup is stored where), encryption keys, and more in a way you can access them if you lose that site. A lot of this would be great to store completely offline, such as in a safe in your office (assuming it’s sufficiently far from the datacenter to be unaffected).

Availability Threat: Malware

While replicating your data across two sites would likely protect against natural disasters, it won’t be enough to protect against malware. Whether ransomware or malware that just wants to destroy your data, network connectivity would place both sets of data at risk if you don’t take precautions.

One option is using backup software that provides versioning controlled by the provider. For small business or SOHO use, providers like BackBlaze and SpiderOak offer this. Another choice is using a cloud provider for storage and enabling a provider-enforced policy like Retention Policies on GCP.
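As a concrete sketch of the provider-enforced approach on GCP (the bucket name is a placeholder; locking the policy makes it permanent, so use with care):

gsutil retention set 90d gs://example-backup-bucket
gsutil retention lock gs://example-backup-bucket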

Alternatively, using a “pull” backup configuration (where backups are “pulled” from the system by a backup system) can help with this as well. By having the backup system pull, malware on the serving system cannot access anything but the currently online data. You still need to ensure you retain older versions to avoid just backing up the ransomware’d data.

At the end of the day, what you want is to ensure that an infected system cannot delete, modify, or replace its own backups. Remember that anything a legitimate user or service on the machine can do can also be done by malware.

Another consideration is how the backup service is administered. If, for example, your backups are stored on servers joined to your Windows domain and a domain administrator or domain controller is compromised, then the malware can also hop to the backup server and encrypt/destroy the backups. If your backups are exposed as a writable share to any compromised machine, then, again, the malware can have its way with your backups.

Of course, offline backups can mitigate most of the risks as well. Placing backups onto tapes or hard drives that are physically disconnected is a great way to avoid exposing those backups to malware, but it also adds significant complexity to your backup scheme.

Availability Threat: Insider

You may also want to consider a malicious insider when designing your backup strategy. While many of the steps that protect against malware will help against an insider, considering who has access to your backup strategy and what unilateral access they have is important.

Using a 3rd party service with an enforced retention period can help, as can layers of backups administered by different individuals. Offline backups also make it harder for an individual to quickly destroy data.

Ensuring that the backup administrator is also not in a good position to destroy your live data can also help protect against their ability to have too much impact on your organization.

Confidentiality

It’s critical to protect your data. Since many backup approaches involve entrusting your data to a 3rd party (whether it’s a cloud provider, an archival storage company, or a colocated data center), encryption is commonly employed to ensure confidentiality of the data stored. (Naturally, the key should not be stored with the 3rd party.)

Fun anecdote: at a previous employer, we had our backup tapes stored offsite by a 3rd party backup provider. The tapes were picked up and delivered in a locked box, and we were told that only we possessed the key to the box. I became “suspicious” when we added a new person to our authorized list (those who are allowed to request backups back from the vendor) and the person’s ID card was delivered inside our locked box. (Needless to say, you can’t trust statements like that from a vendor – not to mention that a plastic box is not a security boundary.)

All the data you back up should be encrypted with a key your organization controls, and you should have access to that key even if your network is completely trashed. I recommend storing it in a safe, preferably on a smartcard or other secure element. (Ideally in a couple of locations to hedge your bets.)

A fun bit about encrypted backups: if you use proper encryption, destroying the key is equivalent to destroying all the backups encrypted with that key. Some organizations do this as a way of expiring old data. You can have the data spread across all kinds of tapes, but once the key is destroyed, you will never be recovering that data. (On the other hand, if a malicious actor destroys your key, you will also never be recovering that data.)

Integrity

Your backups need to be integrity protected – that is, protected against tampering or modification. This both protects against accidental modifications (i.e., corruption from bad media, physical damage, etc.) as well as tampering. While encryption makes it harder for an adversary to modify data in a controlled fashion, it is still possible. (This is a property of encryption known as Malleability.)

Ideally, backups should be cryptographically signed. This prevents both accidental and malicious modification to the underlying data. A common approach is to build a manifest of cryptographic hashes (e.g., SHA-256) of each file and then sign that. The individual hashes can be computed in parallel and even on multiple hosts, then the finished manifest can be signed. (Possibly on a different host.)

These hashes can also be used to verify the backups as written to ensure against damage during the writing of backups. Only the signing machines need access to the private key (which should ideally be stored in a hardware-backed key storage mechanism like a smart card or TPM).
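A minimal sketch of that manifest approach with standard tools (paths are placeholders; ideally the GnuPG signing key lives on a smart card as described above):

find /srv/backup/2020-07-26 -type f -print0 | xargs -0 sha256sum > manifest.sha256
gpg --detach-sign --armor manifest.sha256
sha256sum --check manifest.sha256
gpg --verify manifest.sha256.asc manifest.sha256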

Backup Strategy Testing

No matter what strategy you end up designing (which should be a joint function between the core IT group and the security team), the strategy needs to be evaluated and tested. Restoration needs to be tested, and threats need to be considered.

Practicing Restoration

This is likely to be far more a function of IT/production teams than of the security team, but you have to test restoration. I’ve seen too many backup plans without a tested restoration plan that wouldn’t work in practice.

Fails I’ve seen or heard of:

  • Relying on encryption to protect the backup, but then not having a copy of the encryption key at the time of restoration.
  • Using tapes for backups, but not having metadata of what was backed up on what tape. (Tapes are slow, imagine searching for the data you need.)

Tabletop Scenarios

When designing a backup strategy, I suggest doing a series of tabletop exercises to evaluate risks. Having a subset of the team play “red team” and attempt to destroy the data or access confidential data or apply ransomware to the network and the rest of the team evaluating controls to prevent this is a great way to discover gaps in your thought process.

Likewise, explicitly threat modeling ransomware into your backup strategy is critical, as we’ve seen increased use of this tactic by cybercriminals. Even though defenses to prevent ransomware getting on your network in the first place would be ideal, real security involves defense in depth, and having workable backups is a key mitigation for the risks posed by ransomware.

on July 26, 2020 07:00 AM

July 24, 2020

Firefox Beta via Flatpak

Bryan Quigley

What I've tried.

  1. Firefox beta as a snap. (Definitely easy to install, but not as quick, and harder to use for managing files - it makes its own Downloads directory, etc.)
  2. Firefox (stock) with custom AppArmor confinement. (Fun to do once, but the future is clearly using portals for file access, etc)
  3. Firefox beta as a Flatpak.

I've now been running Firefox as a Flatpak for over 4 months and have not had any blocking issues.

Getting it installed

Flatpak - already installed on Fedora Silverblue (which comes with Firefox with some Fedora-specific optimizations) and Endless OS, at least.

Follow Quick Setup. This walks you through installing the Flatpak package as well as the Flathub repo. Now you could easily install Firefox with just 'flatpak install firefox' if you want the Stable Firefox.

To get the beta you need to add the Flathub Beta repo. You can just run:

sudo flatpak remote-add flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo

Then to install Firefox from it do (you can also install it per-user, without sudo, by adding the --user flag):

sudo flatpak install flathub-beta firefox

Once you run the above command it will ask you which Firefox to install, install any dependencies, tell you the permissions it will use, and finally install.

Looking for matches…
Similar refs found for ‘firefox’ in remote ‘flathub-beta’ (system):

   …
   3) app/org.mozilla.firefox/x86_64/beta

Which do you want to use (0 to abort)? [0-3]: 3
Required runtime for org.mozilla.firefox/x86_64/beta (runtime/org.freedesktop.Platform/x86_64/19.08) found in remote flathub
Do you want to install it? [Y/n]: y

org.mozilla.firefox permissions:
    ipc                          network       pcsc       pulseaudio       x11       devices       file access [1]       dbus access [2]
    system dbus access [3]

    [1] xdg-download
    [2] org.a11y.Bus, org.freedesktop.FileManager1, org.freedesktop.Notifications, org.freedesktop.ScreenSaver, org.gnome.SessionManager, org.gtk.vfs.*, org.mpris.MediaPlayer2.org.mozilla.firefox
    [3] org.freedesktop.NetworkManager


        ID                                             Branch            Op            Remote                  Download
 1. [—] org.freedesktop.Platform.GL.default            19.08             i             flathub                    56.1 MB / 89.1 MB
 2. [ ] org.freedesktop.Platform.Locale                19.08             i             flathub                 < 318.3 MB (partial)
 3. [ ] org.freedesktop.Platform.openh264              2.0               i             flathub                   < 1.5 MB
 4. [ ] org.gtk.Gtk3theme.Arc-Darker                   3.22              i             flathub                 < 145.9 kB
 5. [ ] org.freedesktop.Platform                       19.08             i             flathub                 < 238.5 MB
 6. [ ] org.mozilla.firefox.Locale                     beta              i             flathub-beta             < 48.3 MB (partial)
 7. [ ] org.mozilla.firefox                            beta              i             flathub-beta             < 79.1 MB

The first 5 dependencies downloaded are required by most applications and are shared, so the actual size of Firefox is more like 130MB.

Confinement

  • You can't browse local files via file:/// URLs (except for ~/Downloads). All local files need to be opened via the Open File dialog, which automatically adds the needed permissions.
  • You can enable Wayland as well with 'sudo flatpak override --env=GDK_BACKEND=wayland org.mozilla.firefox' (Wayland doesn't work with the NVIDIA driver and GNOME Shell in my setup, though)

What Works?

Everything I want which includes in no particular order:

  • Netflix (some older versions had issues with DRM)
  • WebGL (with my Nvidia card and proprietary driver. Flatpak installs the necessary bits to get it working based on your video card)
  • It's speedy; it starts as quickly as I would normally expect
  • Using the file browser for ANY file on my system. You can upload your private SSH keys if you really need to, but you need to explicitly select the file (and I'm not sure how you unshare it).
  • Opening apps directly via Firefox (aka I download a PDF and I want it to open in Evince - this does use portals for confinement).
  • Offline mode

What could use work?

  • Some flatpak commands can figure out what just "Firefox" means, while others want the full org.mozilla.firefox
  • If you want to run Firefox from the command line, you need to run it as org.mozilla.firefox. This is the same for all Flatpaks, although you can make an alias.
  • It would be more convenient if Beta releases were part of the main Flathub (or advertised more)
  • If you change your Downloads directory in Firefox, you have to update the permissions in Flatpak or else it won't work; if you use Save As… it will work fine though (see the example after this list).
  • The flatpak permission-* commands let you see what permissions are defined, but resetting or removing doesn't seem to actually work.
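For example, assuming you moved your downloads to ~/dl, an override along these lines should grant the Flatpak access to it, and a shell alias saves typing the full application ID (the directory is just a placeholder; drop --user if you installed system-wide):

flatpak override --user --filesystem=~/dl org.mozilla.firefox
flatpak override --user --show org.mozilla.firefox
alias firefox='flatpak run org.mozilla.firefox'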

If you think you found a Flatpak specific Mozilla bug, the first place to look is Mozilla Bug #1278719 as many bugs are reported against this one bug for tracking purposes.

Comments

Add a comment by making a Pull Request to this post.

on July 24, 2020 09:20 PM

Better late than never as we say… thanks to the work of Daniel Leidert and Jorge Maldonado Ventura, we managed to complete the update of my book for Debian 10 Buster.

You can get the electronic version on debian-handbook.info or the paperback version on lulu.com. Or you can just read it online.

Translators are busy updating their translations, with German and Norwegian Bokmål leading the way…


on July 24, 2020 10:39 AM

July 23, 2020

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 202.00 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

June was the last month of Jessie LTS which ended on 2020-06-20. If you still need to run Jessie somewhere, please read the post about keeping Debian 8 Jessie alive for longer than 5 years.
So, as (Jessie) LTS is dead, long live the new LTS, Stretch LTS! Stretch has received its last point release, so regular LTS operations can now continue.
Accompanying this, for the first time, we have prepared a small survey about our users and contributors: who they are and why they are using LTS. Filling out the survey should take less than 10 minutes. We would really appreciate it if you could participate in the survey online! On July 27th 2020 we will close the survey, so please don’t hesitate and participate now! After that, there will be a follow-up with the results.

The security tracker for Stretch LTS currently lists 29 packages with a known CVE and the dla-needed.txt file has 44 packages needing an update in Stretch LTS.

Thanks to our sponsors

New sponsors are in bold.

We welcome CoreFiling this month!


on July 23, 2020 02:10 PM

The Ubuntu Podcast reviewed KDE’s Applications site in their latest edition. Listen from 14 minutes in. You can hear such quotes as

“It’s pretty neat, It shows the breadth of applications in the KDE universe, tonnes of stuff in here”
“A big green button to install the thing”
“KDE applications are broad and useful”
“They publish a tonne of applications in the Snap store and they are hugely popular”
“Valuable software that people want to install and use irrespective of the desktop they are on”
“They make high quality and useful applications”
“Well done KDE, always very mindful of user experience”

They did suggest adding a featured app, which is something we also want to do for Discover; it has featured apps, but they don’t currently change. That feels like an interesting wee task for anyone who wants to help out KDE.

An easier task would be going over all the apps and checking that the info on them is up to date, including going over the various app stores we publish on, like the Microsoft Store, and making sure those links are in the AppStream metadata files.

Finally, the main task of All About the Apps is getting the apps onto the stores, so we need people who can get the apps running on Windows etc. and put them on the relevant stores. I did an interview asking for this for Flathub in the most recent monthly apps update.

We’re here to help on our Matrix room and my contact is always open.

on July 23, 2020 10:09 AM

July 22, 2020

Wrong About Signal

Bryan Quigley

Update: Riot was renamed to Element. XMPP info added in a comment. And Signal still doesn't let you unregister.

A couple years ago I was a part of a discussion about encrypted messaging.

  • I was in the Signal camp - we needed it to be quick and easy for users to set up, and using existing phone numbers makes that easy.
  • Others were in the Matrix camp - we need to start from scratch and make it distributed so no one organization is in control. We should definitely not tie it to phone numbers.

I was wrong.

Signal has been moving in the direction of adding PINs for some time because they realize the danger of relying on the phone number system. Signal just mandated PINs for everyone as part of that switch. Good for security? I really don't think so. They did it so you could recover some bits of "profile, settings, and who you’ve blocked".

Before PIN

If you lose your phone your profile is lost and all message data is lost too. When you get a new phone and install Signal your contacts are alerted that your Safety Number has changed - and should be re-validated.

[Chart: "Where profile data lives" - Your Devices]

After PIN

If you lost your phone you can use your PIN to recover some parts of your profile and other information. I am unsure if Safety Number still needs to be re-validated or not.

Your profile (or its encryption key) is stored on at least 5 servers, but likely more. It's protected by secure value recovery.

There are many awesome components of this setup and it's clear that Signal wanted to make this as secure as possible. They wanted to make this a distributed setup so they don't even need to be the only ones hosting it. One of the key components is Intel's SGX, which has several known attacks. I simply don't see the value in this, and it means there is a new avenue of attack.

[Chart: "Where profile data lives" - Your Devices, Signal servers]

PIN Reuse

By mandating user chosen PINs, my guess is the great majority of users will reuse the PIN that encrypts their phone. Why? PINs are re-used a lot to start, but here is how the PIN deployment went for a lot of Signal users:

  1. Get notification of new message
  2. Click it to open Signal
  3. Get Mandate to set a PIN before you can read the message!

That's horrible. That means people are in a rush to set a PIN to continue communicating. And now that rushed or reused PIN is stored in the cloud.

Hard to leave

They make it easy to get connections upgraded to secure, but their system to unregister when you uninstall has been down since June 28th at least (last tried on July 22nd). Without that, when you uninstall Signal it means:

  • you might be texting someone and they respond back but you never receive the messages because they only go to Signal
  • if someone you know joins Signal their messages will be automatically upgraded to Signal messages which you will never receive

Conclusion

In summary, Signal got people to hastily create or reuse PINs for minimal disclosed security benefits. There is a possibility that the push for mandatory cloud-based PINs, despite all of the pushback, is that Signal knows of active attacks that these PINs would protect against. It likely would be related to using phone numbers.

I'm trying out Element, which uses the open Matrix network. I'm not actively encouraging others to join me, just exploring the communities that exist there. It's already more featureful and supports more platforms than Signal ever did.

Maybe I missed something? Feel free to make a PR to add comments

Comments

kousu posted

In the XMPP world, Conversations has been leading the charge to modernize XMPP, with an index of popular public groups (jabber.network) and a server validator. XMPP is mobile-battery friendly, and supports server-side logs wrapped in strong, multi-device encryption (in contrast to Signal, your keys never leave your devices!). Video calling even works now. It can interact with IRC and Riot (though the Riot bridge is less developed). There is a beautiful Windows client, a beautiful Linux client and a beautiful terminal client, two good Android clients, a beautiful web client which even supports video calling (and two others). It is easy to get an account from one of the many servers indexed here or here, or by looking through libreho.st. You can also set up your own with a little bit of reading. Snikket is building a one-click Slack-like personal-group server, with file-sharing, welcome channels and shared contacts, or you can integrate it with NextCloud. XMPP has solved a lot of problems over its long history, and might just outlast all the centralized services.

Bryan Reply

I totally forgot about XMPP, thanks for sharing!

on July 22, 2020 08:18 PM

Major Backports Update

Ubuntu Studio

For those of you using the Ubuntu Studio Backports Repository, we recently had a major update of some tools. If you’ve been using the Backports PPA, you may have noticed some breakage when updating via normal means. To update if you have the Backports PPA enabled, make sure to do... Continue reading
on July 22, 2020 07:38 PM

Up and down the hillside

Søren Bredlund Caspersen

We just got home from a week of holidays in Norway, with lots of spectacular scenery and fresh air.

Energy consumption up and down the mountain.

The cabin was located about 900 meters above sea level. The first 600 meters of climbing from Oslo, over the course of a few hours, went by almost unnoticed. The last ~300 meters were, for us from flat Denmark, a bit more unusual.

Notice how the energy consumption of our Tesla 3 rose significantly during the last approx 600 meters climb to the cabin, and how the trip downhill actually charged the battery instead of using energy (the green area on the graph).

on July 22, 2020 05:03 PM

July 21, 2020

A very common problem in GStreamer, especially when working with live network streams, is that the source might just fail at some point. Your own network might have problems, the source of the stream might have problems, …

Without any special handling of such situations, the default behaviour in GStreamer is to simply report an error and let the application worry about handling it. The application might for example want to restart the stream, or it might simply want to show an error to the user, or it might want to show a fallback stream instead, telling the user that the stream is currently not available and then seamlessly switch back to the stream once it comes back.

Implementing all of the aforementioned is quite some effort, especially to do it in a robust way. To make it easier for applications I implemented a new plugin called fallbackswitch that contains two elements to automate this.

It is part of the GStreamer Rust plugins and also included in the recent 0.6.0 release, which can also be found on the Rust package (“crate”) repository crates.io.

Installation

For using the plugin you most likely first need to compile it yourself, unless you’re lucky enough that e.g. your Linux distribution includes it already.

Compiling it requires a Rust toolchain and GStreamer 1.14 or newer. The former you can get via rustup, for example, if you don’t have it yet; the latter either from your Linux distribution or by using the macOS, Windows, etc. binaries that are provided by the GStreamer project. Once that is done, compiling is mostly a matter of running cargo build in the utils/fallbackswitch directory and copying the resulting libgstfallbackswitch.so (or .dll or .dylib) into one of the GStreamer plugin directories, for example ~/.local/share/gstreamer-1.0/plugins.
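On Linux that boils down to roughly the following (a sketch; the repository URL and plugin path are the usual defaults, and with a workspace build the resulting library lands in the top-level target directory):

git clone https://gitlab.freedesktop.org/gstreamer/gst-plugins-rs.git
cd gst-plugins-rs/utils/fallbackswitch
cargo build --release
mkdir -p ~/.local/share/gstreamer-1.0/plugins
cp ../../target/release/libgstfallbackswitch.so ~/.local/share/gstreamer-1.0/plugins/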

fallbackswitch

The first of the two elements is fallbackswitch. It acts as a filter that can be placed into any kind of live stream. It consumes one main stream (which must be live) and outputs this stream as-is if everything works well. Based on the timeout property it detects if this main stream didn’t have any activity for the configured amount of time, or everything arrived too late for that long, and then seamlessly switches to a fallback stream. The fallback stream is the second input of the element and does not have to be live (but it can be).

Switching between main stream and fallback stream doesn’t only work for raw audio and video streams but also works for compressed formats. The element will take constraints like keyframes into account when switching, and if necessary/possible also request new keyframes from the sources.

For example to play the Sintel trailer over the network and displaying a test pattern if it doesn’t produce any data, the following pipeline can be constructed:

gst-launch-1.0 souphttpsrc location=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm ! \
    decodebin ! identity sync=true ! fallbackswitch name=s ! videoconvert ! autovideosink \
    videotestsrc ! s.fallback_sink

Note the identity sync=true in the main stream here as we have to convert it to an actual live stream.

Now when running the above command and disconnecting from the network, the video should freeze at some point and after 5 seconds a test pattern should be displayed.

However, when using fallbackswitch the application will still have to take care of handling actual errors from the main source and possibly restarting it. Waiting a bit longer after disconnecting the network with the above command will report an error, which then stops the pipeline.

To make that part easier there is the second element.

fallbacksrc

The second element is fallbacksrc and as the name suggests it is an actual source element. When using it, the main source can be configured via an URI or by providing a custom source element. Internally it then takes care of buffering the source, converting non-live streams into live streams and restarting the source transparently on errors. The various timeouts for this can be configured via properties.

Different to fallbackswitch it also handles audio and video at the same time and demuxes/decodes the streams.

Currently the only fallback streams that can be configured are still images for video. For audio the element will always output silence for now, and if no fallback image is configured for video it outputs black instead. In the future I would like to add support for arbitrary fallback streams, which hopefully shouldn’t be too hard. The basic infrastructure for it is already there.

To use it again in our previous example and having a JPEG image displayed whenever the source does not produce any new data, the following can be done:

gst-launch-1.0 fallbacksrc uri=https://www.freedesktop.org/software/gstreamer-sdk/data/media/sintel_trailer-480p.webm \
    fallback-uri=file:///path/to/some/jpg ! videoconvert ! autovideosink

Now when disconnecting the network, after a while (longer than before because fallbacksrc does additional buffering for non-live network streams) the fallback image should be shown. Different to before, waiting longer will not lead to an error and reconnecting the network causes the video to reappear. However as this is not an actual live-stream, right now playback would again start from the beginning. Seeking back to the previous position would be another potential feature that could be added in the future.

Overall these two elements should make it easier for applications to handle errors in live network sources. While the two elements are still relatively minimal feature-wise, they should already be usable in various real scenarios and are already used in production.

As usual, if you run into any problems or are missing some features, please create an issue in the GStreamer bug tracker.

on July 21, 2020 01:12 PM

July 18, 2020

Full Circle Weekly News #178

Full Circle Magazine


Ubuntu Studio 20.10 To Ship With Plasma
https://ubuntustudio.org/2020/05/progress-on-plasma/
Kid3 Goes from Hosted to Official KDE Application
https://kde.org/announcements/releases/2020-05-apps-update/
Pop! OS 20.04 LTS Out
https://blog.system76.com/post/616861064165031936/whats-new-with-popos-2004-lts

Clonezilla Live 2.6.6 Out
https://sourceforge.net/p/clonezilla/news/2020/05/stable-clonezilla-live-266-15-released/

KaOS 2020.05 Out
https://kaosx.us/news/2020/kaos05/

Tails 4.6 Out
https://tails.boum.org/news/version_4.6/index.en.html

Kali Linux 2020.2 Out
https://www.kali.org/news/kali-linux-2020-2-release/

Endless OS 3.8.1 Out
https://community.endlessos.com/t/release-endless-os-3-8-1/13010

BlackArch 2020.06.01 Out
https://9to5linux.com/latest-blackarch-linux-iso-adds-more-than-150-new-hacking-tools-linux-5-6

KDE Plasma 5.18.5 LTS Out
https://kde.org/announcements/plasma-5.18.4-5.18.5-changelog

Gnome 3.36.2 Out
https://mail.gnome.org/archives/gnome-announce-list/2020-May/msg00002.html

LibreOffice 6.4.4 Out
https://blog.documentfoundation.org/blog/2020/05/21/libreoffice-644/

LibreOffice 7.0 Alpha 1 Out
https://qa.blog.documentfoundation.org/2020/05/12/libreoffice-7-0-alpha1-is-ready-for-testing/

Virtualbox 6.1.8 Out
https://www.virtualbox.org/wiki/Changelog-6.1#v8

Transmission 3.0 Out
https://github.com/transmission/transmission/releases/tag/3.00

Ardour 6.0 Released
https://ardour.org/news/6.0.html

Pixelorama 0.7 Out
https://www.orama-interactive.com/post/pixelorama-v0-7-is-out

Credits:
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on July 18, 2020 07:14 PM

July 17, 2020

Kubuntu 19.10 Eoan Ermine was released on October 17th, 2019 with 9 months of support.

As of July 17th, 2020, 19.10 reaches ‘end of life’.

No more package updates will be accepted to 19.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 20.04 Focal Fossa continues to be supported.

Users of 19.10 can follow the Kubuntu 19.10 to 20.04 Upgrade [2] instructions.

Should your upgrade be delayed for some reason, and you find that the 19.10 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 19.10 Eoan Ermine.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2020-July/000258.html
[2] – https://help.ubuntu.com/community/FocalUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

on July 17, 2020 05:19 PM

July 16, 2020

Ubuntu Studio 19.10 (Eoan Ermine) was released October 17, 2019 and will reach End of Life on Friday, July 17, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 20.04 LTS immediately if you are... Continue reading
on July 16, 2020 04:18 PM

July 15, 2020

It has been quite a while since the last status update for the GStreamer Rust bindings and the GStreamer Rust plugins, so the new releases last week make for a good opportunity to do so now.

Bindings

I won’t write too much about the bindings this time. The latest version as of now is 0.16.1, which means that since I started working on the bindings there were 8 major releases. In that same time there were 45 contributors working on the bindings, which seems quite a lot and really makes me happy.

Just as before, I don’t think any major APIs are missing from the bindings anymore, even for implementing subclasses of the various GStreamer types. The wide usage of the bindings in Free Software projects and commercial products also shows both the interest in writing GStreamer applications and plugins in Rust as well as that the bindings are complete enough and production-ready.

Most of the changes since the last status update involve API cleanups, usability improvements, various bugfixes and addition of minor API that was not included before. The details of all changes can be read in the changelog.

The bindings work with any GStreamer version since 1.8 (released more than 4 years ago), support APIs up to GStreamer 1.18 (to be released soon) and work with Rust 1.40 or newer.

Plugins

The biggest progress probably happened with the GStreamer Rust plugins.

There also was a new release last week, 0.6.0, which was the first release where selected plugins were also uploaded to the Rust package (“crate”) database crates.io. This makes it easy for Rust applications to embed any of these plugins statically instead of depending on them to be available on the system.

Overall there are now 40 GStreamer elements in 18 plugins by 28 contributors available as part of the gst-plugins-rs repository, one tutorial plugin with 4 elements and various plugins in external locations.

These 40 GStreamer elements are the following:

Audio
  • rsaudioecho: Port of the audioecho element from gst-plugins-good
  • rsaudioloudnorm: Live audio loudness normalization element based on the FFmpeg af_loudnorm filter
  • claxondec: FLAC lossless audio codec decoder element based on the pure-Rust claxon implementation
  • csoundfilter: Audio filter that can use any filter defined via the Csound audio programming language
  • lewtondec: Vorbis audio decoder element based on the pure-Rust lewton implementation
Video
  • cdgdec/cdgparse: Decoder and parser for the CD+G video codec based on a pure-Rust CD+G implementation, used for example by karaoke CDs
  • cea608overlay: CEA-608 Closed Captions overlay element
  • cea608tott: CEA-608 Closed Captions to timed-text (e.g. VTT or SRT subtitles) converter
  • tttocea608: CEA-608 Closed Captions from timed-text converter
  • mccenc/mccparse: MacCaption Closed Caption format encoder and parser
  • sccenc/sccparse: Scenarist Closed Caption format encoder and parser
  • dav1dec: AV1 video decoder based on the dav1d decoder implementation by the VLC project
  • rav1enc: AV1 video encoder based on the fast and pure-Rust rav1e encoder implementation
  • rsflvdemux: Alternative to the flvdemux FLV demuxer element from gst-plugins-good, not feature-equivalent yet
  • rsgifenc/rspngenc: GIF/PNG encoder elements based on the pure-Rust implementations by the image-rs project
Text
  • textwrap: Element for line-wrapping timed text (e.g. subtitles) for better screen-fitting, including hyphenation support for some languages
Network
  • reqwesthttpsrc: HTTP(S) source element based on the Rust reqwest/hyper HTTP implementations and almost feature-equivalent with the main GStreamer HTTP source souphttpsrc
  • s3src/s3sink: Source/sink element for the Amazon S3 cloud storage
  • awstranscriber: Live audio to timed text transcription element using the Amazon AWS Transcribe API
Generic
  • sodiumencrypter/sodiumdecrypter: Encryption/decryption element based on libsodium/NaCl
  • togglerecord: Recording element that allows pausing/resuming recordings easily and considers keyframe boundaries
  • fallbackswitch/fallbacksrc: Elements for handling potentially failing (network) sources, restarting them on errors/timeout and showing a fallback stream instead
  • threadshare: Set of elements that provide alternatives for various existing GStreamer elements but allow sharing the streaming threads between each other to reduce the number of threads
  • rsfilesrc/rsfilesink: File source/sink elements as replacements for the existing filesrc/filesink elements
on July 15, 2020 05:00 PM

July 14, 2020

Raspberry Pi 4

Sometimes, especially in the time of COVID-19, you can’t go onsite for a penetration test. Or maybe you can only get in briefly on a physical test, and want to leave behind a dropbox (literally, a box that can be “dropped” in place so the tester can leave; no relation to the file-sharing company of the same name) that you can remotely connect to. Of course, it could also be part of the desired test itself if incident response testing is in scope – can they find your malicious device?

In all of these cases, one great option is a small single-board computer, the best known of which is the Raspberry Pi. It’s inexpensive, compact, easy to come by, and very flexible. It may not be perfect in every case, but it gets the job done in a lot of cases.

I’ll use this opportunity to discuss the setups I’ve done in the past and the things I would change when doing it again or alternatives I considered. I hope some will find this useful. Some familiarity with the Linux command line is assumed.

General Principles of Dropboxes

As mentioned above, a dropbox is a device that you can connect to the target network and leave behind. (In an authorized test, you’d likely get your hardware back at the end, but it’s always possible that someone steals/destroys/etc. your device.) This serves as your foothold into the target network.

For some penetration tests, you’ll be able to provide your contact the dropbox and have them connect it to the network. This can allow you to have an internally scoped test but not require your physical presence at their site. This can be useful to avoid travel costs (or, currently, avoid COVID-19). In this case, you’ll have an agreed-upon network segment that it will be connected to. (Commonly, this will be a network segment with workstations as opposed to a privileged segment.)

If you’re going physical and want to leave a dropbox behind, you’ll have to be more opportunistic about things. You’ll get whatever network segment you get, so you might want to consider dropping a couple of devices if the opportunity presents itself.

In all these cases, you’ll need to remotely control the dropbox over some network connection, and then operate it to perform your attacks.

Connecting Back

You’ll almost always want your dropbox to initiate an outbound connection for remote control. You’ll almost always be behind NAT or a firewall on the target network, and dynamic IP addressing would likely make it hard to find your implant anyway. There are two major approaches: “in band”, where your command and control (C2) traffic goes out via the target network you’re connected to, and “out of band”, where your C2 goes via a separate dedicated connection.

In Band

The easiest approach is to go with an “in band” C2 connection. In this case, you tunnel your traffic out on the same physical connection as you’ve connected to your target network. This is very straightforward when the right conditions exist, since you just need power and the one network connection. Unfortunately, those right conditions are several:

  • No Network Access Control, or NAC that allows access to the internet
  • Some form of unfettered outbound connection
  • DHCP IP Assignment on the segment

If any of these fails, you just have a little box connected to a cable that doesn’t let you do anything. There are also some risks associated with in band connections:

  • Connection might be noticed/detected by Intrusion Detection Systems (IDS)
  • Any DNS, etc., traffic would be visible

Despite the risks and requirements, this will work in a large majority of use cases, and it’s both cheap and simple to set up, which makes it both desirable and the approach I’ve taken nearly every time I’ve set up a Pi-based dropbox.

Out of Band

For an out of band connection, you need to bring your own network connection to the dropbox. Generally, I’ve done that with a WiFi-based connection, but another popular option is to go with a cellular connection. WiFi is easier to set up, but obviously has range limitations, while a cellular connection will usually get you better range.

For WiFi, you can act as a client and connect to a guest or open network from your target, or perhaps a network of an adjacent office. Alternatively, you could run it in AP mode, but then you need to stay within relatively close range to be able to connect to your dropbox. (The AP mode really only works if you’re able to put it near a window or you can stay nearby, say in a shared office space.)

USB Cellular Dongle

For a cellular connection, the most popular option is a USB dongle that you insert a SIM card into. The nicer dongles will act like an ethernet network interface (typically via RNDIS), which keeps the required configuration to a minimum, and will generally work out of the box with Linux, whereas other dongles may be a little harder to get working. I’ve used the ZTE MF833V with success in the past.

Tunnel Software

Regardless of how you connect back, you probably need some kind of tunneling software for the connection. (The exception being the case where you setup an AP on the Pi and connect directly to it.)

Pretty much all of these require a server as some kind of “meeting point” where both the dropbox(es) and the operator(s) connect. This gives the dropboxes a static IP or hostname to connect to that’s online all the time, and allows operator(s) behind NAT/firewalls to connect in. I use DigitalOcean to host servers for projects (as well as this blog – if you use my referral link, new users get $100 in free credit, and it helps support articles like this), but you can of course use any server or VPS that you have.

My favorite approach is actually using ssh to establish an outbound connection that forwards a port back to the dropbox for its own SSH server. I realize this sounds confusing, so I hope this diagram helps:

SSH Tunnels

This is done with a command similar to the following; connecting to port 2222 on the server will then establish a connection to the dropbox’s port 22. If you have multiple dropboxes, assign each one a different port on the server, or use 0 for automatic port assignment. When using that, you’ll likely need to use netstat to find the associated ports.

ssh -R 2222:localhost:22 server.example.com
# or using automatic port assignment and going to the background
ssh -f -N -R 0:localhost:22 server.example.com
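
If you went with automatic port assignment, one quick way to find the port that was picked is to look at what sshd is listening on server-side (a rough sketch; the exact output format will vary):

ss -tlnp | grep sshd
# or with classic netstat
netstat -tlnp | grep sshd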

If you do this on a server and then are connecting from a remote client (e.g., your laptop), you can use the ssh ProxyJump feature to connect to the dropbox through the server. For example, using the example port 2222, the following will connect to the dropbox (note the use of localhost to connect to the forwarded port):

ssh -J server.example.com -p 2222 root@localhost

Unfortunately, this may result in multiple layers of TCP, so throughput will be sub-optimal, but it works well, and I can run an SSH server on Port 443, which is effective for any network that allows HTTPS out without SSL interception. (Almost nobody, it turns out, does any kind of inspection to confirm that 443/tcp traffic is actually TLS.)

I recommend using a tool like autossh to manage your SSH connection in case the connection drops. You’ll also want to enable keepalives to both ensure the connection stays up longer, and also to enable autossh to more easily detect a loss of connectivity.
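
A minimal sketch of that, reusing the port and server from the example above (the exact flags are a matter of taste; -M 0 disables autossh’s separate monitoring port and relies purely on the SSH keepalives):

autossh -M 0 -f -N \
    -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
    -o "ExitOnForwardFailure yes" \
    -R 2222:localhost:22 server.example.com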

The other alternative I can recommend is to use a VPN. Previously, I would have used OpenVPN, but I have become a WireGuard convert, especially for lower powered devices like the Raspberry Pi. (The crypto used by WireGuard can sustain higher throughput on low-end devices.) The biggest downside to this is you’ll need to run UDP out, so it can be a little harder if you’re going with the “in band” approach on a restrictive network.
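
As a rough sketch of what that looks like on the dropbox side (keys, addresses and the endpoint are all placeholders; the PersistentKeepalive is what keeps the tunnel alive through NAT):

cat > /etc/wireguard/wg0.conf <<'EOF'
[Interface]
PrivateKey = <dropbox-private-key>
Address = 10.99.0.2/24

[Peer]
PublicKey = <server-public-key>
Endpoint = server.example.com:51820
AllowedIPs = 10.99.0.1/32
PersistentKeepalive = 25
EOF
wg-quick up wg0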

Setup & Challenges

One of the keys is to have your dropbox ready to go when you deploy it. Depending on your approach, someone else might be deploying it (if a remote engagement, or if someone else is doing your physical testing), you might not have much time, or there might be other constraints. In any case, you want to make sure it’s ready to go when you are.

Among other things, this means having the tools you’ll want already set up (you don’t want to waste time and bandwidth installing new tools once the dropbox is in place), having your connection(s) set up, and having contingency plans in case something goes wrong.

One way to have a lot of tools ready to go is to use one of the ARM images for Kali Linux provided by Offensive Security. They’re not perfect, but they’re a great jumping off point for your own custom image. Alternatively, you can install the tools you want on a base Raspberry Pi OS (formerly Raspbian) image. It really depends on what you’re comfortable with.

My general approach is to write the base image to the SD card and load up a Raspberry Pi. I’ll configure authentication (passwords and keys), install the software I want, and setup the connection back to my remote server. Once I have things working properly, I’ll make a full image backup of the SD card. Usually I do something like the following to make the backup, assuming the SD card is /dev/mmcblk0:

dd if=/dev/mmcblk0 bs=4M | bzip2 -9 > sdcard.img.bz2

The compression helps because, at this point, the card will be mostly empty so there’s no point in taking up space with all the blank blocks.
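
Restoring that image to a card later (assuming it shows up at the same device node) is just the reverse:

bzcat sdcard.img.bz2 | dd of=/dev/mmcblk0 bs=4M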

Resiliency

If you’re going to rely on this dropbox as your primary way into a target network, you want to make sure it’s as reliable and resilient as possible. There’s some things you just won’t be able to control for, like hardware failure, someone finding your dropbox, or the network port you’re connected to going dark on you. (Placing more than one device, of course, can be insurance against all of those.)

One common complaint with Raspberry Pis in any situation has to do with filesystem corruption from unclean shutdown and incomplete writes to the SD cards. I’ve done some experimentation with this and found some ways to reduce the risk, but nothing I’ve tried or read will completely eliminate it.

MicroSD Card

One way to reduce the risk of corruption is to use a quality SD card. While incomplete writes can be a problem with any card, it seems that some cards are more prone to data loss than others. Maybe this has to do with erase block size, or with reporting writes as finished before they’re done, or with wear-leveling strategies. It’s not clear to me what the difference is, but my greatest level of success has been with Samsung Evo MicroSD cards. I’ve also had good luck with PNY SD cards, even though they’re a somewhat lesser known brand.

Another way to reduce corruption is to minimize writes to the filesystem when it’s running. This is another benefit of having the software pre-installed and configured – you know those writes won’t be what corrupts your filesystem. Mounting /var/log and /tmp as tmpfs helps a lot as well, but does limit what you can store there based on how much RAM the Pi you’re using has. (This was a significant limitation on versions before the Pi 4B.) Alternatively, you can give them a separate partition, so at least if the filesystem is corrupted, you don’t lose your root filesystem at the same time.

You can configure the tmpfs option by adding lines to /etc/fstab like this:

echo 'tmpfs /tmp tmpfs rw,nodev,noexec,nosuid,size=256M,mode=1700 0 0' >> /etc/fstab
echo 'tmpfs /var/log tmpfs rw,nodev,noexec,nosuid,size=256M,mode=750 0 0' >> /etc/fstab

Another area to consider for reliability is your command and control system. Regardless of your C2 strategy, you may want to consider implementing a backup communications system, such as a slow, infrequently-polling DNS based mechanism. This both provides an alternate in case your primary mechanism fails, and can use a different network interface as a backup.

Yet another concern is the temporary failure of your network link. Having some kind of watchdog to restart your network connection when it can’t reach your server may prove useful.
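
Something as simple as the following, run from cron every few minutes, can be enough. This is only a sketch: the server name, the wlan0 interface and the autossh-based tunnel are assumptions from the earlier examples, so adjust to your own setup.

#!/bin/bash
# If the meeting-point server is unreachable, bounce the WiFi link and
# restart the tunnel (here assumed to be the autossh reverse tunnel).
if ! ping -c 3 -W 5 server.example.com >/dev/null 2>&1; then
    ip link set wlan0 down
    sleep 5
    ip link set wlan0 up
    pkill autossh || true
    autossh -M 0 -f -N -R 2222:localhost:22 server.example.com
fi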

Software

The exact software you need will depend on the type of engagement you’re performing. At a minimum, you’ll always need your connection for control of the device, and you’ll want some tools for network enumeration and attacks. You probably also want a way to tunnel arbitrary traffic, either via SSH port forwards or SOCKS proxy emulation, or a full proxy on the device. I usually build mine with at least the following:

  • SSH
  • tmux/screen
  • Wireguard
  • nmap
  • tshark/tcpdump
  • Metasploit
  • bettercap
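
On a Kali or Raspberry Pi OS base image, most of that can be pre-installed in one go. The package names below are the usual Debian/Kali ones; adjust to whatever base you picked (for example, metasploit-framework comes from the Kali repositories):

apt update
apt install -y openssh-server autossh tmux nmap tshark tcpdump wireguard-tools bettercap metasploit-framework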

Depending on your engagement, you might want some things like pre-built payloads for various circumstances.

Confidentiality/Data Protection

Like everything else you do as a penetration tester or red teamer, it’s important to protect your client’s data. Whatever you choose to use for your control connection should be encrypted, but it’s also a good idea to encrypt any sensitive data that’s at rest on the dropbox in case someone locates it and takes it (and feels like examining the contents of the SD card).

One obvious option is to encrypt everything, such as with full-disk encryption, but since you won’t be able to unlock the device on boot, this isn’t as easy an option. There are ways to remotely unlock with an SSH server in an initramfs, but that adds complexity and risk of failures.

Instead, having a dedicated data partition that’s encrypted is a good tradeoff that will still offer protection for the data in the case the SD card is examined. I like to use LUKS for this – it’s easy enough to setup and well-studied for the use case. Unfortunately, Broadcom didn’t license the ARM crypto extensions, so performance is not great, but XTS mode is a couple of times faster than CBC mode, so make sure you use that.

% cryptsetup benchmark
#     Algorithm |       Key |      Encryption |      Decryption
        aes-cbc        128b        23.8 MiB/s        77.6 MiB/s
        aes-cbc        256b        17.2 MiB/s        58.9 MiB/s
        aes-xts        256b        85.0 MiB/s        75.1 MiB/s
        aes-xts        512b        65.4 MiB/s        57.4 MiB/s

(Benchmarks taken from a Raspberry Pi 4B with 4GB of RAM.)

Depending on your threat model, you can either mount with a random key on each boot (so if the device is rebooted, all data is lost, including for you), or mount the encrypted partition on the first connection after each boot using a key stored either on your server or your client machines.

Let’s say you want to encrypt all the data to be stored in /data with a random key on each boot, and you’re using /dev/mmcblk0p3 as the underlying partition to store the data. (This is after the /boot and / partitions on the SD card.) You’ll need to set up /etc/crypttab to enable the encryption and /etc/fstab for the filesystem mounting.

mkdir /data
echo 'datacrypt /dev/mmcblk0p3 /dev/urandom cipher=aes-xts-plain64:sha256,size=256,nofail,tmp' >> /etc/crypttab
echo '/dev/mapper/datacrypt /data ext4 defaults,noatime 0 0' >> /etc/fstab

To unlock /data using a remote key, we won’t use crypttab, but instead invoke cryptsetup directly and then mount the encrypted partition. First, there’s one-time setup:

mkdir /data
chmod 000 /data
echo '/dev/mapper/datacrypt /data ext4 defaults,noatime 0 0' >> /etc/fstab

Then copy the following script to /root/cryptsetup.sh:

#!/bin/bash

set -ue

DEVICE=/dev/mmcblk0p3
NAME="datacrypt"

case "${1:-unlock}" in
  create)
    FIFOD=$(mktemp -d)
    trap "rm -rf ${FIFOD}" SIGINT SIGTERM ERR EXIT
    mkfifo ${FIFOD}/f1
    mkfifo ${FIFOD}/f2
    (
        cryptsetup luksFormat -s 256 -c aes-xts-plain64 "${DEVICE}" ${FIFOD}/f1
        cryptsetup luksOpen --key-file ${FIFOD}/f2 "${DEVICE}" "${NAME}"
        mkfs.ext4 "/dev/mapper/${NAME}"
        mount "/dev/mapper/${NAME}"
    ) &
    tee ${FIFOD}/f1 | tee ${FIFOD}/f2 >/dev/null
    wait
    echo "Successfully created."
    ;;
  unlock)
    cryptsetup luksOpen --key-file - "${DEVICE}" "${NAME}"
    mount /dev/mapper/datacrypt
    echo "Successfully unlocked/mounted."
    ;;
  *)
    echo "Unknown operation!" >/dev/stderr
    exit 1
    ;;
esac

This script uses key data provided over standard input to either create or unlock the data partition. Then you can unlock the data partition remotely by running the script:

dd if=/dev/urandom of=keyfile bs=64 count=1
# Create the filesystem
ssh root@raspberrypi /root/cryptsetup.sh create < keyfile
# Mount the filesystem on subsequent boots
ssh root@raspberrypi /root/cryptsetup.sh < keyfile

Network Access Control

If you’re unlucky, you’ll wind up on a network port with Network Access Control. Obviously, if you’re doing a coordinated remote test where the dropbox is placed by the IT staff of the target organization, this can be dealt with administratively. However, if you’re on a penetration test where the dropbox is opportunistic/part of the physical engagement, then this may be something you need to overcome.

One way to address this is just to find ports that are not configured for any form of NAC. While that sounds like it might be a big ask, there’s usually plenty on the network that’s not dealing with the NAC implementation. Printers, cameras, “IoT” devices and more may all not be capable of interfacing with the NAC solution. You can find an unused port, replace a device (with the increased risk of detection that brings), add your own network switch (I like these little USB-powered switches), or use a 2nd wired network interface on the dropbox. (On the Pi, this will need to be connected via USB.)

For MAC- or 802.1x-based NAC, a good approach is the use of two network interfaces bridged together, as in the sketch below. In both cases, the goal is to make your implant indistinguishable from the legitimate host on the network. Sometimes it’s enough to have the port activated by the legitimate client, but other times you’ll need to clone the MAC and IP of the device, which requires some network tricks.
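
A rough sketch of the bridge itself, using iproute2 (the interface names are examples: eth0 towards the switch, eth1 towards the legitimate device):

ip link add name br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
ip link set eth0 up
ip link set eth1 up
ip link set br0 up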

For 802.1x, you’ll need to configure the bridge to pass the EAPOL frames as well. This can be done by setting an option in a sysfs file for the bridge:

echo 8 > /sys/class/net/br*/bridge/group_fwd_mask

You can configure a transparent firewall setup or use a tool like FENRIR to inject traffic using spoofed MAC/IP settings. The exact requirements depend on the 802.1x setup, but in the best case, once the port is enabled by the switch, all traffic on it will be allowed until the next link down.

Dealing with custom or more complex NAC requirements is left as an exercise for the reader.

Concealment

If this is an overt test, there’s no need to worry about concealment. On the other hand, for a covert test, there are two main classes of concealment to be concerned about: digital concealment (network detection) and physical concealment.

For network concealment, a lot of the steps in the Network Access Control section will help, including cloning expected IP and MAC addresses. Additionally, putting minimal amounts of unexpected network traffic on the target network will help maintain stealth.

From a physical point of view, you basically have a couple of options: hidden or inconspicuous. Hidden is simple: place your dropbox behind something (e.g., a user’s PC, a printer, etc.) or in some other concealed location. I’ve personally discovered a Raspberry Pi above the false ceiling in a men’s room, connected to the “Guest” wireless network. (It turned out not to be malicious, just part of a pilot program, but it was still a strange place to find a Raspberry Pi…)

Raspberry Pi Case

For something that blends in, you want something nobody will think twice about. This depends a lot on your environment, so as per usual with offensive security, recon is critical and the devil’s in the details, but a few thoughts:

  • Make it look like an appliance
  • Make it part of the environment (look like other things present)
  • Give it a plausible reason to exist

I’m a big fan of nondescript cases like this and this. I’m also a fan of misdirection in case the device is seen. For example, sticking a label on the device to identify it as an “Air Quality Monitor” or something equally benign. A bold choice is to include an email address like <healthandsafety@customer.com>. It lends an air of credibility, but should someone actually take it upon themselves to check with that email address, their suspicions may be aroused if the email bounces.

Other Options

Obviously, a Raspberry Pi is not the only type of device that you can use as an offensive security dropbox. There exist a handful of dedicated devices as well as a wide range of other single-board computers that can be used. The Raspberry Pi, however, is relatively cheap, easy to come by, well-documented, and with a broad software ecosystem.

Dedicated Devices

There are a few dedicated penetration testing devices out there. If you’re in this industry, you’re no doubt familiar with Hak5’s products. I’m a big fan of the Packet Squirrel as a network implant, particularly when you need to do an inline MITM, but it has nowhere near the ecosystem nor the processing power available in a Raspberry Pi. Additionally, I’ve never been able to get an out-of-band networking system working on it. (Maybe I should give it another try…)

The Ace Hackware Rootabaga is another dedicated hardware implant, based on the TP-LINK MR3040, which is a small WiFi router using OpenWRT as the base of its firmware. While it claims to be a competitor to the WiFi Pineapple, the firmware is not nearly as current. It definitely lacks the ecosystem available to the Raspberry Pi (either with Debian or Kali) and is significantly less powerful. The one big advantage of the Rootabaga is its built-in battery, though it only lasts hours, so you may need to plan around that.

If the battery of the Rootabaga sounds attractive, you could pair a Pi with a battery bank – the bigger, the better. A 26.8Ah battery offers nearly 100Wh of energy. Given a typical power consumption of a bit under 2.5W from a Pi, you can keep going for ~40 hours on the battery. Alternatively, if PoE is available, you can put the official Raspberry Pi PoE hat on the dropbox and run from that. Unfortunately, I’m not aware of any way to pass PoE through, so you won’t be able to use that if you need to go inline on a device that’s already PoE-powered.

Alternative Single-Board Computers

Since the inception of the Raspberry Pi, there has been a whole ecosystem of clones, with names like BananaPi and OrangePi, but these don’t offer you much. They have a smaller set of documentation, a smaller set of ready-made software and distributions, and the hardware capabilities are much the same as the Raspberry Pi. It’s actually rather amazing to me how many different variants have cropped up, so if one fits your needs, you may want to consider it.

EspressoBin

In terms of alternatives that offer benefits, there’s a couple of things I’ve looked for, mostly related to connectivity. Most prominent is multiple ethernet interfaces, like you can find on the EspressoBin or a handful of the Raspberry Pi clones like the NanoPi R2S. There are also a number of less powerful (in terms of CPU and RAM) options available in the portable router space. Obviously, running on the vendor firmware doesn’t give you a lot of options, so I look exclusively for devices that are well-supported by OpenWRT. GL.iNet specializes in this space, and I’ve used several of their AR-750S “Slate” portable routers for various projects, though it’s not the most inconspicuous device ever.

Small Form Factor PC Dropbox

If, on the other hand, you’re looking for as much local processing power and/or storage as you can get, you can get a very small form factor PC like the Intel NUC, or even small PCs designed for firewall usage, like devices from Protectli. These use x86 processors and can have features like AES-NI, more RAM, and can use mSATA or m.2 SSDs. They’re more conspicuous, more expensive (though given the typical cost of a penetration testing engagement, probably not relevant), and more power-hungry (almost all the other options can be powered off USB).

I’m certainly not the first to discuss using a Raspberry Pi as a penetration testing dropbox, and there have been some cool projects in this space before. There’s this great project where a Raspberry Pi is embedded in a power strip. Unfortunately, their original project page is no longer online.

Artifice Security has an interesting post that describes their approach to similar uses. They specifically discuss the use of the CrazyRadio Mousejack attacks, and other techniques not covered here.

on July 14, 2020 07:00 AM

July 12, 2020

One of my dedicated servers on OVH didn’t get back online after a reboot, so I checked via KVM and found that it was stuck at GRUB 2 prompt. To solve the problem, I changed netboot to rescue mode from OVH control panel, and with the rescue mode SSH credentials emailed to me, performed the […]

The post Bootloader Fix on NVMe Drive appeared first on Cyber Kingdom of Russell John.

on July 12, 2020 02:36 AM

July 11, 2020

Lubuntu 19.10 (Eoan Ermine) was released October 17, 2019 and will reach End of Life on Friday, July 17, 2020. This means that after that date there will be no further security updates or bugfixes released. We highly recommend that you update to 20.04 as soon as possible if you are still running 19.10. After […]
on July 11, 2020 04:01 PM

Adventures in Writing

Simon Quigley

The Linux community is a fascinating and powerful space.

When I joined the Ubuntu project approximately five years ago, I (vaguely at the time) understood that there was a profound sense of community and passion everywhere that is difficult to find in other spaces. My involvement has increased, and so has my understanding. I had thought of starting a blog as a means of conveying the information that I stumbled across, but my writing skills were very crude and regrettable, as I was in my early teenage years.

I have finally decided to take the leap. In this blog, I would like to occasionally provide updates on my work, either through focused deep dives on a particular topic, or broad updates on low hanging fruit that has been eliminated. While the articles may be somewhat spontaneous, I decided that an initial post was in order to explain my goals. Feel free to subscribe for more detailed posts in the future, as there are many more to come.

on July 11, 2020 10:59 AM

July 10, 2020

I actually wanted to move on with the node-red series of blog posts, but noticed that there is something more pressing to write down first …

People (on the snapcraft.io forum or IRC) often ask “how would I build a package for Ubuntu Core?” …

If your Ubuntu Core device is, for example, a Raspberry Pi, you won’t easily be able to build for its armhf or arm64 target architecture on your PC, which makes development harder.

You can use the snapcraft.io auto-build service, which builds for all supported architectures automatically, or use fabrica, but if you want to iterate fast over your code, waiting for the auto-builds is quite time consuming. Others I have heard of simply keep two SD cards in use, one running classic Ubuntu Server and the second one running Ubuntu Core, so they can switch them around to test their code on Core after building on Server … Not really ideal either, and if you do not have two Raspberry Pis this ends in a lot of reboots, eating your development time.

There is help!

There is an easy way to do your development on Ubuntu Core by simply using an LXD container directly on the device … you can make code changes and quickly build inside the container, pull the created snap package out of the build container and install it on the Ubuntu Core host without any reboots or waiting for remote build services. Just take a look at the following recipe of steps:

1) Grab an Ubuntu Core image from the stable channel, run through the setup wizard to set up user and network and ssh into the device:

$ grep Model /proc/cpuinfo 
Model       : Raspberry Pi 3 Model B Plus Rev 1.3
$ grep PRETTY /etc/os-release 
PRETTY_NAME="Ubuntu Core 18"
$

2) Install lxd on the device and set up a container targeting the release that your snapcraft.yaml defines in the base: entry (i.e. base: core -> 16.04, base: core18 -> 18.04, base: core20 -> 20.04):

$ snap install lxd
$ sudo lxd init --auto
$ sudo lxc launch ubuntu:18.04 bionic
Creating bionic
Starting bionic
$

3) Enter the container with the lxc shell command, install the snapcraft snap, clone your tree and edit/build your code:

$ sudo lxc shell bionic
root@bionic:~# snap install snapcraft --classic
...
root@bionic:~# git clone https://github.com/ogra1/htpdate-daemon-snap.git
root@bionic:~# cd htpdate-daemon-snap/
... make any edits you want here ...
root@bionic:~/htpdate-daemon-snap# snapcraft --destructive-mode
...
Snapped 'htpdate-daemon_1.2.2_armhf.snap'
root@bionic:~/htpdate-daemon-snap#

4) Exit the container, pull the snap file you built and install it with the --dangerous flag:

root@bionic:~/htpdate-daemon-snap# exit
logout
$ sudo lxc file pull bionic/root/htpdate-daemon-snap/htpdate-daemon_1.2.2_armhf.snap .
$ snap install --dangerous htpdate-daemon_1.2.2_armhf.snap
htpdate-daemon 1.2.2 installed
$

This is it … for each new iteration you can just enter the container again, make your edits, build, pull and install the snap.

(One additional note: if you want to avoid having to use sudo with all the lxc calls above, add your username to the end of the line reading lxd:x:999: in the /var/lib/extrausers/group file, e.g. as sketched below.)
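
For example (assuming the line currently has no members and your user is called myuser):

sudo sed -i 's/^lxd:x:999:$/lxd:x:999:myuser/' /var/lib/extrausers/group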

 

on July 10, 2020 11:36 AM

July 09, 2020

KDE is All About the Apps, as I hope everyone knows: we have top quality apps that we are pushing out to all channels to spread freedom and goodness.

As part of promoting our apps we updated the kde.org/applications pages so folks can find out what we make.  Today we’ve added some important new features:

Here on the KMyMoney page you can see the lovely new release that they made recently along with the source download link.

The “Install on Linux” link has been there for a while and uses the Appstream ID to open Discover which will offer you the install based on any installation source known to Discover: Packagekit, Snap or Flatpak.

Here in the Krita page you can see it now offers downloads from the Microsoft Store and from Google Play.

Or if you prefer a direct download it links to AppImages, macOS and Windows installs.

And here’s the KDE connect page where you can see they are true Freedom Lovers and have it on the F-Droid store.

All of this needs some attention from people who do the releases. The KDE Appstream Guidelines have the info on how to add this metadata. Remember it needs to be added to the master branch, as that is what the website scans.

There is some tooling to help: the appstream-metainfo-release-update script and recent versions of appstreamcli.

Help needed! If you spot out of date info on the site, do let me or another Web team spod know. Future work includes getting more apps on more stores and also making the release service scripts do more automated additions of this metadata. And some sort of system that scans the download site, or maybe uses Debian watch files, to check for the latest release and notify someone if it’s not in the Appstream file would be super.

Thanks to Carl for much of the work on the website, formidable!

 

on July 09, 2020 04:22 PM
Ubuntu is the industry-leading operating system for use in the cloud. Every day millions of Ubuntu instances are launched in private and public clouds around the world. Canonical takes pride in offering support for the latest cloud features and functionality. As of today, all Ubuntu Amazon Web Services (AWS) Marketplace listings are now updated to include support for the new Graviton2 instance types. Graviton2 is Amazon’s next-generation ARM processor delivering increased performance at a lower cost.
on July 09, 2020 12:00 AM

July 01, 2020

While there is a node-red snap in the snap store (to be found at https://snapcraft.io/node-red with the source at https://github.com/dceejay/nodered.snap), it does not really allow you to do a lot with it on, for example, a Raspberry Pi if you want to read sensor data that does not actually come in via the network …

The snap is missing all essential interfaces that could be used for any sensor access (gpio, i2c, Bluetooth, spi or serial-port) and it does not even come with basics like hardware-observe, system-observe or mount-observe to get any systemic info from the device it runs on.

While the missing interfaces are indeed a problem, there is also the fact that strict snap packages need to be self contained and hardly have any ability to dynamically compile any software … Now, if you know nodejs and npm (or yarn or gyp) you know that additional node modules often need to compile back-end code and libraries when you add them to your nodejs install. Technically it is actually possible to make “npm install” work, but it is indeed hard to predict what a user may want to install in her installation, so you would also have to ship all possible build systems (gcc, perl, python, you name it) plus all possible development libraries any of the added modules could ever require …

That way you might technically end up with a full OS inside the snap package. Not really a desirable thing to do (beyond the fact that, even with the high compression snap packages use, this would end up as a snap that is gigabytes big).

So let’s take a look at what’s there already. In the upstream snapcraft.yaml we can find a line like the following:

npm install --prefix $SNAPCRAFT_PART_INSTALL/lib node-red node-red-node-ping node-red-node-random node-red-node-rbe node-red-node-serialport

This is actually great, so we can just append any modules we need to that line …

Now, as noted above, while there are many node-red modules that will simply work this way, many of those that are interesting for accessing sensor data will need additional libs, which we will need to include in the snap as well …

In Snapcraft you can easily add a dependency by simply adding a new part to the snapcraft.yaml, so let’s do this with an example:

Let’s add the node-red-node-pi-gpio module, and let’s also break up the above long line into two and use a variable that we can append more modules to:

DEFAULT_MODULES="npm node-red node-red-node-ping node-red-node-random node-red-node-rbe \
                 node-red-node-serialport node-red-node-pi-gpio"
npm install --prefix $SNAPCRAFT_PART_INSTALL/lib $DEFAULT_MODULES

So this should get us the GPIO support for the Pi into node-red …

But! Reading the module documentation shows that this module is actually a front-end to the RPi.GPIO python module, so we need the snap to ship this too … luckily snapcraft has an easy-to-use python plugin that can pip install anything you need. We will add a new part above the node-red part:

parts:
...
  sensor-libs:
    plugin: python
    python-version: python2
    python-packages:
      - RPi.GPIO
  node-red:
    ...
    after: [ sensor-libs ]

Now Snapcraft will pull in the python RPi.GPIO module before it builds node-red (see the “after:” statement I added) and node-red will find the required RPi.GPIO lib when compiling the node-red-node-pi-gpio node module. This will get us all the bits and pieces to have GPIO support inside the node-red application …

Snap packages run confined, which means they cannot see anything of the system that we do not allow them to see via an interface connection. Remember that I said above that the upstream snap is lacking some such interfaces? So let’s add them to the “apps:” section of our snap (the pi-gpio node module wants to access /dev/gpiomem as well as the gpio device-node itself, so we make sure both these plugs are available to the app):

apps:
  node-red:
    command: bin/startNR
    daemon: simple
    restart-condition: on-failure
    plugs:
      ...
      - gpio
      - gpio-memory-control

And this is it: we have added GPIO support to the node-red snap source. If we re-build the snap, install it on an Ubuntu Core device and do a:

snap connect node-red:gpio-memory-control
snap connect node-red:gpio pi:bcm-gpio-4

We will be able to use node-red flows using this GPIO. (For other GPIOs you need to connect to the pi:bcm-gpio-* of your choice, for example as shown below; the mapping for Ubuntu Core follows https://pinout.xyz/.)
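
For instance, to use BCM GPIO 17 (physical pin 11 on the Pi header) instead:

snap connect node-red:gpio pi:bcm-gpio-17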

I have been collecting a good bunch of possible modules in a forked snap that can be found at https://github.com/ogra1/nodered-snap, a binary of this is at https://snapcraft.io/node-red-rpi, and I plan a series of more node-red centric posts over the next days telling you how to wire things up, with example flows and some deeper insight into how to make your node-red snap talk to all the Raspberry Pi interfaces, from i2c to Bluetooth.

Stay tuned!

on July 01, 2020 04:11 PM

June 26, 2020

Adapting To Circumstances

Stephen Michael Kellat

I have written prior that I wound up getting a new laptop. Due to the terms of getting the laptop I ended up paying not just for a license for Windows 10 Professional but also for Microsoft Office. As you might imagine I am not about to burn that much money at the moment. With the advent of the Windows Subsystem for Linux I am trying to work through using it to handle my Linux needs at the moment.

Besides, I did not realize OpenSSH was available as an optional feature for Windows 10 as well. That makes handling the herd of Raspberry Pi boards a bit easier. Having the WSL2 window open doing one thing and a PowerShell window open running OpenSSH makes life simple. PowerShell running OpenSSH is a bit easier to use compared to PuTTY so far.

The Ubuntu Wiki mentions that you can run graphical applications using Windows Subsystem for Linux. The directions appear to work for most people. On my laptop, though, they most certainly did not work.

After review, I found the directions were based on discussion in a bug on Github where somebody came up with a clever regex. The problem is that the kludge only works if your machine acts as its own nameserver. When I followed the instructions as written, my WSL2 installation of 20.04 dutifully tried to open an X11 session on the machine where I said the display was.

Unfortunately that regex took a look at what it found on my machine and said that the display happened to be on my ISP’s nameserver. X11 is a network protocol where you can run a program on one computer and have it paint the screen on another computer though that’s not really a contemporary usage. Thin clients like actual physical X Terminals from a company like Wyse would fit that paradigm, though.

After a wee bit of frustration, where I initially did not see the problem, I found it there. Considering how strangely my ISP has been acting lately, I most certainly do not want to try to run my own nameserver locally. Weirdness by my ISP is a matter for separate discussion, alas.

I inserted the following into my .bashrc to get the X server working:

export DISPLAY=$(landscape-sysinfo --sysinfo-plugins=Network | grep IPv4 | perl -pe 's/ IPv4 address for wifi0: //'):0

Considering that my laptop normally connects to the Internet via Wi-Fi I used the same landscape tool that the message of the day updater uses to grab what my IP happens to be. Getting my IPv4 address is sufficient for now. With usage of grep and a Perl one-liner I get my address in a usable form to point my X server the right way.

Elegant? Not really. Does it get the job done? Yes. I recognize that it will need adjusting but I will cross that bridge when I approach it.

Since the original bug thread on Github is a bit buried the best thing I can do is to share this and to mention the page being on the wiki at https://wiki.ubuntu.com/WSL. WSL2 will be growing and evolving. I suspect this minor matter of graphical applications will be part of that evolution.

on June 26, 2020 10:14 PM

June 20, 2020

New library: libsubid

Serge Hallyn

User namespaces were designed from the start to meet a requirement that unprivileged users be able to make use of them. Eric accomplished this by introducing subuid and subgid delegations through shadow. These are defined by the /etc/subuid and /etc/subgid files, which only root can write to. The setuid-root programs newuidmap and newgidmap, which ship with shadow, respect the subids delegated in those two files.
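
Each line in those files is simply user:start:count. A typical delegation (the user name and values here are just the common Debian/Ubuntu defaults) looks roughly like this:

$ grep '^ubuntu:' /etc/subuid /etc/subgid
/etc/subuid:ubuntu:100000:65536
/etc/subgid:ubuntu:100000:65536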

Until recently, programs which wanted to query available mappings, like lxc-usernsexec, have each parsed these two files. Now, shadow ships a new library, libsubid, to facilitate more programmatic querying of subids. The API looks like this:

struct subordinate_range **get_subuid_ranges(const char *owner);
struct subordinate_range **get_subgid_ranges(const char *owner);
void subid_free_ranges(struct subordinate_range **ranges);

int get_subuid_owners(uid_t uid, uid_t **owner);
int get_subgid_owners(gid_t gid, uid_t **owner);

/* range should be pre-allocated with owner and count filled in, start is
 * ignored, can be 0 */
bool grant_subuid_range(struct subordinate_range *range, bool reuse);
bool grant_subgid_range(struct subordinate_range *range, bool reuse);

bool free_subuid_range(struct subordinate_range *range);
bool free_subgid_range(struct subordinate_range *range);

The next step, which I’ve not yet begun, will be to hook these general queries into NSS. You can follow the work in this github issue.

on June 20, 2020 06:50 PM