June 24, 2019

Thanks to the huge amount of feedback this weekend from gamers, Ubuntu Studio, and the WINE community, we will change our plan and build selected 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS.

We will put in place a community process to determine which 32-bit packages are needed to support legacy software, and can add to that list post-release if we miss something that is needed.

Community discussions can sometimes take unexpected turns, and this is one of those. The question of support for 32-bit x86 has been raised and seriously discussed in Ubuntu developer and community forums since 2014. That’s how we make decisions.

After the Ubuntu 18.04 LTS release we had extensive threads on the ubuntu-devel list and also consulted Valve in detail on the topic. None of those discussions raised the passions we’ve seen here, so we felt we had sufficient consensus for the move in Ubuntu 20.04 LTS. We do think it’s reasonable to expect the community to participate and to find the right balance between enabling the next wave of capabilities and maintaining the long tail. Nevertheless, in this case it’s relatively easy for us to change plan and enable natively in Ubuntu 20.04 LTS the applications for which there is a specific need.

We will also work with the WINE, Ubuntu Studio and gaming communities to use container technology to address the ultimate end of life of 32-bit libraries; it should stay possible to run old applications on newer versions of Ubuntu. Snaps and LXD enable us to have both complete 32-bit environments and bundled libraries, solving these issues in the long term.

There is real risk to anybody who is running a body of software that gets little testing. The facts are that most 32-bit x86 packages are hardly used at all. That means fewer eyeballs, and more bugs. Software continues to grow in size at the high end, making it very difficult to even build new applications in 32-bit environments. You’ve heard about Spectre and Meltdown – many of the mitigations for those attacks are unavailable to 32-bit systems.

This led us to stop creating Ubuntu install media for i386 last year and to consider dropping the port altogether at a future date.  It has always been our intention to maintain users’ ability to run 32-bit applications on 64-bit Ubuntu – our kernels specifically support that.

The Ubuntu developers remain committed as always to the principle of making Ubuntu the best open source operating system across desktop, server, cloud, and IoT.  We look forward to the ongoing engagement of our users in continuing to make this principle a reality.

The post Statement on 32-bit i386 packages for Ubuntu 19.10 and 20.04 LTS appeared first on Ubuntu Blog.

on June 24, 2019 04:52 PM

Full Circle Weekly News #136

Full Circle Magazine


OpenMandriva Lx 4.0 is here
https://betanews.com/2019/06/16/openmandriva-lx4-linux-amd/

KDE Plasma 5.16 Gets First Point Release
https://news.softpedia.com/news/kde-plasma-5-16-desktop-environment-gets-first-point-release-update-now-526455.shtml

Canonical Outs Important Security Update for All Ubuntu Releases
https://news.softpedia.com/news/canonical-outs-important-linux-kernel-security-update-for-all-ubuntu-releases-526440.shtml

Canonical Will Drop Support for 32-bit Architectures in Future Ubuntu Releases
https://news.softpedia.com/news/canonical-will-drop-support-for-32-bit-architectures-in-future-ubuntu-releases-526439.shtml

Canonical’s Snap Store Adds 11 Distro Specific Installation Pages for Every Single App
https://www.forbes.com/sites/jasonevangelho/2019/06/14/canonicals-snap-store-adds-10-distro-specific-installation-pages-for-every-single-app/#2f9f4bf65448

Mozilla Patches Firefox Zero-Day Abused in the Wild
https://www.zdnet.com/article/mozilla-patches-firefox-zero-day-abused-in-the-wild/

Mozilla Patches Second Zero-Day Flaw This Week

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on June 24, 2019 04:21 PM

June 23, 2019

I like to keep my desktop Linux clean, and I do so by not installing 32-bit libraries. If I need any old 32-bit applications, I prefer to install them in a LXD container, because in a LXD container you can install anything; once you are done with it, you delete the container and poof, it is gone forever!

In the following I will show the actual commands to set up a LXD container on a system with an NVidia GPU so that we can run graphical programs. Someone could take these and build some sort of easy-to-use GUI utility. Note that you can write a GUI utility that uses the LXD API to interface with the system container.
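For example, here is a minimal sketch of what such a utility could do using the pylxd Python bindings (this is just an illustration; the pylxd package, the image alias and the container name are assumptions on my part, and you may need to point the client at the snap's socket path explicitly).

#!/usr/bin/env python3
# Minimal sketch: drive the local LXD daemon through its REST API with pylxd.
import pylxd

client = pylxd.Client()  # talks to the local LXD unix socket

# Roughly what `lxc launch ubuntu:18.04 steam` does.
config = {'name': 'steam',
          'source': {'type': 'image',
                     'protocol': 'simplestreams',
                     'server': 'https://cloud-images.ubuntu.com/releases',
                     'alias': '18.04'}}
container = client.containers.create(config, wait=True)
container.start(wait=True)

# Roughly what `lxc list` does.
for c in client.containers.all():
    print(c.name, c.status)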

Prerequisites

You are running Ubuntu 19.10.

You are using the snap package of LXD.

You have an NVidia GPU.

Setting up LXD (performed once)

Install LXD.

sudo snap install lxd

Set up LXD. Accept all defaults. Add your non-root account to the lxd group. Replace myusername with your own username.

sudo lxd init
sudo usermod -a -G lxd myusername
newgrp lxd

You have set up LXD. Now you can create containers.

Creating the system container

Launch a system container. You can create as many as you wish. This one we will call steam and will put Steam in it.

 lxc launch ubuntu:18.04 steam

Create a GPU passthrough device for your GPU.

lxc config device add steam gt2060 gpu

Create a proxy device for the X11 Unix socket of the host to this container. The proxy device is called X0. The abstract Unix socket @/tmp/.X11-unix/X0 of the host is proxied into the container. The 1000/1000 is the UID and GID of your desktop user on the host.

lxc config device add steam X0 proxy listen=unix:@/tmp/.X11-unix/X0 connect=unix:@/tmp/.X11-unix/X0 bind=container security.uid=1000 security.gid=1000 

Get a shell into the system container.

lxc exec steam -- sudo --user ubuntu --login

Add the NVidia 430 driver to this Ubuntu 18.04 LTS container, using the PPA. The driver in the container has to match the driver on the host. This is an NVidia requirement.

sudo add-apt-repository ppa:graphics-drivers/ppa  

Install the NVidia library, both 32-bit and 64-bit. Also install utilities to test X11, OpenGL and Vulkan.

sudo apt install -y libnvidia-gl-430 
sudo apt install -y libnvidia-gl-430:i386
sudo apt install -y x11-apps mesa-utils vulkan-utils  

Set the $DISPLAY. You can add this into ~/.profile as well.

export DISPLAY=:0
echo export DISPLAY=:0 >> ~/.profile

Enjoy by testing X11, OpenGL and Vulkan.

xclock 
glxinfo
vulkaninfo
(Screenshot: xclock, an X11 application, running in a LXD container.)
ubuntu@steam:~$ glxinfo
 name of display: :0
 display: :0  screen: 0
 direct rendering: Yes
 server glx vendor string: NVIDIA Corporation
 server glx version string: 1.4
 server glx extensions:
     GLX_ARB_context_flush_control, GLX_ARB_create_context, 
...
ubuntu@steam:~$ vulkaninfo 
===========
VULKANINFO
===========

Vulkan Instance Version: 1.1.101


Instance Extensions:
====================
Instance Extensions    count = 16
     VK_EXT_acquire_xlib_display         : extension revision  1
...

The system is now ready to install Steam, and also Wine!

Installing Steam

We grab the deb package of Steam and install it.

wget https://steamcdn-a.akamaihd.net/client/installer/steam.deb
sudo dpkg -i steam.deb
sudo apt install -f

Then, we run it.

steam

Here is some sample output.

ubuntu@steam:~$ steam
 Running Steam on ubuntu 18.04 64-bit
 STEAM_RUNTIME is enabled automatically
 Pins up-to-date!
 Installing breakpad exception handler for appid(steam)/version(0)
 Installing breakpad exception handler for appid(steam)/version(1.0)
 Installing breakpad exception handler for appid(steam)/version(1.0)
...

Installing Wine

Here is how you install Wine in the container. Note that besides the WineHQ signing key we also need to add the WineHQ repository itself; the line below is the one WineHQ documents for Ubuntu 18.04.

sudo dpkg --add-architecture i386 
wget -nc https://dl.winehq.org/wine-builds/winehq.key
sudo apt-key add winehq.key
sudo apt-add-repository 'deb https://dl.winehq.org/wine-builds/ubuntu/ bionic main'
sudo apt update
sudo apt install --install-recommends winehq-stable

Conclusion

There are options for running legacy 32-bit software, and here we showed how to do that using LXD containers. We picked NVidia (closed-source drivers), which entails a bit of extra difficulty. You can create many system containers and put all sorts of legacy software in them. Your desktop (host) remains clean, and when you are done with a legacy app, you can easily remove the container and it is gone!

on June 23, 2019 10:24 PM

June 22, 2019

Paco Molinero, Fernando Lanero and Marcos Costales will debate the controversy between Huawei and the United States Government. We will also talk about the privacy and security problems of devices connected to the Internet of Things.

Ubuntu y otras hierbas
Listen to us on:
on June 22, 2019 01:58 PM

June 21, 2019

ROS 2 Command Line Interface

Canonical Design Team

Disclosure: read the post until the end, a surprise awaits you!

Moving from ROS 1 to ROS 2 can be a little overwhelming.
There are a lot of (new) concepts and tools, and a large codebase to get familiar with. And just like many of you, I am getting started with ROS 2.

One of the central pieces of the ROS ecosystem is its Command Line Interface (CLI). It allows for performing all kinds of actions, from retrieving information about the codebase and/or the runtime system, to executing code and of course helping with debugging in general. It’s a very valuable set of tools that ROS developers use on a daily basis. Fortunately, pretty much all of those tools were ported from ROS 1 to ROS 2.

To those already familiar with ROS, the ROS 2 CLI wording will sound very familiar. Commands such as roslaunch are ported to ros2 launch, rostopic becomes ros2 topic, while rosparam is now ros2 param.
Noticed the pattern already? Yes, that’s right! The keyword ‘ros2‘ has become the unique entry point for the CLI.

So what? ROS CLI keywords were broken in two and that’s it?


Well, yes pretty much.

Every command starts with the ros2 keyword, followed by a verb, a sub-verb and possibly positional/optional arguments. The pattern is then,

$ ros2 verb sub-verb <positional-argument> <optional-arguments>

Notice that throughout the CLI, the auto-completion (the infamous [tab][tab]) is readily available for verbs, sub-verbs and most positional arguments. Similarly, helpers are available at each stage,

$ ros2 verb --help
$ ros2 verb sub-verb -h

Let us see a few examples,

$ ros2 run demo_nodes_cpp talker
starts the talker cpp node from the demo_nodes_cpp package.

$ ros2 run demo_nodes_py listener
starts the listener python node from the demo_nodes_py package.

$ ros2 topic echo /chatter
outputs the messages sent from the talker node.

$ ros2 node info /listener
outputs information about the listener node.

$ ros2 param list
lists all parameters of every node.

Fairly similar to ROS 1, right?

Missing CLI tools

We mentioned earlier that most of the CLI tools were ported to ROS 2, but not all. We believe these missing tools are one of the barriers to greater adoption of ROS 2, so we’ve started adding some that we noticed were missing. Over the past week we contributed 5 sub-verbs, including one that is exclusive to ROS 2. Let us briefly review them,

$ ros2 topic find <message-type>
outputs a list of all topics publishing messages of a given type (#271).

$ ros2 topic type <topic-name>
outputs the message type of a given topic (#272).

$ ros2 service find <service-type>
outputs a list of all services of a given type (#273).

$ ros2 service type <service-name>
outputs the service type of a given service (#274).

These tools are pretty handy by themselves, especially for debugging and getting an overview of a running system. And they become even more interesting when combined, say, in handy little scripts,

$ ros2 topic pub /chatter $(ros2 topic type /chatter) "data: Hello ROS 2 Developers"

Advertisement:
Have you ever looked for the version of a package you are using?
Ever wondered who the package author is?
Or which other packages it depends upon?
All of this information, locked in the package’s XML manifest, is now easily available at your fingertips!

The new sub-verb we introduced allows one to retrieve any information contained in a package’s XML manifest (#280). The command,

$ ros2 pkg xml <package-name>
outputs the entirety of the xml manifest of a given package.
To retrieve only a piece of it (a tag, in XML wording), use the --tag option,

$ ros2 pkg xml <package-name> --tag <tag-name>

A few examples are (at the time of writing),

$ ros2 pkg xml demo_nodes_cpp --tag version
0.7.6

$ ros2 pkg xml demo_nodes_py -t author
Mikael Arguedas
Esteve Fernandez

$ ros2 pkg xml intra_process_demo -t build_depend
libopencv-dev
rclcpp
sensor_msgs
std_msgs

This concludes our brief review of the changes that ROS 2 introduced to the CLI tools.

Before leaving, let me offer you a treat.

— A ROS 2 CLI Cheat Sheet that we put together —

Feel free to share it, print it and pin it above your screen, but also to contribute to it, as it is hosted on GitHub!

Cheers.

The post ROS 2 Command Line Interface appeared first on Ubuntu Blog.

on June 21, 2019 05:23 PM

Plasma Vision

Jonathan Riddell

The Plasma Vision was written a couple of years ago: a short text saying what Plasma is and hopes to create, and defining our approach to making a useful and productive work environment for your computer.  Because of creative differences it was never promoted or used properly, but in my quest to make KDE look as up to date in its presence on the web as it does on the desktop, I’ve got the Plasma sprinters who are meeting in Valencia this week to agree to adding it to the KDE Plasma webpage.

 

on June 21, 2019 02:19 PM

June 20, 2019

S12E11 – 1942

Ubuntu Podcast from the UK LoCo

This week we’ve been to FOSS Talk Live and created games in Bash. We have a little LXD love in and discuss 32-bit Intel being dropped from Ubuntu 19.10. OggCamp tickets are on sale and we round up some tech news.

It’s Season 12 Episode 11 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on June 20, 2019 03:00 PM

June 18, 2019

Building a PPA for s390x

Elizabeth K. Joseph

About 20 years ago a few clever, nerdy folks got together and ported Linux to the mainframe (s390x architecture). Reasons included “because it’s there” and others you’d expect from technology enthusiasts, but if you read far enough you’ll learn that they also saw a business case, one that has been realized today. You can read more about that history over on Linas Vepstas’ Linux on the IBM ESA/390 Mainframe Architecture.

Today the s390x architecture is not only officially supported by Ubuntu, Red Hat Enterprise Linux (RHEL), and SUSE Linux Enterprise Server (SLES), but there’s an entire series of IBM Z mainframes available that are devoted to running only Linux: that’s LinuxONE. At the end of April I joined IBM to lend my Linux expertise to working on these machines and spreading the word about them to my fellow infrastructure architects and developers.

Since s390x is its own architecture (not the x86 that we’re accustomed to), compiled code needs to be recompiled in order to run on the s390x platform. In the case of Ubuntu, the work has already been done to get a large chunk of the Ubuntu repository ported, so you can now run thousands of Linux applications on a LinuxONE machine. In order to effectively do this, there’s a team at Canonical responsible for this port and they have access to an IBM Z server to do the compiling.

But the most interesting thing to you and me? They also lend the power of this machine to support community members, by allowing them to build PPAs as well!

By default, Launchpad builds PPAs for i386 and amd64, but if you select “Change details” of your PPA, you’re presented with a list of other architectures you can target.

Last week I decided to give this a spin with a super simple package: A “Hello World” program written in Go. To be honest, the hardest part of this whole process is creating the Debian package, but you have to do that regardless of what kind of PPA you’re creating and there’s copious amounts of documentation on how to do that. Thankfully there’s dh-make-golang to help the process along for Go packages, and within no time I had a source package to upload to Launchpad.

From there it was as easy as clicking the “IBM System z (s390x)” box under “Change details” and the builds were underway, along with build logs. Within a few minutes all three packages were built for my PPA!

Now, mine was the most simple Go application possible, so when coupled with the build success, I was pretty confident that it would work. Still, I hopped on my s390x Ubuntu VM and tested it.

It worked! But aren’t I lucky, as an IBM employee I have access to s390x Linux VMs.

I’ll let you in on a little secret: IBM has a series of mainframe-driven security products in the cloud: IBM Cloud Hyper Protect Services. One of these services is Hyper Protect Virtual Servers, which are currently Experimental and you can apply for access. Once granted access, you can launch an Ubuntu 18.04 VM for free to test your application, or do whatever other development or isolation testing you’d like on a VM for a limited time.

If this isn’t available to you, there’s also the LinuxONE Community Cloud. It’s also a free VM that can be used for development, but as of today the only distributions you can automatically provision are RHEL or SLES. You won’t be able to test your deb package on these, but you can test your application directly on one of these platforms to be sure the code itself works on Linux on s390x before creating the PPA.

And if you’re involved with an open source project that’s more serious about a long-term, Ubuntu-based development platform on s390x, drop me an email at lyz@ibm.com so we can have a chat!

on June 18, 2019 02:59 PM

So, say you’re running qemu and decided to use hugepages. Nice, isn’t it? It helps with performance and stuff. However, a wild wall appears!

 QEMU: qemu-system-aarch64: can't open backing store /dev/hugepages/ for guest RAM: Permission denied

This basically means that you’re using the amazing -mem-path /dev/hugepages, and that QEMU running as an unprivileged user can’t write there… This is how it looked for me:

sudo -u _openqa-worker qemu-system-aarch64 -device virtio-gpu-pci -m 4094 -machine virt,gic-version=host -cpu host \ 
  -mem-prealloc -mem-path /dev/hugepages -serial mon:stdio  -enable-kvm -no-shutdown -vnc :102,share=force-shared \ 
  -cdrom openSUSE-Tumbleweed-DVD-aarch64-Snapshot20190607-Media.iso \ 
  -pflash flash0.img -pflash flash1.img -drive if=none,file=opensuse-Tumbleweed-aarch64-20190607-gnome-x11@aarch64.qcow2,id=hd0 \ 
  -device virtio-blk-device,drive=hd0

The machine tries to start, but ultimately I get that dreadful message. You can simply chmod the directory, or use a udev rule, and get away with it; it’s quick and does the job. There are also a few options to solve this using libvirt. However, if you’re not using hugeadm to manage those pools and just let the operating system take care of it, have a look at /usr/lib/systemd/system/dev-hugepages.mount. Since trying to add a udev rule failed for a colleague of mine, I decided to use the systemd approach, ending up with the following:


[Unit]
Description=Systemd service to fix hugepages + qemu ram problems.
After=dev-hugepages.mount

[Service]
Type=simple
ExecStart=/usr/bin/chmod o+w /dev/hugepages/

[Install]
WantedBy=multi-user.target
on June 18, 2019 12:00 AM

June 17, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 583 for the week of June 9 – 15, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 17, 2019 10:21 PM

Full Circle Weekly News #135

Full Circle Magazine


Linux Command Line Editors Vulnerable to High Severity Bug
https://threatpost.com/linux-command-line-editors-high-severity-bug/145569/

KDE 5.16 Is Now Available for Kubuntu
https://news.softpedia.com/news/kde-plasma-5-16-desktop-is-now-available-for-kubuntu-and-ubuntu-19-04-users-526369.shtml

Debian 10 Buster-based Endless OS 3.6.0 Linux Distribution Now Available
https://betanews.com/2019/06/12/debian-10-buster-endless-os-linux/

Introducing Matrix 1.0 and the Matrix.org Foundation
https://www.pro-linux.de/news/1/27145/matrix-10-und-die-matrixorg-foundation-vorgestellt.html

System 76’s Supercharged Gazelle Laptop is Finally Available
https://betanews.com/2019/06/13/system76-linux-gazelle-laptop/

Lenovo Thinkpad P Laptops Are Available with Ubuntu
https://www.omgubuntu.co.uk/2019/06/lenovo-thinkpad-p-series-ubuntu-preinstalled

Atari VCS Linux-powered Gaming Console Is Now Available for Pre-order
https://news.softpedia.com/news/atari-vcs-linux-powered-gaming-console-is-now-available-for-pre-order-for-249-526387.shtml

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on June 17, 2019 03:46 PM

Microsoft announced in May that the new version of the Windows Subsystem for Linux (WSL 2) will be running on the Linux kernel, itself running alongside the Windows kernel in Windows.

In June, the first version of WSL2 was made available, as long as you enroll your Windows 10 installation in the Windows Insider program and select to receive the bleeding-edge updates (fast ring).

In this post we are going to see how to get LXD running in WSL2. In a nutshell, LXD does not work out of the box yet, but LXD is versatile enough to actually make it work even when the default Linux kernel in Windows is not fully suitable yet.

Prerequisites

You need to have Windows 10, then join the Windows Insider program (Fast ring).

Then, follow the instructions on installing the components for WSL2 and switching your containers to WSL2 (if you have been using WSL1 already).

Install the Ubuntu container image from the Windows Store.

At the end, when you run wsl in CMD.exe or in Powershell, you should get a Bash prompt.

The problems

We are listing here the issues that do not let LXD run out of the box. Skip to the next section to get LXD going.

In WSL2, there is a modified Linux 4.19 kernel running in Windows, inside Hyper-V. It looks like this is a cut-down/optimized version of Hyper-V that is good enough for the needs of Linux.

The Linux kernel in WSL2 has a specific configuration, and some of the things that LXD needs are missing. Specifically, here is the output of lxc-checkconfig.

ubuntu@DESKTOP-WSL2:~$ lxc-checkconfig
 --- Namespaces ---
 Namespaces: enabled
 Utsname namespace: enabled
 Ipc namespace: enabled
 Pid namespace: enabled
 User namespace: enabled
 Network namespace: enabled

--- Control groups ---
 Cgroups: enabled

Cgroup v1 mount points:
 /sys/fs/cgroup/cpuset
 /sys/fs/cgroup/cpu
 /sys/fs/cgroup/cpuacct
 /sys/fs/cgroup/blkio
 /sys/fs/cgroup/memory
 /sys/fs/cgroup/devices
 /sys/fs/cgroup/freezer
 /sys/fs/cgroup/net_cls
 /sys/fs/cgroup/perf_event
 /sys/fs/cgroup/hugetlb
 /sys/fs/cgroup/pids
 /sys/fs/cgroup/rdma

Cgroup v2 mount points:

 Cgroup v1 systemd controller: missing
 Cgroup v1 clone_children flag: enabled
 Cgroup device: enabled
 Cgroup sched: enabled
 Cgroup cpu account: enabled
 Cgroup memory controller: enabled
 Cgroup cpuset: enabled

--- Misc ---
 Veth pair device: enabled, not loaded
 Macvlan: enabled, not loaded
 Vlan: missing
 Bridges: enabled, not loaded
 Advanced netfilter: enabled, not loaded
 CONFIG_NF_NAT_IPV4: enabled, not loaded
 CONFIG_NF_NAT_IPV6: enabled, not loaded
 CONFIG_IP_NF_TARGET_MASQUERADE: enabled, not loaded
 CONFIG_IP6_NF_TARGET_MASQUERADE: missing
 CONFIG_NETFILTER_XT_TARGET_CHECKSUM: missing
 CONFIG_NETFILTER_XT_MATCH_COMMENT: missing
 FUSE (for use with lxcfs): enabled, not loaded

--- Checkpoint/Restore ---
 checkpoint restore: enabled
 CONFIG_FHANDLE: enabled
 CONFIG_EVENTFD: enabled
 CONFIG_EPOLL: enabled
 CONFIG_UNIX_DIAG: enabled
 CONFIG_INET_DIAG: enabled
 CONFIG_PACKET_DIAG: enabled
 CONFIG_NETLINK_DIAG: enabled
 File capabilities:

Note : Before booting a new kernel, you can check its configuration
 usage : CONFIG=/path/to/config /usr/bin/lxc-checkconfig

ubuntu@DESKTOP-WSL2:~$           

The systemd-related mount point is OK in the sense that currently systemd does not work anyway in WSL (either WSL1 or WSL2). At some point it will get fixed in WSL2, and there are pending issues on this at Github. Talking about systemd, we cannot yet use the snap package of LXD because snapd depends on systemd. And no snapd means no snap package of LXD.

The missing netfilter kernel modules mean that we cannot use the managed LXD network interfaces (the one with the default name lxdbr0). If you try to create a managed network interface, you will get the following error.

Error: Failed to create network 'lxdbr0': Failed to run: iptables -w -t filter -I INPUT -i lxdbr0 -p udp --dport 67 -j ACCEPT -m comment --comment generated for LXD network lxdbr0: iptables: No chain/target/match by that name.

For completeness, here is the LXD log. Notably, AppArmor is missing from the Linux kernel and there was no CGroup network class controller.

ubuntu@DESKTOP-WSL2:~$ cat /var/log/lxd/lxd.log
 t=2019-06-17T10:17:10+0100 lvl=info msg="LXD 3.0.3 is starting in normal mode" path=/var/lib/lxd
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 0 4294967295"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Configured LXD uid/gid map:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - u 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - g 0 100000 65536"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="AppArmor support has been disabled because of lack of kernel support"
 t=2019-06-17T10:17:10+0100 lvl=warn msg="Couldn't find the CGroup network class controller, network limits will be ignored."
 t=2019-06-17T10:17:10+0100 lvl=info msg="Kernel features:"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - netnsid-based network retrieval: no"
 t=2019-06-17T10:17:10+0100 lvl=info msg=" - unprivileged file capabilities: yes"
 t=2019-06-17T10:17:10+0100 lvl=info msg="Initializing local database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Starting /dev/lxd handler:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding devlxd socket" socket=/var/lib/lxd/devlxd/sock
 t=2019-06-17T10:17:14+0100 lvl=info msg="REST API daemon:"
 t=2019-06-17T10:17:14+0100 lvl=info msg=" - binding Unix socket" socket=/var/lib/lxd/unix.socket
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing global database"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing storage pools"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Initializing networks"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning leftover image files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Loading daemon configuration"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done pruning expired images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done expiring log files"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating images"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Updating instance types"
 t=2019-06-17T10:17:14+0100 lvl=info msg="Done updating instance types"
 ubuntu@DESKTOP-WSL2:~$                     

Having said all that, let’s get LXD working.

Configuring LXD on WSL2

Let’s get a shell into WSL2.

C:\> wsl
ubuntu@DESKTOP-WSL2:~$

The apt package of LXD is already available in the Ubuntu 18.04.2 image found in the Windows Store. However, the LXD service is not running by default and we will need to start it.

ubuntu@DESKTOP-WSL2:~$ sudo service lxd start

Now we can run sudo lxd init to configure LXD. We accept the defaults (btrfs storage driver, 50GB default storage). But for networking, we avoid creating the local network bridge, and instead we configure LXD to use an existing bridge or host interface. This sets up macvlan, which avoids the error above, but macvlan does not work yet anyway in WSL2.

ubuntu@DESKTOP-WSL2:~$ sudo lxd init
Would you like to use LXD clustering? (yes/no) [default=no]:
Do you want to configure a new storage pool? (yes/no) [default=yes]:
Name of the new storage pool [default=default]:
Name of the storage backend to use (btrfs, dir, lvm) [default=btrfs]:
Create a new BTRFS pool? (yes/no) [default=yes]:
Would you like to use an existing block device? (yes/no) [default=no]:
Size in GB of the new loop device (1GB minimum) [default=50GB]:
Would you like to connect to a MAAS server? (yes/no) [default=no]:
Would you like to create a new local network bridge? (yes/no) [default=yes]: no
Would you like to configure LXD to use an existing bridge or host interface? (yes/no) [default=no]: yes
Name of the existing bridge or host interface: eth0
Would you like LXD to be available over the network? (yes/no) [default=no]:
Would you like stale cached images to be updated automatically? (yes/no) [default=yes]
Would you like a YAML "lxd init" preseed to be printed? (yes/no) [default=no]: yes
 config: {}
 networks: []
 storage_pools:
 - config:
     size: 50GB
   description: ""
   name: default
   driver: btrfs
 profiles:
 - config: {}
   description: ""
   devices:
   eth0:
     name: eth0
     nictype: macvlan
     parent: eth0
     type: nic
   root:
     path: /
     pool: default
     type: disk
   name: default
 cluster: null 

ubuntu@DESKTOP-WSL2:~$

For some reason, LXD does not manage to mount sys for the containers, therefore we need to perform this ourselves.

ubuntu@DESKTOP-WSL2:~$ sudo mkdir /usr/lib/x86_64-linux-gnu/lxc/sys
ubuntu@DESKTOP-WSL2:~$ sudo mount sysfs -t sysfs /usr/lib/x86_64-linux-gnu/lxc/sys

The containers will not have direct Internet connectivity, therefore we need to use a Web proxy. In our case, it suffices to use privoxy. Let’s install it. privoxy listens by default on port 8118, which means that if the containers can somehow get access to port 8118 on the host, they get access to the Internet!

ubuntu@DESKTOP-WSL2:~$ sudo apt update
...
ubuntu@DESKTOP-WSL2:~$ sudo apt install -y privoxy

Now, we are good to go! In the following we create a container with a Web server, and view it using Internet Explorer. Yes, IE has two uses, 1. to download Firefox, and 2. to view the Web server in the LXD container as evidence that all these are real.

Setting up a Web server in a LXD container in WSL2

Let’s create our first container, running Ubuntu 18.04.2. It does not get an IP address from the network because macvlan is not working. The container has no Internet connectivity!

ubuntu@DESKTOP-WSL2:~$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

ubuntu@DESKTOP-WSL2:~$ lxc list
+-------------+---------+------+------+------------+-----------+
|    NAME     |  STATE  | IPV4 | IPV6 |    TYPE    | SNAPSHOTS |
+-------------+---------+------+------+------------+-----------+
| mycontainer | RUNNING |      |      | PERSISTENT | 0         |
+-------------+---------+------+------+------------+-----------+

ubuntu@DESKTOP-WSL2:~$

The container has no Internet connectivity, so we need to give it access to port 8118 on the host. But how can we do that, if the container does not even have network connectivity with the host? We can do this using a LXD proxy device. Run the following on the host. The command creates a proxy device called myproxy8118 that proxies the TCP port 8118 between the host and the container (the binding happens in the container because the port already exists on the host).

ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy8118 proxy listen=tcp:127.0.0.1:8118 connect=tcp:127.0.0.1:8118 bind=container
Device myproxy8118 added to mycontainer

ubuntu@DESKTOP-WSL2:~$

Now, get a shell in the container and configure the proxy!

ubuntu@DESKTOP-WSL2:~$ lxc exec mycontainer bash
root@mycontainer:~# export http_proxy=http://localhost:8118/
root@mycontainer:~# export https_proxy=http://localhost:8118/

It’s time to install and start nginx!

root@mycontainer:~# apt update
...
root@mycontainer:~# apt install -y nginx
...
root@mycontainer:~# service nginx start

nginx is installed. For a finer touch, let’s edit the default HTML file of the Web server a bit so that it is evident that the Web server runs in the container. Add some text you think is suitable, using the command

root@mycontainer:~# nano /var/www/html/index.nginx-debian.html

Up to now, there is a Web server running in the container. This container is not accessible from the host, and obviously not from Windows either. So, how can we view the website from Windows? By creating an additional proxy device. The command creates a proxy device called myproxy80 that proxies the TCP port 80 between the host and the container (this time the binding happens on the host because the port already exists in the container).

root@mycontainer:~# logout
ubuntu@DESKTOP-WSL2:~$ lxc config device add mycontainer myproxy80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80 bind=host

Finally, find the IP address of your WSL2 Ubuntu host (hint: use ifconfig) and connect to that IP using your Web browser.

Conclusion

We managed to install LXD in WSL2 and got a container to start. Then, we installed a Web server in the container and viewed the page from Windows.

I hope future versions of WSL2 will be more friendly to LXD. In terms of networking, more work is needed to make it work out of the box. In terms of storage, btrfs is supported (over a loop file) and it is fine.

on June 17, 2019 02:22 PM

So That Happened...

Stephen Michael Kellat

I previously made a call for folks to check in on a net so I could count heads. It probably was not the most opportune timing but it was what I had available. You can listen to the full net at https://archives.anonradio.net/201906170000_sdfarc.mp3 and you'll find my after-net call to all Ubuntu Hams at roughly 44 minutes and 50 seconds into the recording.

This was a first attempt. The folks at SDF were perfectly fine with me making the attempt. The net topic for the night was "special projects" we happened to be undertaking.

Now you might wonder what I might be doing in terms of special projects. That bit is special. Sunspots are a bit non-existent at the moment so I have been fiddling around with listening for distant stations on the AM broadcast band, which starts in the United States at 530 kHz and ends at 1710 kHz. From my spots in Ashtabula I end up hearing some fairly distant stations ranging from KYW 1060 in Philadelphia to WCBS 880 in New York City to WPRR 1680 in Ada, Michigan. When I am out driving Interstate Route 90 in the mornings during the winter I have had the opportunity to hear stations such as WSM 650 broadcasting from the vicinity of the Grand Ole Opry in Nashville, Tennessee. One time I got lucky and heard WSB 750 out of Atlanta while driving when conditions were right.

These were miraculous feats of physics. WolframAlpha would tell you that the distance between Ashtabula and Atlanta is about 593 miles/955 kilometers. In the computing realm we work very hard to replicate the deceptively simple. A double-sideband non-suppressed carrier amplitude modulated radio signal is one of the simplest voice transmissions that can be made. The receiving equipment for such is often just as simple. For all the infrastructure it would take to route a live stream over a distance somewhat further than that between Derry and London proper, far less would be needed for the one-way analog signal.

Although there is Digital Audio Broadcasting across Europe, we really still do not have it adopted across much of the United States. A primary problem is that it works best in areas with higher population density than we have in the USA. So far we have various trade names for IBOC (that is to say, in-band on-channel) subcarriers giving us hybrid signals. Digital-only IBOC has been tested at WWFD in Maryland and there was a proposal to the Federal Communications Commission to make a permanent rules change to make this possible. It appears in the American experience, though, that the push is more towards Internet-connected products like iHeartRadio and Spotify rather than the legacy media outlets that have public service obligations as well as emergency alerting obligations.

I am someone who considers the Internet fairly fragile as evidenced most recently by the retailer Target having a business disaster through being unable to accept payments due to communications failures. I am not against technology advances, though. Keeping connections to the technological ways of old as well as sometimes having cash in the wallet as well as knowing how to write a check seem to be skills that are still useful in our world today.

Creative Commons License
So That Happened... by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

on June 17, 2019 02:33 AM

Hack Computer review

Bryan Quigley

I bought a hack computer for $299 - it's designed for teaching 8+ year olds programming. That's not my intended use case, but I wanted to support a Linux pre-installed vendor with my purchase (I bought an OLPC back in the day in the buy-one give-one program).

I only use a laptop for company events, which are usually 2-4 weeks a year. Otherwise, I use my desktop. I would have bought a machine with Ubuntu pre-installed if I was looking for more of a daily driver.

The underlying specs of the ASUS Laptop E406MA they sell are:

Unboxing and first boot

Unboxing

Included was an:

  • introduction letter to parents
  • tips (more for kids)
  • 2 pages of hack stickers
  • 2 hack pins
  • ASUS manual bits
  • A USB to Ethernet adapter
  • and the laptop:

(Photos: laptop in sleeve, laptop out of sleeve, first open.)

First boot takes about 20 seconds, and you are then dropped into what I'm pretty sure is GNOME Initial Setup. It also asks, for Wifi connections, whether they are metered or not.


There are standard Phillips-head screws on the bottom of the laptop, but it wasn't easy to remove the bottom and I didn't want to push - I've been told there is nothing user-replaceable within.

The BIOS

The options I'd like to change are there, and updating the BIOS was easy enough from the BIOS itself (although no LVFS support...).

(Screenshots: BIOS EZ mode and BIOS advanced mode.)

A kids take

Keep in mind this review is done by a 6 year old, while the laptop is designed for an 8+ year old.

He liked playing the art game and the ball game. The ball game is an intro to the Hack content. The art game is just Krita - see the artwork below. The first load needed some help, but he got the hang of the symmetrical tool.

He was able to install an informational program about Football by himself, though he was hoping it was a game to play.

(Artwork: "AAAAA", my favorite water color.)

Overall

For the target market: It's really the perfect first laptop (if you want to buy new) with what I would generally consider the right trade-offs. Given Endless OS's ability to have great content pre-installed, I might have tried to go for a 128 GB drive. Endless OS is set up to use zram, which will minimize RAM issues as much as possible. The core paths are designed for kids, but some applications are definitely not. It will be automatically updating and improving over time. I can't evaluate the actual Hack content, whose first year is free; after that it will be $10 a month.

For people who want a cheap Linux pre-installed laptop: I don't think you can do better than this for $299.

Pros:

  • CPU really seems to be the best in this price range. A real Intel quad-core, but cheap enough to have missed some of the vulnerabilities that have plagued Intel (no HT).
  • Battery life is great
  • A 1080p screen

Cons:

  • RAM and disk sizes. Slow eMMC disk. Not upgradeable.
  • Fingerprint reader doesn't work today (and that's not part of their goal with the machine, it defaults to no password)
  • For free software purists, Trisquel didn't have working wireless or trackpad. The included usb->ethernet worked though.
  • Mouse can lack sensitivity at times
  • Ubuntu: I have had Wifi issues after suspend, but stopping and starting Wifi fixed them
  • Ubuntu: Boot times are slower than Endless
  • Ubuntu: Suspend sometimes loses the ability to play sound (gets stuck on headphones)

I do plan on investigating the issues above and seeing if I can fix any of them.

Using Ubuntu?

My recommendations:

  • Purge rsyslog (may speed up boot time and reduces unnecessary writes)
  • For this class of machine, I'd go deb only (remove snaps) and manual updating
  • Install zram-config
  • I'm currently running with Wayland and Chromium
  • If you don't want to use stock Ubuntu, I'd recommend Lubuntu.

Dive deeper

on June 17, 2019 12:00 AM

June 15, 2019

Kate Drane is a bit of an enigma. She helped launch hundreds of crowdfunding projects at Indiegogo (in fact, I worked with her on the Ubuntu Edge and Global Learning XPRIZE campaigns). She has helped connect hundreds of startups to expertise, capital, and customers at Techstars, and is a beer fan who co-founded a canning business called The Can Van.

There is one clear thread through her career: providing more efficient and better access for innovators, no matter what background they come from or what they want to create. Oh, and drinking great beer. She is fantastic and does great work.

In this episode of Conversations With Bacon we unpack her experiences of getting started in this work, her work facilitating broader access to information, funding, and people, what it was like to be at Indiegogo through the teenage years of crowdfunding, how she works to support startups, the experience of entrepreneurship from different backgrounds, and more.

Listen



The post Conversations With Bacon: Kate Drane, Techstars appeared first on Jono Bacon.

on June 15, 2019 01:14 AM

June 14, 2019

KDE.org Description Update

Jonathan Riddell

The KDE Applications website update was the minimal possible change to move it from an unmaintained and incomplete site to a self-maintaining and complete site. It’s been fun to see it get picked up in places like Ubuntu Weekly News and Late Night Linux, and when chatting to people in real life they mention they have seen it get an update. So clearly it’s important to keep our websites maintained. Alas, the social and technical barriers are too high in KDE. My current hope is that the Promo team will take over the kde-www stuff, giving it communication channels and transparency that don’t currently exist. There is plenty more work to be done on the kde.org/applications website to make it useful, so do give me a ping if you want to help out.

In the meantime I’ve updated the kde.org front page text box where there is a brief description of KDE. I remember a keynote from Aaron around 2010 at Akademy where he slagged off the description that was used on kde.org. Since then we have had Visions and Missions and Goals and whatnot defined, but nobody has thought to put them on the website. So here’s the new way of presenting KDE to the world:

Thanks to Carl and others for review.

 

on June 14, 2019 12:52 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, 214 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 17 hours (out of 14 hours allocated plus 10 extra hours from April, thus carrying over 7h to June).
  • Adrian Bunk did 0 hours (out of 8 hours allocated, thus carrying over 8h to June).
  • Ben Hutchings did 18 hours (out of 18 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated plus 0.25 extra hours from April, thus carrying over 0.25h to June).
  • Emilio Pozuelo Monfort did 33 hours (out of 18 hours allocated + 15.25 extra hours from April, thus carrying over 0.25h to June).
  • Hugo Lefeuvre did 18 hours (out of 18 hours allocated).
  • Jonas Meurer did 15.25 hours (out of 17 hours allocated, thus carrying over 1.75h to June).
  • Markus Koschany did 18 hours (out of 18 hours allocated).
  • Mike Gabriel did 23.75 hours (out of 18 hours allocated + 5.75 extra hours from April).
  • Ola Lundqvist did 6 hours (out of 8 hours allocated + 4 extra hours from April, thus carrying over 6h to June).
  • Roberto C. Sanchez did 22.25 hours (out of 12 hours allocated + 10.25 extra hours from April).
  • Sylvain Beucler did 18 hours (out of 18 hours allocated).
  • Thorsten Alteholz did 18 hours (out of 18 hours allocated).

Evolution of the situation

May was a calm month; nothing really changed compared to April, and we are still at 214 funded hours per month. We are still looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 34 packages with a known CVE and the dla-needed.txt file has 34 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on June 14, 2019 07:20 AM

Development in LXD

Ted Gould

Most of my development is done in LXD containers. I love this for a few reasons. It takes all of my development dependencies and makes it so that they're not installed on my host system, reducing the attack surface there. It means that I can do development on any Linux that I want (or several). But it also means that I can migrate my development environment from my laptop to my desktop depending on whether I need more CPU or whether I want it to be closer to where I'm working (usually when travelling).

When I'm traveling I use my Pagekite SSH setup on a Raspberry Pi as the SSH gateway. So when I'm at home I want to connect to the desktop directly, but when away connect through the gateway. To handle this I set up SSH to connect into the container no matter where it is. For each container I have an entry in my .ssh/config like this:

Host container-name
    User user
    IdentityFile ~/.ssh/id_container-name
    CheckHostIP no
    ProxyCommand ~/.ssh/if-home.sh desktop-local desktop.pagekite.me %h

You'll notice that I use a different SSH key for each container. They're easy to generate and it is worth not reusing them; this is a good practice. Then for the ProxyCommand I have a shell script that'll set up a connection depending on where the container is running and what network my laptop is on.

#!/bin/bash

set -e

CONTAINER_NAME=$3

SSH_HOME_HOST=$1
SSH_OUT_HOST=$2

ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )
ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

HOME_ROUTER_MAC="▒▒:▒▒:▒▒:▒▒:▒▒:▒▒"

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"
NC_COMMAND="nc -6 -q0"

IP=$( bash -c "${IP_COMMAND}" )
if [ "${IP}" != "" ] ; then
    # Local
    exec ${NC_COMMAND} ${IP} 22
fi

SSH_HOST=${SSH_OUT_HOST}
if [ "${HOME_ROUTER_MAC}" == "${ROUTER_MAC}" ] ; then
    SSH_HOST=${SSH_HOME_HOST}
fi

IP=$( echo ${IP_COMMAND} | ssh ${SSH_HOST} bash -l -s )

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\"" 

What this script does is first try to see if the container is running locally by trying to find its IP:

IP_COMMAND="lxc list --format csv --columns 6 ^${CONTAINER_NAME}\$ | head --lines=1 | cut -d ' ' -f 1"

If it can find that IP, then it just sets up an nc command to connect to the SSH port on that IP directly. If not, we need to see if we're on my home network or out and about. To do that I check whether the MAC address of the default router matches the one on my home network. This is a good way to check because it doesn't require sending additional packets onto the network or otherwise connecting to other services. To get the router's IP we look at which router is used to get to an address on the Internet:

ROUTER_IP=$( ip route get to 8.8.8.8 | sed -n -e "s/.*via \(.*\) dev.*/\\1/p" )

We can then find out the MAC address for that router using the ARP table:

ROUTER_MAC=$( arp -n ${ROUTER_IP} | tail -1 | awk '{print $3}' )

If that MAC address matches a predefined value (redacted in this post) I know that it's my home router, else I'm on the Internet somewhere. Depending on the case, I know whether I need to go through the proxy or whether I can connect directly. Once we can connect to the desktop machine, we can then look for the IP address of the container off of there using the same IP command running on the desktop. Lastly, we set up an nc to connect to the SSH daemon using the desktop as a proxy.

exec ssh ${SSH_HOST} -- bash -l -c "\"${NC_COMMAND} ${IP} 22\"" 

What all this means is that I can just type ssh container-name anywhere and it just works. I can move my containers wherever, my laptop wherever, and connect to my development containers as needed.

on June 14, 2019 12:00 AM

June 13, 2019

In the previous post about setting up an email server, I explained how I set up a forwarder using Postfix. This post will look at setting up Dovecot to store emails (and provide IMAP and authentication) on the server, using GPG encryption to make sure intruders can’t read our precious data!

Architecture

The basic architecture chosen for encrypted storage is that every incoming email is delivered to Dovecot via LMTP, and Dovecot then runs a sieve script that invokes a filter that encrypts the email with PGP/MIME using a user-specific key, before processing it further. Or in short:

postfix --lmtp--> dovecot --sieve--> filter --> gpg --> inbox

Security analysis: This means that the message will be on the system unencrypted as long as it is in a Postfix queue. This further means that the message plain text should be recoverable for quite some time after Postfix has deleted it, by investigating the file system. However, given enough time, the probability of being able to recover the messages should reduce substantially. Not sure how to improve this much.

And yes, if the email is already encrypted we’re going to encrypt it a second time, because we can nest encryption and signature as much as we want! Makes the code easier.

Encrypting an email with PGP/MIME

PGP/MIME is a trivial way to encrypt an email. Basically, we take the entire email message, armor-encrypt it with GPG, and stuff it, as the second attachment, into a multipart MIME message that carries the same headers; the first attachment is control information.

Technically, this means that we keep the headers twice, once encrypted inside and once in the clear outside. But the advantage compared to doing it more like most normal clients is clear: The code is a lot easier, and we can reverse the encryption and get back the original!

And when I say easy, I mean easy - the function to encrypt the email is just a few lines long:

def encrypt(message: email.message.Message, recipients: typing.List[str]) -> str:
    """Encrypt given message"""
    encrypted_content = gnupg.GPG().encrypt(message.as_string(), recipients)
    if not encrypted_content:
        raise ValueError(encrypted_content.status)

    # Build the parts
    enc = email.mime.application.MIMEApplication(
        _data=str(encrypted_content).encode(),
        _subtype='octet-stream',
        _encoder=email.encoders.encode_7or8bit)

    control = email.mime.application.MIMEApplication(
        _data=b'Version: 1\n',
        _subtype='pgp-encrypted; name="msg.asc"',
        _encoder=email.encoders.encode_7or8bit)
    control['Content-Disposition'] = 'inline; filename="msg.asc"'

    # Put the parts together
    encmsg = email.mime.multipart.MIMEMultipart(
        'encrypted',
        protocol='application/pgp-encrypted')
    encmsg.attach(control)
    encmsg.attach(enc)

    # Copy headers
    headers_not_to_override = {key.lower() for key in encmsg.keys()}

    for key, value in message.items():
        if key.lower() not in headers_not_to_override:
            encmsg[key] = value

    return encmsg.as_string()

Decrypting the email is even easier: Just pass the entire thing to GPG, it will decrypt the encrypted part, which, as mentioned, contains the entire original email with all headers :)

def decrypt(message: email.message.Message) -> str:
    """Decrypt the given message"""
    return str(gnupg.GPG().decrypt(message.as_string()))

(now, not sure if it’s a feature that GPG.decrypt ignores any unencrypted data in the input, but well, that’s GPG for you).

Of course, if you don’t actually need IMAP access, you could drop PGP/MIME and just pipe emails through gpg --encrypt --armor before dropping them somewhere on the filesystem, and then sync them via ssh somehow (e.g. patching maildirsync to encrypt emails it uploads to the server, and decrypting emails it downloads).
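As a minimal sketch of that simpler approach, reusing the same python-gnupg module (the directory layout and file naming below are purely illustrative assumptions, not part of the setup described above):

import os
import sys
import time
import typing

import gnupg


def drop_encrypted(directory: str, recipients: typing.List[str]) -> None:
    """Armor-encrypt a message read from stdin and drop it as a file (sketch)."""
    encrypted = gnupg.GPG().encrypt(sys.stdin.read(), recipients)
    if not encrypted:
        raise ValueError(encrypted.status)
    # One file per message; sync the directory to/from the server however you like.
    name = os.path.join(directory, '%d.%d.gpg' % (time.time(), os.getpid()))
    with open(name, 'w') as handle:
        handle.write(str(encrypted))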

Pretty Easy privacy (p≡p)

Now, we almost have a file conforming to draft-marques-pep-email-02, the Pretty Easy privacy (p≡p) format, version 2. That format allows us to encrypt headers, thus preventing people from snooping on our metadata!

Basically it relies on the fact that we have all the headers in the inner (encrypted) message. To mark an email as conforming to that format we just have to set the subject to p≡p and add a header describing the format version:

       Subject: =?utf-8?Q?p=E2=89=A1p?=
       X-Pep-Version: 2.0

A client conforming to p≡p will, when seeing this email, read any headers from the inner (encrypted) message.
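As a small, hypothetical helper (not part of the original script), marking the output of encrypt() accordingly could look like this:

import email.header
import email.message


def mark_as_pep(encmsg: email.message.Message) -> None:
    """Sketch: flag an already PGP/MIME-encrypted message as p≡p format 2."""
    # The real subject stays inside the encrypted part; hide it on the outside.
    del encmsg['Subject']
    encmsg['Subject'] = email.header.Header('p≡p', 'utf-8')
    encmsg['X-Pep-Version'] = '2.0'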

We also might want to change the code to only copy a limited amount of headers, instead of basically every header, but I’m going to leave that as an exercise for the reader.

Putting it together

Assume we have a Postfix and a Dovecot configured, and a script gpgmymail written using the function above, like this:

def main() -> None:
    """Program entry"""
    parser = argparse.ArgumentParser(
        description="Encrypt/Decrypt mail using GPG/MIME")
    parser.add_argument('-d', '--decrypt', action="store_true",
                        help="Decrypt rather than encrypt")
    parser.add_argument('recipient', nargs='*',
                        help="key id or email of keys to encrypt for")
    args = parser.parse_args()
    msg = email.message_from_file(sys.stdin)

    if args.decrypt:
        sys.stdout.write(decrypt(msg))
    else:
        sys.stdout.write(encrypt(msg, args.recipient))


if __name__ == '__main__':
    main()

(don’t forget to add missing imports, or see the end of the blog post for links to full source code)
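For reference, the imports that the snippets above rely on are roughly the following:

import argparse
import email
import email.encoders
import email.message
import email.mime.application
import email.mime.multipart
import sys
import typing

import gnupg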

Then, all we have to do is edit our .dovecot.sieve to add

filter "gpgmymail" "myemail@myserver.example";

and all incoming emails are automatically encrypted.

Outgoing emails

To handle outgoing emails, do not store them via IMAP, but instead configure your client to add a Bcc to yourself, and then filter that somehow in sieve. You probably want to set Bcc to something like myemail+sent@myserver.example, and then filter on the detail (the sent).

Encrypt or not Encrypt?

Now do you actually want to encrypt? The disadvantages are clear:

  • Server-side search becomes useless, especially if you use p≡p with encrypted Subject.

    Such a shame, you could have built your own GMail by writing a notmuch FTS plugin for dovecot!

  • You can’t train your spam filter via IMAP, because the spam trainer won’t be able to decrypt the email it is supposed to learn from

There are probably other things I have not thought about, so let me know on mastodon, email, or IRC!

More source code

You can find the source code of the script, and the setup for dovecot in my git repository.

on June 13, 2019 08:47 PM

S12E10 – Salamander

Ubuntu Podcast from the UK LoCo

This week we’ve been playing with tiling window managers, we “meet the forkers”, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Alan has been playing with i3wm.
  • We “meet the forkers”; when projects end, forks are soon to follow.

  • We share a command line lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image taken from Salamander arcade machine manufactured in 1986 by Konami.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on June 13, 2019 02:00 PM

A Modest Ham-Related Proposal

Stephen Michael Kellat

Over the past couple of months I have been trying to participate in the Monday morning net run by the SDF Amateur Radio Club from SDF.org. It has been pretty hard for me to catch up with any of the local amateur radio clubs. There is no local club associated with the American Radio Relay League in Ashtabula County, but it must be remembered that Ashtabula County is fairly large in terms of land area.

For reference, the state of Rhode Island and Providence Plantations has a dry land area of 1,033.81 square miles, while Ashtabula County has a dry land area of 702 square miles. Ashtabula County is 68% the size of the state of Rhode Island in terms of land area, even though its population is only 9.23% of Rhode Island’s. Did I hear mooing off in the distance somewhere? For British readers, it is safe to say I’m not just in a fairly isolated area but that it may resemble Ambridge a bit too much.

Now, the beautiful part about the SDF Amateur Radio Club net is that it takes place via the venerable EchoLink system. The package known as qtel allows access to the repeater-linking network from your Ubuntu desktop. Unusually, the Wikipedia page about EchoLink actually provides a fairly nice write-up for the non-specialist.

Now, there is a relatively old article on the American Radio Relay League’s website about Ubuntu. If you look at the Ubuntu Wiki, there is talk about Ubuntu Hams having their own net, but the last time that page was edited was 2012. While there is talk of an IRC channel, a quick look at irclogs.ubuntu.com suggests that the log bot has not been in the channel this month. E-mail to the Launchpad team’s mailing list is a bit sporadic.

I have been a bit MIA myself due to work pressures. That does not mean I am unwilling to act as the Net Control Station if there is a group willing to hold a net on EchoLink.

For now, I am going to make a modest proposal. If anybody is interested in such an Ubuntu net could you please check in on the SDF ARC net on June 17 at 0000 UTC? To hear what the most recent net sounded like, you can listen to the recorded archive of that net's audio in MP3 format. Just check in on June 17th at 0000 UTC and please stick around until after the net ends. We can talk about possibilities after the SDF net ends. All you need to do is be registered to use EchoLink and have appropriate software to connect to the appropriate conference.

I will cause notice of this blog post to be made to the Launchpad Team's mailing list.

Creative Commons License
A Modest Ham-Related Proposal by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

on June 13, 2019 02:35 AM

June 11, 2019

We are pleased to announce that Plasma 5.16 is now available in our backports PPA for Disco 19.04.

The release announcement detailing the new features and improvements in Plasma 5.16 can be found here.

Released along with this new version of Plasma is an update to KDE Frameworks 5.58. (5.59 will soon be in testing for Eoan 19.10 and may follow in the next few weeks.)

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.16, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.15 as included in the original 19.04 Disco release.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on June 11, 2019 03:24 PM

June 10, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 582 for the week of June 2 – 8, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 10, 2019 08:53 PM

Ep. 57 – O bom, o mau e o lambão

Podcast Ubuntu Portugal

In this episode we found out the latest destinations of Diogo Constantino’s travels, where he has been spending his money, and also what Tiago has been up to that kept him out of the last few episodes. You know the drill: listen, subscribe and share!

  • https://sintra2019.ubucon.org/
  • https://videos.ubuntu-paris.org/
  • https://slimbook.es/zero-smart-thin-client-linux-windows-fanless
  • https://slimbook.es/pedidos/mandos/mando-gaming-inal%C3%A1mbrico-nox-comprar
  • https://panopticlick.eff.org/
  • https://www.mozilla.org/en-US/firefox/67.0/releasenotes/
  • https://blog.mozilla.org/addons/2019/03/26/extensions-in-firefox-67/
  • https://discourse.ubuntu.com/t/mir-1-2-0-release/11034
  • https://www.linuxondex.com/
  • https://github.com/tcarrondo/aws-mfa-terraform
  • https://devblogs.microsoft.com/commandline/announcing-wsl-2/

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licences

The cover image is from VisualHunt and is licensed under the terms of the CC0 1.0 licence.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on June 10, 2019 02:47 PM

On 6 April, the Ubucon Portugal 2019 event took place at ISCTE. The event was organised by the Ubuntu Portugal Community, ISCTE – Instituto Universitário de Lisboa, ISTAR-IUL – Information Sciences and Technologies and Architecture Research Center, and the ISCTE-IUL ACM Student Chapter. The aim of the event was to promote open source software and the open source community under the umbrella of the Portuguese Ubuntu community, but it ended up becoming a much broader event than that, one that can be summed up by the following key words:

  • Awareness: Throughout the event, participants were repeatedly made aware of our digital rights: what to do to keep them, how to fight for them and, above all, how to understand them. Topics such as the infamous Articles 11 and 13 were analysed, along with the changes that will come when they are implemented by the European Union and adopted by each member state.
    Participants were also made aware of the kinds of software licensing that exist and of how it is possible to protect work that is developed in the open, as well as of all the processes needed to change our legislation to take these aspects into account.
  • Community: As this was an event aimed at the community, it was possible to watch presentations of projects driven by members of our community.
    We could follow the evolution of the KDE desktop environment from the very first version to the much-acclaimed KDE Plasma and its ecosystem, with KDE ceasing to be just a desktop environment and becoming a community-oriented ecosystem. It was also possible to debunk the old claim that “Linux is no good for gaming”: the current numbers of games supported on Linux, natively or through third-party software, were presented, and the talk also showed the efforts Steam is making to turn Linux into a gaming platform and to dispel once and for all the idea that you cannot game on Linux, or can only play simpler games.
  • Future: It is not possible to plan the future without first understanding the past, so the future plans of the Portuguese Ubuntu community were presented, along with its missions for what lies ahead. Participants were also introduced to ISCTE’s master’s programme in the open source area, perhaps one of the most out-of-the-box master’s degrees in the current national educational landscape, which gives open source supporters a special push to pursue a dream and train in this area. Finally, one of the biggest industrial revolutions of recent times, 3D printing, was debated: case
    studies and applications, where this technology will take us, and the impact 3D printing may have on industry and the environment, using products that can be degradable and environmentally friendly.

To wrap up, it was an event full of good atmosphere and great curiosity about open source, in which one of the most important things was never set aside: the spirit of community and fellowship for this cause and for this dream of a more open and transparent technological world. Is it worth taking part?
Of course, but not only as someone who attends the event: also as someone who organises events, supports the community and participates in it.

Note: As the community always aims for more and more, as soon as Ubucon Portugal ended the community immediately started preparing Ubucon Europe, which is why this recap comes rather late; even so, an event of this kind could not go unmentioned.
Thanks to everyone who was present, and may we all be together again next year for Ubucon Portugal 2020.

on June 10, 2019 09:34 AM

June 08, 2019

Over the past 9+ months I've been cleaning up stress-ng in preparation for a V0.10.00 release.   Stress-ng is a portable Linux/UNIX Swiss army knife of micro-benchmarking kernel stress tests.

The Ubuntu kernel team uses stress-ng for kernel regression testing in several ways:
  • Checking that the kernel does not crash when being stressed tested
  • Performance (bogo-op throughput) regression checks
  • Power consumption regression checks
  • Core CPU Thermal regression checks
The wide range of micro-benchmarks in stress-ng allows us to keep track of many metrics so that we can catch regressions.

I've tried to focus on several aspects of stress-ng over the last development cycle:
  • Improve per-stressor modularization. A lot of code has been moved from the core of stress-ng back into each stress test.
  • Clean up a lot of corner case bugs found when we've been testing stress-ng in production.  We exercise stress-ng on a lot of hardware and in various cloud instances, so we find occasional bugs in stress-ng.
  • Improve usability, for example, adding bash command completion.
  • Improve portability (various kernels, compilers and C libraries). It really builds and runs on a *lot* of Linux/UNIX/POSIX systems.
  • Improve kernel test coverage.  Try to exercise more kernel core functionality and reach parts other tests don't yet reach.
Over the past several days I've been running various versions of stress-ng on a gcov-enabled 5.0 kernel to measure kernel test coverage. The tool has been slowly gaining more core kernel coverage over time.

With the use of gcov + lcov, I can observe where stress-ng is not currently exercising the kernel and this allows me to devise stress tests to touch these un-exercised parts.  The tool has a history of tripping kernel bugs, so I'm quite pleased it has helped us to find corners of the kernel that needed improving.

This week I released V0.09.59 of stress-ng. Apart from the usual set of clean-up changes and bug fixes, this new release incorporates bash command line completion to make it easier to use. Once the 5.2 Linux kernel has been released and I'm satisfied that stress-ng covers the new 5.2 features, I will probably release V0.10.00. This will be a major release milestone, now that stress-ng has realized most of my original design goals.
on June 08, 2019 04:26 PM

June 07, 2019

Sinonym

Benjamin Mako Hill

I’d like to use “sinonym” as another word for an immoral act. Or perhaps to refer to the Chinese name for something. Sadly, I think it might just be another word for another word.

on June 07, 2019 06:46 PM

What is Ubunchu?

"Ubunchu!" is a Japanese manga series featuring Ubuntu Linux.
Three school students in a system-admin club are getting into Ubuntu!
(see http://seotch.wordpress.com/ubunchu/)

This manga is serialized in "Ubuntu Magazine Japan", the first magazine in Japan dedicated to Ubuntu. It is widely sold at book stores throughout Japan, and back numbers are licensed CC-BY-NC.

Back to the subject

I ordered a USB flash drive with an original Ubunchu design. However, it was very expensive (about $100 USD for 8GB :p).
Of course, Ubuntu is installed!

on June 07, 2019 02:30 AM

Another Japanese LoCo team member, Mitsuya Shibata (lp: cosmos-door), and I attended Open Source "Small" Conference 2011 Aizu, Japan.
Aizu is in Fukushima Prefecture. Fukushima suffered damage from the earthquake, tsunami, and nuclear disaster of 2011/03/11, but the open source community in Fukushima is very active!

Junya Terazono (寺薗淳也) is the organizer of OSSC Aizu. He previously worked on the Hayabusa mission at JAXA. And, of course, he is an Ubuntu user!
Junya's presentation was about how he uses Ubuntu in his work at a university.

Aizu sightseeing

Mitsuya and I traveled to Aizu the previous day.

This small temple, called Sazae-Do, was built more than 200 years ago.
Inside is a one-way slope with a double-helix structure, similar to a univalve shell; a Sazae (さざえ) is a univalve shell.
It is a straight path from the entrance, over the summit, to the exit — a very unusual and interesting design.

on June 07, 2019 02:30 AM

June 06, 2019

Updates for June 2019

Ubuntu Studio

We hope that Ubuntu Studio 19.04’s release has been a welcome update for our users. As such, we are continuing our work on Ubuntu Studio with our next release scheduled for October 17, 2019, codenamed “Eoan Ermine”. Bug fix for Ubuntu Studio Controls: a bug was identified in which the ALSA-Jack MIDI bridge was not surviving […]
on June 06, 2019 09:08 PM

Recently Michael blogged about epiphany being outdated in Ubuntu. While I don’t think a blog rant was the best way to handle the problem (several of the Ubuntu Desktop members are on #gnome-hackers, for example; it would have been easy to talk to us there), he was right that the Ubuntu package for epiphany was outdated.

Ubuntu does provide updates, even for packages in the universe repository

One thing Michael wrote was

Because Epiphany is in your universe repository, rather than main, I understand that Canonical does not provide updates

That statement is not really accurate.

First, Ubuntu is a community project and not only maintained by Canonical. For example, most of the work done in recent cycles on the epiphany package was from Jeremy (which is one of the reasons the package got outdated: Jeremy had to step down from that work and no-one picked it up).

Secondly, while it’s true that Canonical doesn’t provide official support for packages in universe, we do have engineers who take an interest in some of those components and help maintain them.

Epiphany is now updated (deb & snap)

Going back to the initial problem, Michael was right: in this case Ubuntu didn’t keep up with available updates for epiphany. That has now been resolved:

    • 3.28.5 is now available in Bionic (current LTS)
    • 3.32.1 is available in the devel series and in Disco (the current stable release)
    • The snap versions are a build of gnome-3-32 git for the stable channel and a build of master in the edge channel.

Snaps and GTK 3.24

Michael also wrote that

The snap is still using 3.30.4, because Epiphany 3.32 depends on GTK 3.24, and that is not available in snaps yet.

Again, the reality is a bit more complex. Snaps don’t have dependencies like debs do, so by nature they can’t be blocked by a missing dependency. To limit duplication we do provide a GNOME platform snap, though, and most of our GNOME snaps use it. That platform snap is built from our LTS archive, which is on GTK 3.22, and our snaps are built on similar infrastructure.

Ken and Marcus are working on resolving that problem by providing an updated gnome-sdk snap, but that’s not available yet. Meanwhile they changed the snap to build GTK itself instead of using the platform one, which unblocked the updates. Thanks Ken and Marcus!

Ubuntu does package GNOME updates

I saw a few other comments recently along the lines of “Ubuntu does not provide updates for its GNOME components in stable series” which I also wanted to address here.

We do provide stable updates for GNOME components! Ubuntu usually ships its new version with the .1 updates included from the start, and we do try to keep up with stable updates for point releases (especially for the LTS series).

Now, we have a small team and a lot to do, so it’s not unusual to see some delays in the process.
Also, while we have tools to track available updates, our reports currently only cover the active distro and not the stable series, which is a gap that sometimes leads us to miss updates.
I’ve now hacked up a stable report and reviewed the current output, and we will work on updating a few components that are currently outdated as a result.

Oh, and as a note, we do tend to skip updates which are “translation updates only”, because Launchpad allows us to get those without needing a stable package upload (the strings are shared across series, so getting the new version/translations uploaded to the most recent series is enough to have them available for the next language pack stable updates).

In conclusion, if as an upstream or a user you have an issue with a component that is still outdated in Ubuntu, feel free to get in touch with us (IRC/email/Launchpad) and we will do our best to fix the situation.

on June 06, 2019 03:07 PM

June 01, 2019

A relatively quiet free software month; I’m feeling the Debian Buster final freeze fatigue for sure. I also dealt with a bunch of personal and work stuff that kept me busy otherwise, and haven’t been very good at logging activities, so this will be a short one…

Debian packaging work

2019-05-01: Upload live-wrapper (0.10) to debian unstable (Closes: #927216, #927217).

2019-05-01: Upload live-tasks (0.6) to debian unstable (Closes: #924211, #924214, #925331).

2019-05-16: Upload live-tasks (0.7) to debian unstable (Closes: #866391)

2019-05-24: File unblock request for live-tasks (0.7)

Debian live

2019-05-01: Update debian-live local scripts to fix stale fstab, duplicate sources.list entries.

2019-05-02: Full testing on daily build media.

2019-05-02: Prepare Debian Live RC1 announcement.

2019-05-03: Spot check testing on RC1 builds.

DebConf

2019-05-01: Submit DebConf BoF proposal for “100 Paper cuts kick-off“.

2019-05-01: Post initial bursary results.

2019-05-02: Post bursary results for smaller (<$200) amounts.

2019-05-02: Submit DebConf BoF proposal for “Debian Live“.

And lots of bursaries admin throughout the month. To be honest I’m glad that the bulk of it is mostly over.

on June 01, 2019 06:37 PM

May 31, 2019

As announced, cloud-init 19.1 was released last Friday! From the announcement, some highlights include:

  • Azure datasource telemetry, network configuration and ssh key hardening
  • New config module for interacting with third party drivers on Ubuntu
  • EC2 Classic instance support for network config changes across reboot
  • Add support for the com.vmware.guestInfo OVF transport
  • Scaleway: support ssh keys provided inside an instance tag
  • Better NoCloud support for case-insensitive fs labels
  • SuSE network sysconfig rendering fixes for IPv6, default routes and startmode

It was really exciting to see that we had a large number of community commits this release cycle as well!
on May 31, 2019 12:00 AM

May 28, 2019

Previously: v5.0.

Linux kernel v5.1 has been released! Here are some security-related things that stood out to me:

introduction of pidfd
Christian Brauner landed the first portion of his work to remove pid races from the kernel: using a file descriptor to reference a process (“pidfd”). Now /proc/$pid can be opened and used as an argument for sending signals with the new pidfd_send_signal() syscall. This handle will only refer to the original process at the time the open() happened, and not to any later “reused” pid if the process dies and a new process is assigned the same pid. Using this method, it’s now possible to racelessly send signals to exactly the intended process without having to worry about pid reuse. (BTW, this commit wins the 2019 award for Most Well Documented Commit Log Justification.)

explicitly test for userspace mappings of heap memory
During Linux Conf AU 2019 Kernel Hardening BoF, Matthew Wilcox noted that there wasn’t anything in the kernel actually sanity-checking when userspace mappings were being applied to kernel heap memory (which would allow attackers to bypass the copy_{to,from}_user() infrastructure). Driver bugs or attackers able to confuse mappings wouldn’t get caught, so he added checks. To quote the commit logs: “It’s never appropriate to map a page allocated by SLAB into userspace” and “Pages which use page_type must never be mapped to userspace as it would destroy their page type”. The latter check almost immediately caught a bad case, which was quickly fixed to avoid page type corruption.

LSM stacking: shared security blobs
Casey Schaufler has landed one of the major pieces of getting multiple Linux Security Modules (LSMs) running at the same time (called “stacking”). It is now possible for LSMs to share the security-specific storage “blobs” associated with various core structures (e.g. inodes, tasks, etc) that LSMs can use for saving their state (e.g. storing which profile a given task is confined under). The kernel originally gave only the single active “major” LSM (e.g. SELinux, AppArmor, etc) full control over the entire blob of storage. With “shared” security blobs, the LSM infrastructure does the allocation and management of the memory, and LSMs use an offset for reading/writing their portion of it. This unblocks the way for “medium sized” LSMs (like SARA and Landlock) to get stacked with a “major” LSM as they need to store much more state than the “minor” LSMs (e.g. Yama, LoadPin) which could already stack because they didn’t need blob storage.

SafeSetID LSM
Micah Morton added the new SafeSetID LSM, which provides a way to narrow the power associated with the CAP_SETUID capability. Normally a process with CAP_SETUID can become any user on the system, including root, which makes it a meaningless capability to hand out to non-root users in order for them to “drop privileges” to some less powerful user. There are trees of processes under Chrome OS that need to operate under different user IDs and other methods of accomplishing these transitions safely weren’t sufficient. Instead, this provides a way to create a system-wide policy for user ID transitions via setuid() (and group transitions via setgid()) when a process has the CAP_SETUID capability, making it a much more useful capability to hand out to non-root processes that need to make uid or gid transitions.

ongoing: refcount_t conversions
Elena Reshetova continued landing more refcount_t conversions in core kernel code (e.g. scheduler, futex, perf), with an additional conversion in btrfs from Anand Jain. The existing conversions, mainly when combined with syzkaller, continue to show their utility at finding bugs all over the kernel.

ongoing: implicit fall-through removal
Gustavo A. R. Silva continued to make progress on marking more implicit fall-through cases. What’s so impressive to me about this work, like refcount_t, is how many bugs it has been finding (see all the “missing break” patches). It really shows how quickly the kernel benefits from adding -Wimplicit-fallthrough to keep this class of bug from ever returning.

stack variable initialization includes scalars
The structleak gcc plugin (originally ported from PaX) had its “by reference” coverage improved to initialize scalar types as well (making “structleak” a bit of a misnomer: it now stops leaks from more than structs). Barring compiler bugs, this means that all stack variables in the kernel can be initialized before use at function entry. For variables not passed to functions by reference, the -Wuninitialized compiler flag (enabled via -Wall) already makes sure the kernel isn’t building with local-only uninitialized stack variables. And now with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL enabled, all variables passed by reference will be initialized as well. This should eliminate most, if not all, uninitialized stack flaws with very minimal performance cost (for most workloads it is lost in the noise), though it does not have the stack data lifetime reduction benefits of GCC_PLUGIN_STACKLEAK, which wipes the stack at syscall exit. Clang has recently gained similar automatic stack initialization support, and I’d love to see this feature in native gcc. To evaluate the coverage of the various stack auto-initialization features, I also wrote regression tests in lib/test_stackinit.c.

That’s it for now; please let me know if I missed anything. The v5.2 kernel development cycle is off and running already. :)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on May 28, 2019 03:49 AM

May 27, 2019

Today I am excited to launch my brand new podcast, Conversations With Bacon!

The idea is simple: when I think back over my career, the times I have learned the most, and had the most fun learning, have been detailed, interesting discussions with interesting people from all walks of life. This exchange of ideas and experience is often fascinating, and presents new ways for me to think about my own work and projects.

My aim with Conversations With Bacon is to bring on a wide range of people to explore the ideas and experiences that have shaped their work and lives. I don’t just want to discuss what they work on; I want to get into the driving forces behind their thinking and approach. The aim of all of this is to give you an interesting discussion to be a fly on the wall for, and hopefully to glean some new ideas for yourself. There will be new shows about every three weeks.

For this first show, I am thrilled to bring on Todd Lewis, who created the All Things Open conference in Raleigh in 2012. He has gone on to grow the event from 600 attendees in its first year to over 4000 attendees last year. Todd is a bit of a renaissance man when it comes to events, and in this first episode of the podcast we explore what goes into great events, how to get the right balance between content and taking care of sponsors, building an open marketplace of (often contrasting) ideas, and what Todd has learned over the years as he has refined his craft.

Listen to the show below, and I would love to hear your feedback on how I can improve it, and which guests you think would be interesting to bring on. Like anything, this is a learning experience, and it will take time to get the show format and content perfected: your input will help me get there faster!

Listen


Listen on Google Play Music

The post Conversations With Bacon: Todd Lewis, Founder of All Things Open appeared first on Jono Bacon.

on May 27, 2019 09:35 PM

May 24, 2019

As you may have noticed on Twitter, Mastodon, IRC, our mailing lists, and this website, we have now launched a forum using the incredibly popular open source Discourse software. We join Ubuntu, Ubuntu MATE, Ubuntu Budgie, LXQt, Phabricator, and others that share this powerful tool for communication with us. This forum is a general meeting […]
on May 24, 2019 07:08 PM

May 23, 2019

Ep. 56 – Laravel Dingo

Podcast Ubuntu Portugal

In this episode we talked with José Postiga, a developer from the Laravel Portugal community and a contributor to several Free Software projects, who recently migrated to Ubuntu thanks to Disco Dingo!

  • https://www.laravel.pt/
  • http://laracon.eu/
  • http://devday.io/

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licences

The cover image is “Albino Dingo puppies” by TheGirlsNY, licensed under CC BY-SA 2.0.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorisation.

on May 23, 2019 08:37 AM

May 21, 2019

Like every month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 204 work hours were dispatched among 14 paid contributors. Their reports are available:

  • Abhijith PA did 4 hours (out of 14 hours allocated, thus carrying over 10 hours to May).
  • Adrian Bunk did 8 hours (out of 8 hours allocated).
  • Ben Hutchings did 31.25 hours (out of 17.25 hours allocated plus 14 extra hours from April).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 17 hours (out of 17.25 hours allocated, thus carrying over 0.25h to May).
  • Emilio Pozuelo Monfort did 8 hours (out of 17.25 hours allocated + 6 extra hours from March, thus carrying over 15.25h to May).
  • Hugo Lefeuvre did 17.25 hours.
  • Jonas Meurer did 14 hours (out of 14 hours allocated).
  • Markus Koschany did 17.25 hours.
  • Mike Gabriel did 11.5 hours (out of 17.25 hours allocated, thus carrying over 5.75h to May).
  • Ola Lundqvist did 5.5 hours (out of 8 hours allocated + 1.5 extra hours from last month, thus carrying over 4h to May).
  • Roberto C. Sanchez did 1.75 hours (out of 12 hours allocated, thus carrying over 10.25h to May).
  • Sylvain Beucler did 17.25 hours.
  • Thorsten Alteholz did 17.25 hours.

Evolution of the situation

During this month, and after a two-year break, Jonas Meurer became an active LTS contributor again. Still, we continue to look for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The number of sponsors did not change. There are 58 organizations sponsoring 215 work hours per month.

The security tracker currently lists 33 packages with a known CVE and the dla-needed.txt file has 31 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on May 21, 2019 02:11 PM

May 18, 2019

Paco Molinero, Javier Teruelo and Marcos Costales will debate whether it is possible to reclaim a Windows licence, and whether Android is really Linux.

Ubuntu y otras hierbas

Listen to us on:
on May 18, 2019 05:53 PM

Are you using Kubuntu 19.04, our current Stable release? Or are you already running our daily development builds?

We currently have Plasma 5.15.90 (Plasma 5.16 Beta)  available in our Beta PPA for Kubuntu 19.04, and in our 19.10 development release daily live ISO images.

For 19.04 Disco Dingo, add the PPA and then upgrade

sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt update && sudo apt full-upgrade -y

Then reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

For already installed 19.10 Eoan Ermine development release systems, simply upgrade your system.

Update directly from Discover, or use the command line:

sudo apt update && sudo apt full-upgrade -y

And reboot. If you cannot reboot from the application launcher,

systemctl reboot

from the terminal.

Otherwise, to test or install the live image grab an ISO build from the daily live ISO images link.

Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

  • If you believe you might have found a packaging bug, you can use your launchpad.net account to post testing feedback to the Kubuntu team (an account is required).
  • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

Please review the changelog.

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.15.5?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “clock combobox instead of tri-state checkbox for 12/24 hour display.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important beta release in shape for Kubuntu 19.10 as well as added to our backports.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

on May 18, 2019 04:46 PM