January 02, 2018
So, you’ve clicked on a link or come to check for a new release at smdavis.us, and now you’re here at bluesabre.org. Fear not! Everything is working just as it should.
To kick off 2018, I’ve started tidying up my personal brand. Since my website has consistently been about FOSS updates, I’ve transitioned to a more fitting .org domain. The .org TLD is often associated with community and open source initiatives, and the content you’ll find here is always going to fit that bill. You can continue to expect a steady stream of Xfce and Xubuntu updates.
And that’s enough of that, let’s get started with the new year. 2018 is going to be one of the best yet!
December 31, 2017
2017 is ending. It’s been a rather uneventful year, I’d say. About 6 months ago I started working on my master’s thesis – it plays with adding linear types to Go – and I handed that in about 1.5 weeks ago. It’s not really complete, though – you cannot actually use it on a complete Go program. The source code is of course available on GitHub: it’s a bunch of Go code for the implementation and a bunch of Markdown and LaTeX for the document. I’m happy about the code coverage, though: as a properly developed software project, it achieves about 96% code coverage – the missing parts being at the end, when time ran out.
I released apt 1.5 this year, and started 1.6 with seccomp sandboxing for methods.
I went to DebConf17 in Montreal. I unfortunately did not make it to DebCamp, nor the first day, but I at least made the rest of the conference. There, I gave a talk about APT development in the past year, and had a few interesting discussions. One thing that directly resulted from such a discussion was a new proposal for delta upgrades, with a very simple delta format based on a variant of bsdiff (with external compression, streamable patches, and constant memory use rather than linear). I hope we can implement this – the savings are enormous with practically no slowdown (there is no reconstruction phase, upgrades are streamed directly to the file system), which is especially relevant for people with slow or data capped connections.
This month, I’ve been buying a few “toys”: I got a pair of speakers (JBL LSR 305), and a pair of noise-cancelling headphones (a Sony WH-1000XM2). Nice stuff. I’ve been wearing the headphones most of today, and they’re quite comfortable and really make things quiet, except for their own noise.
Well, both the headphones and the speakers have a white noise issue, but oh well, the prices were good.
This time of the year is not only a time to look back at the past year, but also to look forward to the year ahead. In one week, I’ll be joining Canonical to work on Ubuntu foundations stuff. It’s going to be interesting. I’ll also be moving shortly: having partially lived in student housing for 6 years (one room and a shared kitchen), I’ll be moving to a complete apartment.
On the APT front, I plan to introduce a few interesting changes. One of them involves automatic removal of unused packages: this should happen automatically during install, upgrade, and so on. Maybe not for all packages, though – we might have a list of “safe” autoremovals. I’d also be interested in adding metadata for transitions: like if libfoo1 replaces libfoo0, we can safely remove libfoo0 if nothing depends on it anymore. Maybe not for all “garbage” either – it might make sense to restrict it to new garbage, that is, packages that become unused as part of the operation. This is important for safe handling of existing setups with automatically removable packages: we don’t suddenly want to remove them all when you run upgrade.
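To make the “new garbage” idea concrete, here is a tiny illustration (not APT code, which is C++; the sets simply stand in for APT’s view of automatically installed packages that nothing depends on any more):
# Hypothetical illustration of "only remove new garbage" (not APT's actual code).
# Each set holds names of automatically installed packages that nothing depends
# on any more, computed before and after resolving an operation.
def new_garbage(garbage_before, garbage_after):
    """Packages that became unused only as part of this operation."""
    return garbage_after - garbage_before

# Example: libfoo0 was already unused before the upgrade, so it stays installed;
# libbar1 only became unused because the upgrade replaced it with libbar2.
before = {"libfoo0"}
after = {"libfoo0", "libbar1"}
print(new_garbage(before, after))  # {'libbar1'}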
The other change is about sandboxing. You might have noticed that sometimes, sandboxing is disabled with a warning because the method would not be able to access the source or the target. The goal is to open these files in the main program and send file descriptors to the methods via a socket. This way, we can avoid permission problems, and we can also make the sandbox stronger – for example, by not giving it access to the partial/ directory anymore.
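APT itself is written in C++, but the mechanism described here is ordinary SCM_RIGHTS file-descriptor passing over a Unix socket; a minimal Python sketch of the idea (the file path and helper names are placeholders) looks like this:
# Minimal sketch of passing an already-opened file descriptor to a sandboxed
# worker over a Unix socket pair (SCM_RIGHTS). Illustration only, not APT code.
import array
import os
import socket

def send_fd(sock, fd):
    # The ancillary data carries the descriptor; the payload is a marker byte.
    sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]))])

def recv_fd(sock):
    fds = array.array("i")
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
    for cmsg_level, cmsg_type, cmsg_data in ancdata:
        if cmsg_level == socket.SOL_SOCKET and cmsg_type == socket.SCM_RIGHTS:
            fds.frombytes(cmsg_data[:fds.itemsize])
    return fds[0]

# The "main program" opens the file with its own privileges...
parent, child = socket.socketpair(socket.AF_UNIX, socket.SOCK_STREAM)
with open("/etc/hostname", "rb") as f:
    send_fd(parent, f.fileno())
    # ...and the sandboxed method only ever sees the descriptor, not the path.
    fd = recv_fd(child)
    print(os.read(fd, 64))
    os.close(fd)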
Another change we need to work on is standardising the “Important” field, which is sort of like Essential – it marks an installed package as extra-hard to remove (but unlike Essential, does not cause apt to install it automatically). The latest draft calls it “Protected”, but I don’t think we have a consensus on that yet.
I also need to get happy eyeballs done – fast fallback from IPv6 to IPv4. I had a completely working solution some months ago, but it did not pass CI, so I decided to start from scratch with a cleaner design to figure out if I went wrong somewhere. Testing this is kind of hard, as it basically requires a broken IPv6 setup (well, unreachable IPv6 servers).
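For illustration only (APT’s implementation is C++ and more careful), the general shape of the algorithm can be sketched in Python: start IPv6, give it a short head start, race IPv4 if IPv6 hasn’t connected yet, and take the first successful connection.
# Rough happy-eyeballs sketch: IPv6 first, IPv4 started shortly afterwards,
# first successful connection wins. Illustration only, not APT code.
import socket
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

def _connect(addrinfo, timeout):
    family, socktype, proto, _, sockaddr = addrinfo
    sock = socket.socket(family, socktype, proto)
    sock.settimeout(timeout)
    try:
        sock.connect(sockaddr)
    except OSError:
        sock.close()
        raise
    return sock

def happy_eyeballs(host, port, head_start=0.25, timeout=10):
    infos = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)
    v6 = next((ai for ai in infos if ai[0] == socket.AF_INET6), None)
    v4 = next((ai for ai in infos if ai[0] == socket.AF_INET), None)
    pool = ThreadPoolExecutor(max_workers=2)
    pending = set()
    if v6 is not None:
        pending.add(pool.submit(_connect, v6, timeout))
    if v4 is not None:
        # Give IPv6 a short head start; only race IPv4 if IPv6 hasn't won yet.
        done, pending = wait(pending, timeout=head_start)
        for future in done:
            if future.exception() is None:
                return future.result()
        pending.add(pool.submit(_connect, v4, timeout))
    while pending:
        done, pending = wait(pending, return_when=FIRST_COMPLETED)
        for future in done:
            if future.exception() is None:
                return future.result()  # first successful connection wins
    raise OSError("could not connect to %s:%s" % (host, port))

# Example use (a real implementation would also close the losing connection):
# sock = happy_eyeballs("example.org", 80)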
Oh well, 2018 has begun, so I’m going to stop now. Let’s all do our best to make it awesome!
Filed under: Debian, General, Ubuntu
December 30, 2017
Viewing the DeepLens video streams over ssh, instead of connecting to the DeepLens with a micro-HDMI cable, monitor, keyboard, and mouse.
Credit for this excellent idea goes to Ernie Kim. Thank you!
Instructions without ssh
The standard AWS DeepLens instructions recommend connecting the device to a monitor, keyboard, and mouse. The instructions provide information on how to view the video streams in this mode:
If you are connected to the DeepLens using a monitor, you can view the unprocessed device stream (raw camera video before being processed by the model) using this command on the DeepLens device:
mplayer -demuxer lavf /opt/awscam/out/ch1_out.h264
If you are connected to the DeepLens using a monitor, you can view the project stream (video after being processed by the model on the DeepLens) using this command on the DeepLens device:
mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 /tmp/ssd_results.mjpeg
Instructions with ssh
You can also view the DeepLens video streams over ssh, without having a monitor connected to the device. To make this possible, you need to enable ssh access on your DeepLens. This is available as a checkbox option in the initial setup of the device. I’m working to get instructions on how to enable ssh access afterwards and will update once this is available.
To view the video streams over ssh, we take the same mplayer command options above and the same source stream files, but send the stream over ssh and feed the result to the stdin of an mplayer process running on the local system, presumably a laptop.
All of the following commands are run on your local laptop (not on the DeepLens device).
You need to know the IP address of your DeepLens device on your local network:
ip_address=[IP ADDRESS OF DeepLens]
You will need to install the mplayer software on your local laptop. This varies with your OS, but for Ubuntu:
sudo apt-get install mplayer
You can view the unprocessed device stream (raw camera video before being processed by the model) over ssh using the command:
ssh aws_cam@$ip_address cat /opt/awscam/out/ch1_out.h264 |
mplayer -demuxer lavf -
You can view the project stream (video after being processed by the model on the DeepLens) over ssh with the command:
ssh aws_cam@$ip_address cat /tmp/ssd_results.mjpeg |
mplayer -demuxer lavf -lavfdopts format=mjpeg:probesize=32 -
Benefits of using ssh to view the video streams include:
You don’t need to have an extra monitor, keyboard, mouse, and micro-HDMI adapter cable.
You don’t need to locate the DeepLens close to a monitor, keyboard, mouse.
You don’t need to be physically close to the DeepLens when you are viewing the video streams.
For those of us sitting on a couch with a laptop, a DeepLens across the room, and no extra micro-HDMI cable, this is great news!
Bonus
To protect the security of your sensitive DeepLens video feeds:
- Use a long, randomly generated password for ssh on your DeepLens, even if you are only using it inside a private network (one quick way to generate one is shown after this list).
- I would recommend setting up .ssh/authorized_keys on the DeepLens so you can ssh in with your personal ssh key, test it, then disable password access for ssh on the DeepLens device. Don’t forget the password, because it is still needed for sudo.
- Enable automatic updates on your DeepLens so that Ubuntu security patches are applied quickly. This is available as an option in the initial setup, and should be possible to do afterwards using the standard Ubuntu unattended-upgrades package.
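On the first tip, any good password generator will do; one quick option, using Python’s standard secrets module (shown purely as an example), is:
import secrets
print(secrets.token_urlsafe(24))  # prints a ~32-character URL-safe random string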
Unrelated side note: It’s kind of nice having the DeepLens run a standard Ubuntu LTS release. Excellent choice!
Original article and comments: https://alestic.com/2017/12/aws-deeplens-video-stream-ssh/
December 29, 2017
I’ve been using GTD to organize projects for a long time. The “tickler file” in particular is a crucial part of how I handle scheduling of upcoming and recurring tasks. I’ve blogged about some of the scripts I’ve written to help me do so in the past at https://s3hh.wordpress.com/2013/04/19/gtd-managing-projects/ and https://s3hh.wordpress.com/2011/12/10/tickler/. This week I’ve combined these tools, slightly updated them, added an install script, and put them on GitHub at http://github.com/hallyn/gtdtools.
Disclaimer
The opinions expressed in this blog are my own views and not those of Cisco.
I wanted a bench power supply for powering small projects and devices I’m testing. I ended up with a DIY approach for around $30 and am very happy with the outcome. It’s a simple project that almost anyone can do and is a great introductory power supply for any home lab.
I had a few requirements when I set out:
- Variable voltage (up to ~12V)
- Current limiting (to protect against stupid mistakes)
- Small footprint (my electronics work area is only about 8 square feet)
- Relatively cheap
Initially, I considered buying an off the shelf bench power supply, but most of those are either very expensive, very large, or both. I also toyed with the idea of an ATX power supply as a bench power supply, but those don’t offer current limiting (and are capable of delivering enough current to destroy any project I’m careless with).
I had seen a few DC-DC buck converter modules floating around, but most had pretty bad reviews, until the Ruidong DPS series came out. These have quickly become quite popular modules, with support for up to 50V at 5A – a 250W power supply! Because of the buck topology, they require a DC input at a higher voltage than the output, but that’s easily provided with another power supply. In my case, I decided to use cheap power supplies from electronic devices (commonly called “wall warts”). (I’m actually reusing one from an old router.)
I’m far from the first to do such a project, but I still wanted to share as well as describe what I’d like to do in the future.

This particular unit consists of a DPS3005 that I got for about $25 from AliExpress. (The DPS5005 is now available on Amazon with Prime. Had that been the case at the time I built this, I likely would have gone with that option.)
I placed the power supply in a plastic enclosure and added a barrel jack for input power, and added 5-way binding posts for the output. This allows me to connect banana plugs, breadboard leads, or spade lugs to the power supply.

Internally, I connected the parts with some 18 AWG red/black zip cord using crimped ring connectors on the binding posts, the screw terminals on the power supply, and solder on the barrel jack. Where possible, the connections were covered with heat shrink tubing.
I used this power supply in developing my Christmas Ornament, and it worked a treat. It allowed me to simulate behavior at lower battery voltages (though note that it is not a battery replacement – it does not simulate the internal resistance of a run down battery) and figure out how long my ornament was likely to run, and how bright it would be as the battery ran down.
I’ve also used it to power a few embedded devices that I’ve been using for security research, and I think it would make a great tool for voltage glitching in the future. (In fact, I saw Dr. Dmitry Nedospasov demonstrate a voltage glitching attack using a similar module at hardwaresecurity.training.)
In the future, I’d like to build a larger version with an internal AC to DC power supply (maybe a repurposed ATX supply) and either two or three of the DPS power modules to provide output. Note that, due to the single AC to DC supply, they would not be isolated channels – both would have the same ground reference, so it would not be possible to reference them to each other. For most use cases, this wouldn’t be a problem, and both channels would be isolated from mains earth if an isolated switching supply is used as the first stage power supply.
December 28, 2017
Every year we do a bit of a pub crawl in Birmingham between Christmas and New Year; a chance to get away from the turkey risotto, and hang out with people and talk about techie things after a few days away with family and so on. It’s all rather loosely organised — I tried putting exact times on every pub once and it didn’t work out very well. So this year, 2017, I wanted a map which showed where we were so people can come and find us — it’s a twelve-hour all-day-and-evening thing but nobody does the whole thing[1] so the idea is that you can drop in at some point, have a couple of drinks, and then head off again. For that, you need to know where we all are.
Clearly, the solution here is technology; I carry a device in my pocket[2] which knows where I am and can display that on a map. There are a few services that do this, or used to — Google Latitude, FB messenger does it, Apple find-my-friends — but they’re all “only people with the Magic Software can see this”, and “you have to use our servers”, and that’s not very web-ish, is it? What I wanted was a thing which sat there in the background on my phone and reported my location to my server when I moved around, and didn’t eat battery. That wouldn’t be tricky to write but I bet there’s a load of annoying corner cases, which is why I was very glad to discover that OwnTracks have done it for me.
You install their mobile app (for Android or iOS) and then configure it with the URL of your server and every now and again it reports your location by posting JSON to that URL saying what your location is. Only one word for that: magic darts. Exactly what I wanted.
It’s a little tricky because of that “don’t use lots of battery” requirement. Apple heavily restrict background location sniffing, for lots of good reasons. If your app is the active app and the screen’s unlocked, it can read your location as often as it wants, but that’s impractical. If you want to get notified of location changes in the background on iOS then you only get told if you’ve moved more than 500 metres in less than five minutes[3] which is fine if you’re on the motorway but less fine if you’re walking around town and won’t move that far. However, you can nominate certain locations as “waypoints” and then the app gets notified whenever it enters or leaves a waypoint, even if it’s in the background and set to “manual mode”. So, I added all the pubs we’re planning on going to as waypoints, which is a bit annoying to do manually but works fine.
OwnTracks then posts my location to a tiny PHP file which just dumps it in a big JSON list. The #brumtechxmas 2017 map then reads that JSON file and plots the walk on the map (or it will do once we’re doing it; as I write this, the event isn’t until tomorrow, Friday 29th December, but I have tested it out).
The map is an SVG, embedded in the page. This has the nice property that I can change it with CSS. In particular, the page looks at the list of locations we’ve been in and works out whether any of them were close enough to a pub on the map that we probably went in there… and then uses CSS to colour the pub we’re in green, and ones we’ve been in grey. So it’s dynamic! Nice and easy to find us wherever we are. If it works, which is a bit handwavy at this point.
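The proximity check itself is just a distance comparison. The real page does it in JavaScript against the SVG; purely to illustrate the idea (the pub coordinates and the 50-metre threshold below are made up), a Python sketch could look like:
# Rough sketch of the "close enough to a pub" check (the real page does this
# in JavaScript). Coordinates and threshold are made up for illustration.
from math import asin, cos, radians, sin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in metres (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371000 * asin(sqrt(a))

pubs = {"The Wellington": (52.4797, -1.9010), "Post Office Vaults": (52.4790, -1.9003)}
locations = [(52.4796, -1.9011), (52.4770, -1.8990)]  # as reported by OwnTracks

visited = {name for name, (plat, plon) in pubs.items()
           if any(distance_m(lat, lon, plat, plon) < 50 for lat, lon in locations)}
print(visited)  # pubs to colour green in the SVG; the rest stay grey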
If you’re coming, see you tomorrow. If you’re not coming: you should come. :-)

1. well, except me. And hero of the revolution Andy Yates.
2. and you do too
3. the OwnTracks docs explain this in more detail
Draft of a proposal I'm working on. Feedback/improvements welcome.
December 27, 2017
It seems such a passive word for a passive role.
Let's consider how it is instead a position of power.
First, as a bystander, I can observe what is happening which nobody else sees, because nobody else is standing exactly where I am. Nobody else has my mix of genes and history and all of what makes me who I am and so I see uniquely.
As bystanders each of us has power we often do not grasp. It is of the moment. We can plan, and prepare so that we are ready to act, intervene if necessary; build up potential energy. While remaining polite, I can step in to help, intervene, participate, engage. I can ACT.
Pro-tip: run this program (courtesy of the Linuxchix):
1. be polite
2. be helpful
3. iterate
Boom! You have a team.
Supporting free software is one of the things I do. Right now is a great time to help support KDE.
KDE Powers You - You Can Power KDE, Too!
https://www.kde.org/fundraisers/yearend2017/
Over the Christmas period I had a need to watch some videos from my laptop on my TV via Chromecast. I once again tried my faithful old VLC player which according to the website should support casting in the latest release. But alas, Chromecast is disabled:
* No change rebuild to add some information about why we disable chromecast
support: it fails to build from source due to protobuf/mir:
- https://trac.videolan.org/vlc/ticket/18329
- https://github.com/google/protobuf/issues/206
Source: https://launchpad.net/ubuntu/+source/vlc/3.0.0~rc2-2ubuntu2
Then I came across ‘castnow’, which is a CLI-based app to stream an mp4 file to your Chromecast device. You can see the code here – https://github.com/xat/castnow
To install it, I needed the node package manager (npm); to do this on my system I ran:
sudo apt install npm
Then using npm you can install it by:
sudo npm install castnow
This will install the tool. Instructions for use are here – https://github.com/xat/castnow/blob/master/README.md
Now if you are like me and use the Plasma Desktop, there is an addon to the Dolphin menu which allows you to start the cast directly from Dolphin.
In a Dolphin window go to Settings > Configure Dolphin. In the Services pane click the “Download New Services” button. In the search box look for “cast” and install “Send to Chromecast” by Shaddar.
Now all you have to do is browse your collection of mp4 videos and use the Dolphin menu to play them on your Chromecast device. Pretty handy! I will certainly enjoy the holidays with this feature, with my favourite movies on a full-size HD screen.
As usual, this post does not necessarily represent the views of my employer (past, present, or future).
It’s Friday afternoon and the marketing manager receives an email with the new printed material proofs for the trade show. Double clicking the PDF attachment, his PDF reader promptly crashes.
“Ugh, I’m gonna have to call IT again. I’ll do it Monday morning,” he thinks, and turns off his monitor before heading home for the weekend.
Meanwhile, in a dark room somewhere, a few lines appear on the screen of a laptop:
[*] Sending stage (205891 bytes) to 10.66.60.101
[*] Meterpreter session 1 opened (10.66.60.100:4444 -> 10.66.60.101:49159) at 2017-12-27 16:29:13 -0800
msf exploit(multi/handler) > sessions 1
[*] Starting interaction with 1...
meterpreter > sysinfo
Computer : INHUMAN-WIN7
OS : Windows 7 (Build 7601, Service Pack 1).
Architecture : x64
System Language : en_US
Domain : ENTERPRISE
Logged On Users : 2
Meterpreter : x64/windows
Finally, the hacker had a foothold. He started exploring the machine remotely. First, he used Mimikatz to dump the password hashes from the local system. He sent the hashes to his computer with 8 NVidia 1080Ti graphics cards to start cracking, and then kept exploring the filesystem of the marketing manager’s computer. He grabbed the browsing history and saved passwords from the browser, and noticed access to a company directory. He started a script to download the entire contents through the meterpreter session. He started to move on to the network shares when his password cracking rig flashed a new result.
“That was fast,” he thought, looking over at the screen. “SuperS3cr3t isn’t much of a password.” He used the password to log in to the company’s webmail and forwarded the “proofs” (in fact a PDF exploiting a known bug in the PDF reader) to one of the IT staffers with a message asking them to take a look at why it wouldn’t render.
Dissatisfied with waiting until the next week for an IT staffer to open the malicious PDF, he started looking for another option. He began by using his access to a single workstation to look for other computers that were vulnerable to some of the most recent publicly known exploits. Surprisingly, he found two machines that were vulnerable to MS17-010. He sent the exploit through his existing meterpreter session and crossed his fingers.
Moments later, he was rewarded with a second Meterpreter session. Looking around, he was quickly disappointed to realize this machine was freshly installed and so would not contain sensitive information or be hosting interesting applications. However, after running Mimikatz again, he discovered that another one of the IT staff had logged into this machine, probably as part of the setup process.
He threw the hashes into his password cracking rig again and started looking for anything else interesting. In a few minutes, he realized this machine was devoid of anything but a basic Windows setup – not even productivity applications had been installed yet. He returned to the original host and looked for anything good, but only found a bunch of marketing materials that were basically public information.
Frustrated, he banged on his keyboard until he remembered the scraped company directory. He went and looked at the directory information for the IT staffer and realized it not only included names and contact information for employees, but also allowed employees to include information about hobbies and interests, plus birthdays and more. He took the data from the IT staffer, split it up into all the included words, and placed it into a wordlist for his password cracking rig. Hoping that would get him somewhere, he went for a Red Bull.
When he came back, he saw another result on his password cracker. This surprised him slightly, because he had expected more of an IT staffer. He was even more surprised when he saw that the password was “Snowboarding2020!” Though it met all the company’s password complexity requirements, it was still an incredibly weak password by modern standards.
Using this newfound password, he logged into the workstation belonging to the IT engineer. He dumped the local hashes to look for further pivoting opportunities, but found only the engineer’s own password hash. As he started exploring the filesystem, however, he found many more interesting options. He quickly located an SSH private key and several text files containing AWS API keys. It only took a little bit of investigation to realize that one of the AWS API keys was a root API key for the company’s production environment.
Using the API key, he logged in to the AWS account and quickly identified the virtual machines running the company’s database servers containing user credentials and information. He connected with the API keys he had and started dumping the usernames and password hashes. Given that the hashes were unsalted SHA-1, he figured it shouldn’t take long for his password cracking rig to work through them.
A day later, he was posting an offering for the plaintext credential database for just a fraction of a bitcoin per customer. Satisfied, he started hunting for the next vulnerable enterprise.
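Stepping outside the story for a moment: the unsalted SHA-1 detail is worth underlining. A fast, unsalted hash lets an attacker test candidates at enormous rates and reuse precomputed tables, while a salted, deliberately slow KDF makes every guess expensive. A small illustrative comparison in Python (parameters are examples only):
# Unsalted fast hash vs. a salted, deliberately slow KDF (illustrative only).
import hashlib
import os

password = b"SuperS3cr3t"

# What the fictional company stored: one cheap operation per guess, and the
# same password always produces the same digest (so precomputed tables work).
weak = hashlib.sha1(password).hexdigest()

# A better approach: per-user random salt plus many iterations per guess.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(weak)
print(salt.hex(), strong.hex())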
While the preceding story was fiction, it’s an all-too-common reality. Many modern enterprises have put considerable effort into hardening their datacenter (be it virtualized or physical) but very little effort into hardening workstations. I often work with companies that seem to believe placing their applications into the cloud is a security panacea. While the cloud offers numerous security benefits – major cloud providers have invested heavily into security, monitor their networks 24/7, and a cloud service is clearly heavily segregated from the corporate network – it does not solve all security problems.
An attacker who is able to compromise a workstation is able to do anything that a legitimate user of that workstation would be able to do. In the example above, the AWS keys stored on a workstation proved critical to gaining access to a treasure trove of user information, but even a lower level of access can be useful to an attacker and dangerous to your company.
The 2017 Verizon DBIR provides data to support this. 66% of malware began with malicious email attachments (client-based), 81% of breaches involved stolen credentials (pivoting), and 43% of attacks involved social engineering (tactics against legitimate users).
Imagine that you have customer service representatives who log in to an application hosted in the cloud to process refunds or perform other services. An attacker with access to a customer service workstation might be able to grab their username and password (or saved cookies from the browser) and then use it to buy expensive items and refund them to themselves. (Or change delivery addresses, issue store credits, or make other costly changes.)
In a hospital, compromising a workstation used by doctors and nurses would lead, at a minimum, to a major HIPAA breach. In the worst case, it could be used to modify patient records or order medications that could be dangerous or fatal to a patient. Each environment needs to consider the risks posed by the access granted from their workstations and clients.
Attackers will take the easiest route to the data they seek. If you’ve spent some effort on hardening your servers (or applications in the cloud), that may well be through the workstation or client. Consider all entry points in your security strategy.
December 24, 2017
I’ve been meaning to start a video channel for years. This is more of a test video than anything else, but if you have any ideas or suggestions, then don’t hesitate to comment.
December 22, 2017
Today I’ve released version 0.10.0 of the Rust GStreamer bindings, and after a journey of more than 1½ years the first release of the GStreamer plugin writing infrastructure crate “gst-plugin”.
Check the repositories of both for more details, the code and various examples.
GStreamer Bindings
Some of the changes since the 0.9.0 release were already outlined in the previous blog post, and most of the other changes were also things I found while writing GStreamer plugins. For the full changelog, take a look at the CHANGELOG.md in the repository.
Other changes include
- I went over the whole API in the last days, added any missing things I found, simplified API as it made sense, changed functions to take Option<_> if allowed, etc.
- Bindings for using and writing typefinders. Typefinders are the part of GStreamer that try to guess what kind of media is to be handled based on looking at the bytes. Especially writing those in Rust seems worthwhile, considering that basically all of the GIT log of the existing typefinders consists of fixes for various kinds of memory-safety problems.
- Bindings for the Registry and PluginFeature were added, as well as fixes to the relevant API that works with paths/filenames so it actually works on Paths.
- Bindings for the GStreamer Net library were added, allowing you to build applications that synchronize their media over the network by using PTP, NTP or a custom GStreamer protocol (for which there also exists a server). This could be used for building video walls, systems recording the same scene from multiple cameras, etc., and provides (depending on network conditions) synchronization down to less than 1 ms between devices.
Generally, this is something like a “1.0” release for me now (due to depending on too many pre-1.0 crates this is not going to be 1.0 anytime soon). The basic API is all there and nicely usable now and hopefully without any bugs, the known-missing APIs are not too important for now and can easily be added at a later time when needed. At this point I don’t expect many API changes anymore.
GStreamer Plugins
The other important part of this announcement is the first release of the “gst-plugin” crate. This provides the basic infrastructure for writing GStreamer plugins and elements in Rust, without having to write any unsafe code.
I started experimenting with using Rust for this more than 1½ years ago, and while a lot of things have changed in that time, this release is a nice milestone. In the beginning there were no GStreamer bindings and I was writing everything manually, and there were also still quite a few pieces of code written in C. Nowadays everything is in Rust and using the automatically generated GStreamer bindings.
Unfortunately there is no real documentation for any of this yet; there’s only the autogenerated rustdoc documentation available from here, and various example GStreamer plugins inside the GIT repository that can be used as a starting point. Various people have already written their GStreamer plugins in Rust based on this.
The basic idea of the API, however, is that everything is as Rust-y as possible. Which might not be all that much, due to having to map subtyping, virtual methods and the like to something reasonable in Rust, but I believe it’s nice to use now. You basically only have to implement one or more traits on your structs, and that’s it. There’s still quite some boilerplate required, but it’s far less than what would be required in C. The best example at this point might be the audioecho element.
Over the next days (or weeks?) I’m not going to write any documentation yet, but instead will write a couple of very simple, minimal elements that do basically nothing and can be used as starting points to learn how all this works together. And will write another blog post or two about the different parts of writing a GStreamer plugin and element in Rust, so that all of you can get started with that.
Let’s hope that the number of new GStreamer plugins written in C is going to decrease in the future, and maybe even new people who would’ve never done that in C, with all the footguns everywhere, can get started with writing GStreamer plugins in Rust now.
December 21, 2017

Image courtesy of VMware
Containers are one of the most exciting technologies in the cloud right now. But when it comes to your IT strategy, where is the best place to start? With so many different options and configurations, it’s critical that you find the best possible strategy for your software stack.
To answer these questions, Canonical’s VP of Product Development Dustin Kirkland and VMware Staff Engineer Sabari Murugesan presented at the SF Bay Area OpenStack User Group Meeting. You can watch the full talk here!
Watch this keynote to learn
- The high level concepts and principles behind containers
- How Ubuntu provides a first class container experience
- How to determine the best container use case
- Container case studies: How are enterprises using containers in production?

The Security Team weekly reports are intended to be very short summaries of the Security Team’s weekly activities.
If you would like to reach the Security Team, you can find us at the #ubuntu-hardened channel on FreeNode. Alternatively, you can mail the Ubuntu Hardened mailing list at: ubuntu-hardened@lists.ubuntu.com
During the last week, the Ubuntu Security team:
- Triaged 301 public security vulnerability reports, retaining the 47 that applied to Ubuntu.
- Published 5 Ubuntu Security Notices which fixed 3 security issues (CVEs) across 7 supported packages.
Ubuntu Security Notices
Bug Triage
Mainline Inclusion Requests
- libteam underway (LP: #1392012)
- MIR backlog: https://bugs.launchpad.net/~ubuntu-security/+assignedbugs?field.searchtext=%5BMIR%5D
Development
- Disable squashfs fragments in snap
- prepared/tested/uploaded squashfs-tools fixes for 1555305 in bionic through trusty and did SRU paperwork
- PR 4387 – explicitly deny ~/.gnupg/random_seed in gpg-keys interface
- Submitted PR 4399 for rewrite snappy-app-dev in Go
- Created PR 4406 – interfaces/dbus: adjust slot policy for listen, accept and accept4 syscalls
- Reviews
- PR 4365 – wayland slot implementation
What the Security Team is Reading This Week
Weekly Meeting
More Info
This week we get comfy in a new chair, conduct our Perennial Podcast Prophecy Petition Point and go over your feedback. This is the final show of the season and we’ll now be taking a couple of months’ break to eat curry, have a chat and decide if we’ll be returning for Season 11.
It’s Season Ten Episode Forty-Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.
In this week’s show:
- We discuss what we’ve been up to recently:
- Martin has bought a Secretlab TITAN chair. It is very comfy.
We review our 2017 predictions:
Alan
- Multiple devices from Tier one vendors will ship with snappy by default (like Dell, HP, Cisco) “top line big name vendors will ship hardware with Ubuntu snappy as a default OS”
- No
- GitHub will downsize their 600 workforce to a much smaller number and may also do something controversial to raise funds
- No – 723 according to Wikipedia
- Microsoft will provide a Linux build of a significant application – possibly exchange or sharepoint
- No?
- Donald Trump will not last a year as president
- Sadly not.
Mark
- There will be no new Ubuntu phone on sale in 2017
- Yes
- The UK government will lose a court case related to the Investigatory Powers Act
- This time next year, one of the top 5 distros on Distrowatch will be a distro that isn’t currently in the top 20.
- No
Martin
- Ubuntu 17.10 will be able to run Mir using the proprietary nvidia drivers and Steam will work reliably via XMir. It will also be possible to run Mir in Virtualbox.
- No
- A high profile individual (or individuals) will fall victim to one of the many privacy threats introduced as a result of the Investigatory Powers Bill. Intimate details of their online life will be exposed to the world, compiled from one or more databases storing Internet Connection Records. The disclosure will possibly have serious consequences for the individuals concerned, such as losing their job or being professionally discredited.
- No
- The hype surrounding VR will build during 2017 but Virtual Reality will continue to lack adoption. Sales figures will be well below market projections.
We make our prediction for 2018:
Alan
- A large gaming hardware vendor from the past will produce new hardware. Someone of the size/significance of Sega. Original hardware, not just re-using the brand-name, but official product.
- Valve will rev the steamlink, perhaps making it more powerful for 4K gaming, and maybe a minor bump to the steam controller too
- A large government UK body will accidentally leak a significant body of data. Could be laptop/USB stick on a train or website hack.
Mark
- Either the UK or US government will collapse
- A major hardware manufacturer (not a crowd funder) will release a device in the form factor of a GPD pocket
- I will specifically buy (i.e. not in a Humble Bundle) and play through a native Linux game that is initially released in 2018.
- Canonical will go public and suffer a hostile takeover by the shuffling corpse of SCO. (bonus prediction)
Martin
- Give or take a couple of thousand dollars, BitCoin will have the same US dollar value in December 2018 as it does today.
- 17205.63 US Dollar per btc at the time of recording.
- A well established PC OEM, not currently supporting Linux, will offer a pre-installed Linux distro option for their flagship products.
- Four smart phones will launch in 2018 that cost $1000 or more, thanks to Apple normalising this ludicrous price tag in 2017.
Ubuntu Podcast listeners share their predictions for 2018:
- Simon Butcher – The Queen piles into bitcoin and loses her fortune when bitcoin collapses to 10p
- Jezra – Someone considers open sourcing a graphics driver for a chip that works with ARM, and then doesn’t
- Ian – Canonical will be bought out by Ubuntu Mate.
- Mattias Wernér – I predict a new push for SteamOS and Steam Machines with a serious marketing effort behind it.
- Jon Spriggs – I think we’ll see Ethereum value exceeding £2,000 before 1st December 2018 (Currently £476 on Coinbase.com). Litecoin will cross £1,000 before 1st Dec 2018 (currently £286)
- Eddie Figgie – Bitcoin falls below $1k US.
- McPhail – I saw the call for 2018 predictions. I predict that command line snaps will run natively in Windows and some graphical snaps will run too
- Leo Arias – Costa Rica wins the FIFA world cup.
- Ivan Pejić – India will ship RISC-V based Ubuntu netbook/tablet/phone.
- Sachin Saini – Solus takes over the world.
- Laura Czajkowski and Joel J – Year of the (mainstream) Linux desktop

- Adam Eveleigh – snappy/Flatpak/AppImage(Update/d) will gain more traction as people realize that it solves the stable-for-noobs vs rolling dilemma once and for all. Which of the three will go furthest? Despite being in the snappy camp, I bet Flatpak
- Marius Gripsgard – Ubuntu touch world domination
- Jan Sprinz – Ubuntu Touch will rebase to 16.04
- Ian – Canonical will IPO
- Simon Butcher – Bitcoins go to £500,000 and the whole brexit divorce bill is funded by a stash of bc found on Gordon Brown’s old laptop
- Conor Murphy – Linux Steam Integration snap will get wide adoption. Over 30% of all steam installs on linux
- Jon Spriggs – RPi 4 with either MOAR MEMORY or Gig Ethernet.
- Mattias Wernér – I’ll predict that bitcoin will hit six figures in 2018. To be more specific, the six figures will be in dollars.
- Jon Spriggs – I predict there will be an OggCamp ’18 😉
- Laura Czajkowski – Microsoft will buy Canonical
- Mortiz – Pipewire will be included in at least two major distros.
- Daniel Llewelyn – Snaps will become the defacto standard and appimages and flatpaks will continue to be ignored
- Jezra – Samsung ports Tizen to another device that is not a Samsung Phone.
- Badger – Sound will finally work on cherry trail processors
- Justin – Ubuntu Podcast to return for an eleventh season 🙂
- And we go over all your amazing feedback – thanks for sending it – please keep sending it!
- This week’s cover image is taken from Wikimedia.
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
- Join us in the Ubuntu Podcast Chatter group on Telegram
December 20, 2017

AutoCompletion in Nextcloud's Commenting Feature
For a long time it has already been possible to leave comments on files. With Nextcloud 11 we provided a way to mention users in comments, so that a notification is created for the respective individual. This was never really advertised, however, because it was lacking an auto-completion tool that offers you a type-ahead selection of users. The crucial point is that user IDs are neither known to the end user nor exposed to them. For instance, if users are provided by LDAP, an ID might look like "1d5566a5-87e6-4451-bd2f-e0e6ba5944d9". Nobody wants to see this, and the average person also will not memorize it :)
It would be sad to see the functionality rot away unseen in a dark corner. With Nextcloud 13 the time was ripe to finally include this missing bit, which is actually pretty fundamental: every application that allows text-based communication amongst multiple people ships it.
The Plan
As a first step, I drafted a spec consisting of three parts, corresponding to Nextcloud's layers. Let's start from the user-facing aspects and work down the stack:
- Web UI / Frontend
The requirements are to request the user items for auto-completion, offering the elements to find, pick and insert the mention, and also to render it beautifully and in a consistent way. While talking to the server and rendering were to be done with means we had in place, for presentation and interaction I picked up the At.js plugin for jQuery. It can be adjusted and themed nicely, offers access points where they are needed, and is just working pleasantly.
One crucial point for the user experience is to have the results at hand as quickly as possible, so that the author is disturbed as little as possible while writing the comment. The first idea was to pull in all possible users up front, but obviously this does not scale. In the implementation we pull them on demand, and in that regard I was also working on improving the performance of the LDAP backend to gain a positive user experience.
- Web API Endpoint
This is the interface between the Web UI and the server. What the endpoint essentially will do is gather the people from its sources, optionally let them be sorted, and send the results back to the client. Since it does not provide anything explicitly for the Comments app, it ought to be a public and reusable API.
Under the hood, I intended it to get the users from the infamous sharee endpoint specific to the file sharing app. No urge to reinvent wheels, right? However, it turned out that I needed to round out this wheel a little bit. More about this later.
- Server PHP API
Provided our API endpoint can retrieve the users, only one aspect is missing that needs to be added as an API. Since the newly crafted web endpoint is independent of the Comments app, this component is too. The service, called AutoCompleteManager, is supposed to take sorter plugin registrations and provide a method to run a result set through them (a toy sketch of this idea follows after this list). The idea is that the persons most likely to be mentioned are pushed to the top of the result list. Those are identified by: people that have access to the file, people that were already commenting on the file, or people the author has interacted with. The necessary information is specific to other apps (comments and files_sharing in our case), hence they should provide a plugin. The API endpoint then only asks the service to run the results through the specified sorters.
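The real interfaces here are PHP (the AutoCompleteManager and sorter classes described below); purely to illustrate the "run the result set through registered sorters" idea, a toy sketch (all names invented, written in Python only for brevity) could look like:
# Illustration only: the actual manager/sorter interfaces are PHP in Nextcloud.
# A manager keeps registered sorters by id and pipes a result list through the
# ones a caller asks for.
class CommentersFirst:
    """Toy sorter: push people who already commented on the file to the top."""
    def __init__(self, commenters):
        self.commenters = set(commenters)
    def sort(self, results):
        results.sort(key=lambda user: user not in self.commenters)

class ToyAutoCompleteManager:
    def __init__(self):
        self.sorters = {}
    def register(self, sorter_id, sorter):
        self.sorters[sorter_id] = sorter
    def run_sorters(self, sorter_ids, results):
        for sorter_id in sorter_ids:
            self.sorters[sorter_id].sort(results)
        return results

manager = ToyAutoCompleteManager()
manager.register("commenters", CommentersFirst(["carol"]))
print(manager.run_sorters(["commenters"], ["alice", "bob", "carol"]))
# ['carol', 'alice', 'bob']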
The original plan is placed in the description of issue #2443 and contains more details, but be aware it does not reflect the final implementation.
Being a backend-person I started the implementation bottom up. The first to-do however was not working on the components mentioned above, but axe-shaping. The chosen source of persons for auto-completion, the sharee endpoint, has had all its logic in the Controller. This simply means that it does not belong to the public API scope (\OCP\ namespace) and was also designed to be consumed from the web interface. In short: refactoring!
The sharees endpoint
The sharee endpoint is a real treasure. The file sharing code requests it to gather instance users, federated users, email addresses, circles and groups that are available for sharing. It is a pretty useful method, in fact not only for file sharing itself. Despite not being an official API, other apps are making use of it. One example is Deck, which uses it for sharing suggestions, too.
On the server side we added a service to search through all the previously mentioned users, groups, circles, etc. Let us call them collaborators, to have a single, short term. The service is OCP\Collaboration\Collaborators\ISearch and offers two methods: one for searching, the other for registering plugins. Those plugins are the providers (sources) of these collaborators, and search() delegates the query to each one registered for the requested shareTypes (the technical term for the sorts of collaborators). For backward compatibility (and to keep the changes within a certain range), the result follows the array-based format used in the files_sharing app's controller, plus an indicator of whether to expect more results. Internally, the newly introduced ISearchResult is used to collect and manage the results from the providing plugins.
Each provider has to implement ISearchPlugin, and more than one provider can be registered for each shareType. The existing logic was ported into providers of their own, most residing in the server itself (because they themselves utilize server components), and additionally the circles plugin into the Circles app. Some glue code was necessary for registration: the apps announce the providers in their info.xml, which is then automatically fed to the register method on app load. The App Store's XSD schema was adjusted accordingly, so it will accept such apps.
The original sharee controller from the files_sharing app lost almost 600 lines of code and is now consuming the freshly formed PHP API. I cannot stress enough how good it is to have tests in place which assure that everything still works as expected after refactoring (even if they need to be moved around). The refactor went in with pull request #6328. The adaptation for the Circles app went in with PRs #126, #135 and #136.
Backend efforts
Now that the fundamentals were brought into shape, I was able to create the new auto-completion endpoint and the services it depends on. The new AutoCompleteController responds to GET requests and accepts a wide range of parameters, of which only one is required and the others are optional. First, it requests instance users (defined by parameter) from the collaborator search. It merges the exact matches with the regular ones in one array, with exact matches (not just substrings) on top. Then, if any sorter was defined by parameter, the auto-complete manager pipes the results through them. Finally, the sorted array is transformed into a simpler format that contains the IDs, the end-user display names, and the source type (shareType) before being sent back to the browser.
Auto-complete manager? Yes, \OCP\Collaboration\AutoComplete\IManager, forming the server API aspect, was also introduced. It does not deviate from the spec and is not difficult or special in any way.
Of course the sorters, especially the public interface ISorter, were also introduced, as well as the required info.xml glue. Two apps were equipped with sorter plugins: comments pushes people that commented on a file to the top, and files_sharing puts people with access to the file first.
Serving Layer 8
Having the foundations laid, the Web GUI only needed to use them. The first step was to ship the At.js plugin for jQuery and connect it to the API endpoint. Easy, until I realized that we fundamentally needed to change the comment input elements for writing new comments as well as for editing them. For one, <input> does not provide all the amenities feature-wise (sadly I do not remember what exactly); secondly, HTML markup will not be formatted, which we require to hide the user ID behind the avatar and display name. It was the first time I heard about the contentEditable attribute. That's cool, and mostly what was necessary was replacing <input> with <div contentEditable="true">, applying the styles, changing a little bit of code for dealing with the value… and figuring out that you can even paste HTML into it! Now, this is handled as well.
Rendering was a topic, since we always want to show the mention in a nice way to the end user, even when editing. Mind, we send the plain-text comment containing the user ID to the server. The client is responsible for rendering the contents (the server sends the extracted mentions and the corresponding display names as metadata with the comment). A bit more work was needed to ensure that line breaks were kept when switching between the plain and rich forms.
A minor issue was ensuring that the contacts menu popup was also shown when clicking on a mention; since Nextcloud 12, clicking on a user name shows a popup that lets you email or call that person, for example. The At.js plugin brought its own set of CSS, which was adapted to our styling, so theming is fully supported. Eventually, a switch needed to be flipped so the plugin would not re-sort the already sorted results.
The backend and frontend adaptations were merged with pull request 6982. A challenge that remained was making the retrieval of users from LDAP fast enough to be acceptable. It was crucial for adoption, and I made sure to have it solved before asking for final reviews.
Speed up the LDAP Backend
My strategy was to figure out what the bottleneck is, resolve it, measure and compare the effects of the changes and polish them. For the analysis part I was using the xdebug profiler for data collection and KCachegrind for visualization. The hog was quickly uncovered.
This requires a bit of explanation of how the LDAP backend works when retrieving users. If not cached, we reach out to the LDAP server and inquire for the users matching the search term, based on a configured filter. The users receive an ID internal to Nextcloud based on an LDAP attribute, by default the entry's UUID. The ID is mapped together with the Distinguished Name (DN) for quick interaction and the UUID to detect DN changes. Depending on the configuration, several other attributes are read and used in Nextcloud, for instance the email address, the photo or the quota. Since the records are read anyway, we also request most of these attributes (the inexpensive ones) with any search operation and apply them. Doing this on each (non-cached) request is the hog.
Having the email up front, for instance, is important so that share notifications can be sent as soon as someone wants to share a file or folder with that person. So we cannot really skip that, but when we know that the user is already mapped, we do not need to update these features now. We can move it to a background job. Splitting off updating the features already did the trick! Provided that the users requested are already known, which is a matter of time.
In order to measure the before and after states, I first prepared my dev instance. I connected to a redis cache via a local socket, and ensured that the cache was flushed before every run. Unnecessary applications were closed. The command to measure was a command-line call to search the LDAP server from Nextcloud, requesting the third batch of 500 results for a provided search term: time sudo -u http ./occ ldap:search --limit=500 --offset=1000 "ha". I ran this ten times each for the old state, an intermediate state and the final state, and went from an average of 14.7 seconds via 3.5s down to 1.8s. This suffices. For auto-completion we request the first 10 results, a significantly lighter task.
Now we have a background job, which self-determines its run interval (within ranges) depending on the number of known LDAP users, and which iterates over the LDAP users matching the corresponding filter, mapping and updating their features. This kicks in only if the background job mode is not set to "ajax", to avoid weird effects if it is triggered by the browser. Any serious setup should have the background jobs run through cron. Also, it runs at the earliest one hour after the last config change, so as not to interfere with setting up LDAP connections.
So, where are we?
Well, this feature is already merged into master, which will become Nextcloud 13. Beta versions are already released and ready for testing. Some smaller issues were identified and fixed. Currently there's also a discussion on whether avatars should appear in the comment text, or whether text-only is preferable. Either way, I am really happy to have this long-lasting item done and out. On the whole I am really satisfied with, and looking forward to, the 13 release!
This work has benefited from many collaborators, special thanks in random order to Jan, Maxence, Joas, Bernhard, Björn, Roeland and whoever I might have forgotten.
Spread your knowledge to hundreds of people! We would love to hear you!
December 19, 2017
Weird test failures are great at teaching you things that you didn’t realise you might need to know.
As previously mentioned, I’ve been working on converting Launchpad from Buildout to virtualenv and pip, and I finally landed that change on our development branch today. The final landing was mostly quite smooth, except for one test failure on our buildbot that I hadn’t seen before:
ERROR: lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked
worker ID: unknown worker (bug in our subunit output?)
----------------------------------------------------------------------
Traceback (most recent call last):
_StringException: log: {{{
36.384 creating repository in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/.bzr/.
36.388 creating branch <bzrlib.branch.BzrBranchFormat7 object at 0xeb85b36c> in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/
}}}
Traceback (most recent call last):
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/lib/lp/codehosting/codeimport/tests/test_worker.py", line 1108, in test_stacked
stacked_on.fetch(Branch.open(source_details.url))
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/branch.py", line 186, in open
possible_transports=possible_transports, _unsupported=_unsupported)
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 689, in open
_unsupported=_unsupported)
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 718, in open_from_transport
find_format, transport, redirected)
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/transport/__init__.py", line 1719, in do_catching_redirections
return action(transport)
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 706, in find_format
probers=probers)
File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 1155, in find_format
raise errors.NotBranchError(path=transport.base)
NotBranchError: Not a branch: "/tmp/tmpdwqrc6/trunk/".
When I investigated this locally, I found that I could reproduce it if I ran just that test on its own, but not if I ran it together with the other tests in the same class. That’s certainly my favourite way round for test isolation failures to present themselves (it’s more usual to find state from one test leaking out and causing another one to fail, which can make for a very time-consuming exercise of trying to find the critical combination), but it’s still pretty odd.
I stepped through the Branch.open call in each case in the hope of some enlightenment. The interesting difference was that the custom probers installed by the bzr-svn plugin weren’t installed when I ran that one test on its own, so it was trying to open a branch as a Bazaar branch rather than using the foreign-branch logic for Subversion, and this presumably depended on some configuration that only some tests put in place. I was on the verge of just explicitly setting up that plugin in the test suite’s setUp method, but I was still curious about exactly what was breaking this.
Launchpad installs several Bazaar plugins, and lib/lp/codehosting/__init__.py is responsible for putting most of these in place: anything in Launchpad itself that uses Bazaar is generally supposed to do something like import lp.codehosting to set everything up. I therefore put a breakpoint at the top of lp.codehosting and stepped through it to see whether anything was going wrong in the initial setup. Sure enough, I found that bzrlib.plugins.svn was failing to import due to an exception raised by bzrlib.i18n.load_plugin_translations, which was being swallowed silently but meant that its custom probers weren’t being installed. Here’s what that function looks like:
def load_plugin_translations(domain):
    """Load the translations for a specific plugin.

    :param domain: Gettext domain name (usually 'bzr-PLUGINNAME')
    """
    locale_base = os.path.dirname(
        unicode(__file__, sys.getfilesystemencoding()))
    translation = install_translations(domain=domain,
        locale_base=locale_base)
    add_fallback(translation)
    return translation
In this case, sys.getfilesystemencoding was returning None, which isn’t a valid encoding argument to unicode. But why would that be? It gave me a sensible result when I ran it from a Python shell in this environment. A bit of head-scratching later and it occurred to me to look at a backtrace:
(Pdb) bt
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(703)<module>()
-> main()
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(694)main()
-> execsitecustomize()
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(548)execsitecustomize()
-> import sitecustomize
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/sitecustomize.py(7)<module>()
-> lp_sitecustomize.main()
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(193)main()
-> dont_wrap_bzr_branch_classes()
/home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(139)dont_wrap_bzr_branch_classes()
-> import lp.codehosting
> /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp/codehosting/__init__.py(54)<module>()
-> load_plugins([_get_bzr_plugins_path()])
I wonder if there’s something interesting about being imported from a
sitecustomize hook? Sure enough, when I went to look at Python for where
sys.getfilesystemencoding is set up, I found this in Py_InitializeEx:
    if (!Py_NoSiteFlag)
        initsite(); /* Module site */

    ...

#if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET)
    /* On Unix, set the file system encoding according to the
       user's preference, if the CODESET names a well-known
       Python codec, and Py_FileSystemDefaultEncoding isn't
       initialized by other means. Also set the encoding of
       stdin and stdout if these are terminals, unless overridden. */
    if (!overridden || !Py_FileSystemDefaultEncoding) {
        ...
    }
I moved this out of
sitecustomize,
and it’s working better now. But did you know that a sitecustomize hook
can’t safely use anything that depends on sys.getfilesystemencoding? I
certainly didn’t, until it bit me.
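To see the trap in isolation, here's a minimal sketch (Python 2 only, and purely hypothetical rather than Launchpad's actual hook) of a sitecustomize module tripping over the not-yet-initialized encoding:

# sitecustomize.py -- hypothetical minimal reproduction, Python 2 only
import sys

# Py_InitializeEx runs the site module (and therefore this hook) before it
# initializes Py_FileSystemDefaultEncoding, so this still returns None here.
encoding = sys.getfilesystemencoding()

# Anything that passes that value on as an encoding then fails, even though
# the same call works fine from an interactive shell once startup is over.
try:
    unicode(__file__, encoding)
except TypeError:
    sys.stderr.write("file system encoding not ready yet: %r\n" % encoding)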
Recently the FCC voted down the previously held rules on net neutrality. I think that this is a bad decision by the FCC, but I don't think that it will result in the amount of chaos that some people are suggesting. I thought I'd write about how I see the net changing, for better or worse, with these regulations removed.
If we think about how the Internet is today, basically everyone pays to access the network individually: both the groups that want to host information and the people who want to access those sites. Everyone pays a fee for 'their connection', which contributes to the companies that create and connect the backbone together. An Internet connection by itself has very little value, but it is the definition of a "network effect": because everyone is on the Internet, it has value for you to connect there as well. Some services you connect to use a lot of your home Internet connection, and some of them charge different rates for it. Independent of how much they use or charge you, your ISP isn't involved in any meaningful way. The key change here is that now your ISP will be associated with the services that you use.
Let's talk about a theoretical video streaming service that charged for their video service. Before they'd charge something like $10 a month for licensing and their hosting costs. Now they're going to end up paying an access fee to get to consumer's Internet connections, so their charges are going to change. They end up charging $20 a month and giving $10 of that to the ISPs of their customers. In the end consumers will end up paying for their Internet connection just as much, but it'd be bundled into other services they're buying on the Internet. ISPs love this because suddenly they're not the ones charging too much, they're out of the billing here. They could even possibly charge less (free?) for home Internet access as it'd be subsidized by the services you use.
Better connections
I think that it is quite possible that this could result in better Internet connections for a large number of households. Today those households have mediocre connectivity, and they can complain about it, but for the most part ISPs don't care about a few individuals' complaints. What could change is that when a large company paying millions of dollars in access fees complains, the ISPs might start listening.
The ISPs are supporting the removal of Net Neutrality regulations to get money from the services on the Internet. I don't think that they realize that with that money will come an obligation to perform to those services' requirements. Most of those services are more customer focused than ISPs are, which is likely to cause a culture shock once those requirements carry weight with ISP management. I think it is likely ISPs will come to regret not supporting net neutrality.
Expensive hosting for independent and smaller providers
It is possible for large services on the Internet to negotiate contracts with large ISPs and make everything generally work out so that most consumers don't notice. There is then a reasonable question on how providers that are too small to negotiate a contract play in this environment. I think it is likely that the hosting providers will fill in this gap with different plans that match a level of connectivity. You'll end up with more versions of that "small" instance, some with consumer bandwidth built-in to the cost and others without. There may also be mirroring services like CDNs that have group negotiated rates with various ISPs. The end result is that hosting will get more expensive for small businesses.
The bundling of bandwidth is also likely to shake up the cloud hosting business. While folks like Amazon and Google have been able to dominate costs through massive datacenter buys, suddenly that isn’t the only factor. It seems likely the large ISPs will build public clouds of their own as they can compete by playing funny-money with the bandwidth charges.
Increased hosting costs will hurt large non-profits the most, folks like Wikipedia and The Internet Archive. They already have a large amount of their budget tied up in hosting and increasing that is going to make their finances difficult. Ideally ISPs and other Internet companies would help by donating to these amazing projects, but that's probably too optimistic. We'll need individuals to make up this gap. These organizations could be the real victims of not having net neutrality.
Digital Divide
A potential gain would be that, if ISPs are getting most of the money from services, the actual connections could become very cheap. There would then be potential for more lower-income families to get access to the Internet as a whole. While this is possible, the likelihood is that it would only benefit families in regions where the end-services themselves want customers. It will help those who are near an affluent area, not everyone. It seems that there is some potential for gain, but I don't believe it will end up being a large impact.
What can I do?
If you're a consumer, there's probably not a lot you can do; you're along for the ride. You can contact your representatives, and if this is a world that you don't like the sound of, ask them to change it. Laws are a social contract for how our society works, so make sure they're a contract you want to be part of.
As a developer of a web service you can make sure that your deployment is able to work on multi-cloud type setups. You're probably going to end up going from multi-cloud to a whole-lotta-cloud as each has bandwidth deals your business is interested in. Also, make sure you can isolate which parts need the bandwidth and which don't as that may become more important moving forward.
I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.
However, if your primary goal is just to block the websites in question
rather than seeing kitten pictures as such (let’s face it, the internet is
not short of alternative sources of kitten pictures), then it’s easy to do
with uBlock
Origin.
After installing the extension if necessary, go to Tools → Add-ons →
Extensions → uBlock Origin → Preferences → My filters, and add
www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of
course you can easily add more if you like.) Voilà: instant tranquility.
Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.
December 18, 2017
Welcome to the Ubuntu OpenStack development summary!
This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.
If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!
OpenStack Distribution
Stable Releases
Current in-flight SRUs for OpenStack related packages:
nova-novncproxy process gets wedged, requiring kill -HUP
Horizon Cinder Consistency Groups
Recently released SRUs for OpenStack related packages:
Percona XtraDB Cluster Security Updates
Development Release
Since the last dev summary, OpenStack Queens Cloud Archive pockets have been set up and have received package updates for the first and second development milestones – you can install them on Ubuntu 16.04 LTS using:
sudo add-apt-repository cloud-archive:queens[-proposed]
OpenStack Queens will also form part of the Ubuntu 18.04 LTS release in April 2018, so alternatively you can try out OpenStack Queens using Ubuntu Bionic directly.
You can always test with up-to-date packages built from project branches from the Ubuntu OpenStack testing PPAs:
sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
Nova LXD
No significant feature work to report on since the last dev summary.
The OpenStack Ansible team have contributed an additional functional gate for nova-lxd – it’s currently non-voting, but does provide some additional testing feedback for nova-lxd developers during the code review process. If it proves stable and useful, we’ll make this a voting check/gate.
OpenStack Charms
Ceph charm migration
Since the last development summary, the Charms team released the 17.11 set of stable charms; this includes a migration path for users of the deprecated ceph charm to using ceph-mon and ceph-osd. For full details on this process, check out the charm deployment guide.
Queens Development
As part of the 17.11 charm release a number of charms switched to execution of charm hooks under Python 3 – this includes the nova-compute, neutron-{api,gateway,openvswitch}, ceph-{mon,osd} and heat charms; once these have had some battle testing, we’ll focus on migrating the rest of the charm set to Python 3 as well.
Charm changes to support the second Queens milestone (mainly in ceilometer and keystone) and Ubuntu Bionic are landing into charm development to support ongoing testing during the development cycle. OpenStack Charm deployments for Queens and later will default to using the Keystone v3 API (v2 has been removed as of Queens). Telemetry users must deploy Ceilometer with Gnocchi and Aodh, as the Ceilometer API has now been removed from charm based deployments and from the Ceilometer codebase. You can install the current tip of charm development using the openstack-charmers-next prefix for charmstore URLs – for example:
juju deploy cs:~openstack-charmers-next/neutron-api
ZeroMQ support has been dropped from the charms; having no known users, no functional testing in the gate, and having issued deprecation warnings in release notes, it was time to drop the associated code from the code base. PostgreSQL and deploy-from-source support are also expected to be removed from the charms this cycle.
You can read the full list of specs currently scheduled for Queens here.
Releases
The last stable charm release went out at the end of November including the first stable release of the Gnocchi charm – you can read the full details in the release notes. The next stable charm release will take place in February alongside OpenStack Queens, with a release shortly after the Ubuntu 18.04 LTS release in May to sweep up any pending LTS support and fixes needed.
IRC (and meetings)
As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details. The next IRC meeting will be on the 8th of January at 1700 UTC.
And finally – Merry Christmas!
EOM
December 16, 2017
I use r2e and pocket to follow tech related rss feeds. To read these I sometimes use the nook, sometimes use the pocket website, but often I use edbrowse and pockyt on a terminal. I tend to prefer this because I can see more entries more quickly, delete them en masse, use the terminal theme already set for the right time of day (dark and light for night/day), and just do less clicking.
My .ebrc has the following:
# pocket get
function+pg {
e1
!pockyt get -n 40 -f '{id}: {link} - {excerpt}' -r newest -o ~/readitlater.txt > /dev/null 2>&1
e98
e ~/readitlater.txt
1,10n
}
# pocket delete
function+pd {
!awk -F: '{ print $1 }' ~/readitlater.txt > ~/pocket.txt
!pockyt mod -d -i ~/pocket.txt
}
It’s not terribly clever, but it works – both on linux and macos. To use these, I start up edbrowse, and type <pg. This will show me the latest 10 entries. Any which I want to keep around, I delete (5n). Any which I want to read, I open (4g) and move to a new workspace (M2).
When I'm done, any references which I want deleted are still in ~/readitlater.txt. Those which I want to keep are deleted from that file. (Yeah, a bit backwards from normal.) At that point I make sure to save (w), then run <pd to delete them from pocket.
Disclaimer
The opinions expressed in this blog are my own views and not those of Cisco.
December 15, 2017
It is the season of giving and if you use KDE software, donate to KDE. Software such as Krita, Kdenlive, KDE Connect, Kontact, digiKam, the Plasma desktop and many many more are all projects under the KDE umbrella.
KDE have launched a fund drive running until the end of 2017. If you want to help make KDE software better, please consider donating. For more information on what KDE will do with any money you donate, please go to https://www.kde.org/fundraisers/yearend2017/
This is an enjoyable introduction to programming in Java by an author I have enjoyed in the past.
Learn Java the Easy Way: A Hands-On Introduction to Programming was written by Dr. Bryson Payne. I previously reviewed his book Teach Your Kids to Code, which is Python-based.
Learn Java the Easy Way covers all the topics one would expect, from development IDEs (it focuses heavily on Eclipse and Android Studio, which are both reasonable, solid choices) to debugging. In between, the reader receives clear explanations of how to perform calculations, manipulate text strings, use conditions and loops, create functions, along with solid and easy-to-understand definitions of important concepts like classes, objects, and methods.
Java is taught systematically, starting with simple and moving to complex. We first create a simple command-line game, then we create a GUI for it, then we make it into an Android app, then we add menus and preference options, and so on. Along the way, new games and enhancement options are explored, some in detail and some in end-of-chapter exercises designed to give more confident or advancing students ideas for pushing themselves further than the book’s content. I like that.
Side note: I was pleasantly amused to discover that the first program in the book is the same as one that I originally wrote in 1986 on a first-generation Casio graphing calculator, so I would have something to kill time when class lectures got boring.
The pace of the book is good. Just as I began to feel done with a topic, the author moved to something new. I never felt like details were skipped and I also never felt like we were bogged down with too much detail, beyond what is needed for the current lesson. The author has taught computer science and programming for nearly 20 years, and it shows.
Bottom line: if you want to learn Java, this is a good introduction that is clearly written and will give you a nice foundation upon which you can build.
Disclosure: I was given my copy of this book by the publisher as a review copy. See also: Are All Book Reviews Positive?
Like each month, here comes a report about the work of paid contributors to Debian LTS.
Individual reports
In October, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:
- Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
- Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
- Brian May did 10 hours.
- Chris Lamb did 13 hours.
- Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
- Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
- Hugo Lefeuvre did 13h.
- Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
- Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
- Ola Lundqvist did 7h.
- Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
- Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for November).
- Thorsten Alteholz did 13 hours.
About external support partners
You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.
In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.
Evolution of the situation
The number of sponsored hours did not change and stays at 183 hours per month. It would be nice if we could continue to find new sponsors, as the amount of work seems to be slowly growing too.
The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file 35 (we’re a bit behind in CVE triaging apparently).
Thanks to our sponsors
New sponsors are in bold.
- Platinum sponsors:
- Gold sponsors:
- The Positive Internet (for 42 months)
- Blablacar (for 41 months)
- Linode (for 31 months)
- Babiel GmbH (for 20 months)
- Plat’Home (for 20 months)
- Silver sponsors:
- Domeneshop AS (for 41 months)
- Université Lille 3 (for 41 months)
- Trollweb Solutions (for 39 months)
- Nantes Métropole (for 35 months)
- Dalenys (for 32 months)
- Univention GmbH (for 27 months)
- Université Jean Monnet de St Etienne (for 27 months)
- Sonus Networks (for 21 months)
- maxcluster GmbH (for 15 months)
- Exonet B.V. (for 11 months)
- Leibniz Rechenzentrum (for 5 months)
- Vente-privee.com
- Bronze sponsors:
- David Ayers – IntarS Austria (for 42 months)
- Evolix (for 42 months)
- Offensive Security (for 42 months)
- Seznam.cz, a.s. (for 42 months)
- Freeside Internet Service (for 41 months)
- MyTux (for 41 months)
- Intevation GmbH (for 39 months)
- Linuxhotel GmbH (for 39 months)
- Daevel SARL (for 37 months)
- Bitfolk LTD (for 36 months)
- Megaspace Internet Services GmbH (for 36 months)
- Greenbone Networks GmbH (for 35 months)
- NUMLOG (for 35 months)
- WinGo AG (for 35 months)
- Ecole Centrale de Nantes – LHEEA (for 31 months)
- Sig-I/O (for 28 months)
- Entr’ouvert (for 26 months)
- Adfinis SyGroup AG (for 23 months)
- GNI MEDIA (for 18 months)
- Laboratoire LEGI – UMR 5519 / CNRS (for 18 months)
- Quarantainenet BV (for 18 months)
- RHX Srl (for 15 months)
- Bearstech (for 9 months)
- LiHAS (for 9 months)
- People Doc (for 6 months)
- Catalyst IT Ltd (for 4 months)
Sorry, the web page you have requested is not available through your internet connection.
We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.
If you are a home broadband customer, for more information on why certain web pages are blocked, please click here. If you are a business customer, or are trying to view this page through your company's internet connection, please click here.
December 14, 2017
A GStreamer Plugin like the Rec Button on your Tape Recorder – A Multi-Threaded Plugin written in Rust
Sebastian Dröge
As Rust is known for “Fearless Concurrency”, that is being able to write concurrent, multi-threaded code without fear, it seemed like a good fit for a GStreamer element that we had to write at Centricular.
Previous experience with Rust for writing (mostly) single-threaded GStreamer elements and applications (also multi-threaded) was already quite successful and promising. And in the end, this new element was also a pleasure to write and probably faster than doing the equivalent in C. For the impatient, the code, tests and a GTK+ example application (written with the great Rust GTK bindings, but the GStreamer element is also usable from C or any other language) can be found here.
What does it do?
The main idea of the element is that it basically works like the rec button on your tape recorder. There is a single boolean property called “record”, and whenever it is set to true it will pass data through, and whenever it is set to false it will drop all data. But unlike the existing valve element, it
- Outputs a contiguous timeline without gaps, i.e. there are no gaps in the output when not recording. Similar to the recording you get on a tape recorder, you don’t have 10s of silence if you didn’t record for 10s.
- Handles and synchronizes multiple streams at once. When recording e.g. a video stream and an audio stream, every recorded segment starts and stops with both streams at the same time
- Is key-frame aware. If you record a compressed video stream, each recorded segment starts at a keyframe and ends right before the next keyframe to make it most likely that all frames can be successfully decoded
The multi-threading aspect here comes from the fact that in GStreamer each stream usually has its own thread, so in this case the video stream and the audio stream(s) would come from different threads but would have to be synchronized between each other.
The GTK+ example application for the plugin plays a video with the current playback time and a beep every second, and allows you to record this as an MP4 file in the current directory.
How did it go?
This new element was again based on the Rust GStreamer bindings and the infrastructure that I was writing over the last year or two for writing GStreamer plugins in Rust.
As written above, it generally went all fine and was quite a pleasure but there were a few things that seem noteworthy. But first of all, writing this in Rust was much more convenient and fun than writing it in C would’ve been, and I’ve written enough similar code in C before. It would’ve taken quite a bit longer, I would’ve had to debug more problems in the new code during development (there were actually surprisingly few things going wrong during development, I expected more!), and probably would’ve written less exhaustive tests because writing tests in C is just so inconvenient.
Rust does not prevent deadlocks
While this should be clear, and was also clear to myself before, this seems like it might need some reiteration. Safe Rust prevents data races, but not all possible bugs that multi-threaded programs can have. Rust is not magic, only a tool that helps you prevent some classes of potential bugs.
For example, you can’t just stop thinking about lock order if multiple mutexes are involved, or carelessly use condition variables without making sure that your conditions actually make sense and are accessed atomically. As a wise man once said, “the safest program is the one that does not run at all”, and a deadlocking program is very close to that.
The part about condition variables might be something that can be improved in Rust. Without this, you can easily end up in situations where you wait forever or your conditions are actually inconsistent. Currently Rust’s condition variables only require a mutex to be passed to the functions for waiting for the condition to be notified, but it would probably also make sense to require passing the same mutex to the constructor and notify functions to make it absolutely clear that you need to ensure that your conditions are always accessed/modified while this specific mutex is locked. Otherwise you might end up in debugging hell.
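The discipline being asked for here can at least be followed by convention today. A generic sketch (not code from the element; all names are made up) that keeps the flag and its condition variable behind the same mutex:

// Bundle the flag, the mutex and the condition variable so it is obvious
// which lock protects the condition.
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

fn main() {
    let pair = Arc::new((Mutex::new(false), Condvar::new()));
    let pair2 = Arc::clone(&pair);

    let waiter = thread::spawn(move || {
        let (lock, cvar) = &*pair2;
        let mut recording = lock.lock().unwrap();
        // wait() atomically unlocks the mutex and re-locks it on wake-up,
        // so the flag is always re-checked while the lock is held.
        while !*recording {
            recording = cvar.wait(recording).unwrap();
        }
        println!("recording started");
    });

    let (lock, cvar) = &*pair;
    // Modify the condition under the same mutex, then notify.
    *lock.lock().unwrap() = true;
    cvar.notify_one();
    waiter.join().unwrap();
}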
Fortunately during development of the plugin I only ran into a simple deadlock, caused by accidentally keeping a mutex locked for too long and then running into conflict with another one. Which is probably an easy trap if the most common way of unlocking a mutex is to let the mutex lock guard fall out of scope. This makes it impossible to forget to unlock the mutex, but also makes it less explicit when it is unlocked and sometimes explicit unlocking by manually dropping the mutex lock guard is still necessary.
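A tiny generic example of that explicit unlock (again not from the plugin; the names are invented): dropping the guard by hand releases the first lock before the second one is taken, rather than holding it to the end of the scope.

use std::sync::Mutex;

fn record_event(buffer: &Mutex<Vec<u8>>, stats: &Mutex<u64>) {
    let mut data = buffer.lock().unwrap();
    data.push(1);
    // Explicitly release the first lock; otherwise the guard lives until
    // the end of the function and is still held while the second lock is
    // taken below, which is how lock-ordering trouble starts.
    drop(data);

    *stats.lock().unwrap() += 1;
}

fn main() {
    let buffer = Mutex::new(Vec::new());
    let stats = Mutex::new(0u64);
    record_event(&buffer, &stats);
    println!("events so far: {}", stats.lock().unwrap());
}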
So in summary, while a big group of potential problems with multi-threaded programs are prevented by Rust, you still have to be careful to not run into any of the many others. Especially if you use lower-level constructs like condition variables, and not just e.g. channels. Everything is however far more convenient than doing the same in C, and with more support by the compiler, so I definitely prefer writing such code in Rust over doing the same in C.
Missing API
As usual, for the first dozen projects using a new library or new bindings to an existing library, you’ll notice some missing bits and pieces. That a relatively core part of GStreamer, the GstRegistry API, was missing was surprising nonetheless. True, you usually don’t use it directly and I only need to use it here for loading the new plugin from a non-standard location, but it was still surprising. Let’s hope this was the biggest oversight. If you look at the issues page on GitHub, you’ll find a few other things that are still missing though. But nobody needed them yet, so it’s probably fine for the time being.
Another piece of missing API that I noticed during development was that many manual (i.e. not auto-generated) bindings didn’t have the Debug trait implemented, or not in a very useful way. This is solved now, as otherwise I wouldn’t have been able to properly log what is happening inside the element to allow easier debugging later if something goes wrong.
Apart from that there were also various other smaller things that were missing, or bugs (see below) that I found in the bindings while going through all these. But those seem not very noteworthy – check the commit logs if you’re interested.
Bugs, bugs, bugs
I also found a couple of bugs in the bindings. They can be broadly categorized in two categories
- Annotation bugs in GStreamer. The auto-generated parts of the bindings are generated from an XML description of the API, that is generated from the C headers and code and annotations in there. There were a couple of annotations that were wrong (or missing) in GStreamer, which then caused memory leaks in my case. Such mistakes could also easily cause memory-safety issues though. The annotations are fixed now, which will also benefit all the other language bindings for GStreamer (and I’m not sure why nobody noticed the memory leaks there before me).
- Bugs in the manually written parts of the bindings. Similarly to the above, there was one memory leak and another case where a function could’ve returned NULL but did not have this case covered on the Rust-side by returning an Option<_>.
Generally I was quite happy with the lack of bugs though, the bindings are really ready for production at this point. And especially, all the bugs that I found are things that are unfortunately “normal” and common when writing code in C, while Rust is preventing exactly these classes of bugs. As such, they have to be solved only once at the bindings layer, and then you’re free of them: you don’t have to spend any brain capacity on their existence anymore and can use your brain to solve the actual task at hand.
Inconvenient API
Similar to the missing API, whenever using some rather new API you will find things that are inconvenient and could ideally be done better. The biggest case here was the GstSegment API. A segment represents a (potentially open-ended) playback range and contains all the information to convert timestamps to the different time bases used in GStreamer. I’m not going to get into details here, best check the documentation for them.
A segment can be in different formats, e.g. in time or bytes. In the C API this is handled by storing the format inside the segment, and requiring you to pass the format together with the value to every function call; internally there are some checks that let the function fail if there is a format mismatch. In the previous version of the Rust segment API, this was done the same way, which caused lots of unwrap() calls in this element.
But in Rust we can do better, and the new API for the segment now encodes the format in the type system (i.e. there is a Segment<Time>) and only values with the correct type (e.g. ClockTime) can be passed to the corresponding functions of the segment. In addition there is a type for a generic segment (which still has all the runtime checks) and functions to “cast” between the two.
Overall this gives more type-safety (the compiler already checks that you don’t mix calculations between seconds and bytes) and makes the API usage more convenient as various error conditions just can’t happen and thus don’t have to be handled. Or like in C, are simply ignored and not handled, potentially leaving a trap that can cause hard to debug bugs at a later time.
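As a rough illustration of the pattern (a generic phantom-type sketch, not the actual gstreamer-rs Segment API), encoding the format in the type looks something like this:

use std::marker::PhantomData;

// Marker types standing in for the segment formats.
struct Time;
struct Bytes;

// The format lives in the type, so a Segment<Time> can only ever be used
// with time values and a Segment<Bytes> only with byte offsets.
struct Segment<F> {
    start: u64,
    stop: u64,
    _format: PhantomData<F>,
}

impl<F> Segment<F> {
    fn new(start: u64, stop: u64) -> Self {
        Segment { start, stop, _format: PhantomData }
    }
}

impl Segment<Time> {
    // Only available on time segments; mixing formats up becomes a
    // compile-time error instead of a runtime check to unwrap().
    fn clamp_time(&self, t: u64) -> u64 {
        t.max(self.start).min(self.stop)
    }
}

fn main() {
    let time_segment: Segment<Time> = Segment::new(0, 10_000);
    let _byte_segment: Segment<Bytes> = Segment::new(0, 4_096);
    println!("{}", time_segment.clamp_time(12_000)); // prints 10000
    // _byte_segment.clamp_time(5); // would not compile: no such method
}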
That Rust requires all errors to be handled makes it very obvious how many potential error cases the average C code out there is not handling at all, and also shows that a more expressive language than C can easily prevent many of these error cases at compile-time already.
This week we’ve taken a stroll around a parallel universe and watched some YouTube. Patreon updates its fee structure and then realises it was a terrible idea, Mozilla releases a speech-to-text engine, Oumuamua gets probed and Microsoft release the Q# quantum programming language.
It’s Season Ten Episode Forty-One of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.
In this week’s show:
- We discuss what we’ve been up to recently:
- Mark has been exploring Oxford in a parallel universe.
- Alan has been watching YouTube.
- We discuss the news:
- Patreon announce that they are Updating Patreon’s Fee Structure and the Internet caught fire. Since recording this episode Patreon have said, “We messed up. We’re sorry, and we’re not rolling out the fees change.”
- Mozilla releases Speech-to-text engine and a voice training dataset
- Oumuamua to be probed
- Microsoft’s Q# quantum programming language out now in preview
- We discuss the community news:
- We mention some events:
- Google Code-in: 28th November to 17th January 2018 – All around the world.
- FOSDEM 2018: 3 & 4 February 2018. Brussels, Belgium.
- UbuCon Europe 2018: 27th, 28th and 29th of April 2018. Xixón, Spain.
- This week’s cover image is taken from Wikimedia.
That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.
- Join us in the Ubuntu Podcast Chatter group on Telegram
December 13, 2017
Because of the distributed nature of Ubuntu development, it is sometimes a little difficult for me to keep track of the "special" URLs for various actions or reports that I'm regularly interested in.
Therefore I started gathering them in my personal wiki (I use the excellent "zim" desktop wiki), and realized some of my colleagues and friends would be interested in that list as well. I'll do my best to keep this blog post up-to-date as I discover new ones.
If you know of other candidates for this list, please don't hesitate to get in touch!
Behold, tribaal's "secret URL" list!
Pending SRUs
Once a package has been uploaded to a -proposed pocket, it needs to be verified as per the SRU process. Packages pending verification end up in this list.
https://people.canonical.com/~ubuntu-archive/pending-sru.html
Sponsorship queue
People who don't have upload rights for the package they fixed need to request sponsorship. This queue is the place to check if you're waiting for someone to pick it up and upload it.
http://reqorts.qa.ubuntu.com/reports/sponsoring/
Upload queue
A log of what got uploaded (and to which pocket) for a particular release, and also a queue of packages that have been uploaded and are now waiting for review before entering the archive.
For the active development release this is for brand new packages; for frozen releases these are SRU packages. Once approved at this step, the packages enter -proposed.
https://launchpad.net/ubuntu/xenial/+queue?queue_state=1
The launchpad build farm
A list of all the builders Launchpad currently has, broken down by architecture. You can look at jobs being built in real time, and the occupation level of the whole build farm in here as well.
https://launchpad.net/builders
Proposed migration excuses
For the currently in-development Ubuntu release, packages are first uploaded to -proposed, then a set of conditions need to be met before it can be promoted to the released pockets. The list of packages that have failed this automatic migration and the reason why they haven't can be found on this page.
https://people.canonical.com/~ubuntu-archive/proposed-migration/update_excuses.html
Merge-O-matic
Not really a "magic" URL, but this system gathers information and lists for the automatic merging system, which merges Debian packages into the development release of Ubuntu.
Transitions tracker
This page tracks transitions, which are toolchain changes or other package updates with "lots" of dependencies, and shows the build status of those dependencies.
https://people.canonical.com/~ubuntu-archive/transitions/html/
I long thought about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.
And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But there the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see a movie from that perspective that was forbidden to us.
He explained to us how the machines worked, showed us how the film rolls were put in and showed us how to see the signals on the screen which are the sign to turn on the second projector with the new roll.
During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.
Nothing really happened, and we didn't say anything.
December 12, 2017
With each new release, the Xfce PulseAudio Plugin becomes more refined and better suited for Xfce users. The latest release adds support for the MPRIS Playlists specification and improves support for Spotify and other media players.
What’s New?
New Feature: MPRIS Playlists Support
- This is a basic implementation of the MediaPlayer2.Playlists specification.
- The 5 most recently played playlists are displayed (if supported by the player). Admittedly, I have not found a player that seems to implement the ordering portion of this specification.
New Feature: Experimental libwnck Support
- libwnck is a window management library. This feature adds the “Raise” method for media players that do not support it, allowing the user to display the application window after clicking the menu item in the plugin.
- Spotify for Linux is the only media player that I have found which does not implement this method. Since this is the media player I use most of the time, this was an important issue for me to resolve.
General
- Unexpected error messages sent via DBUS are now handled gracefully. The previous release of Pithos (1.1.2) returned a Python error for some DBUS queries, which previously crashed the plugin.
- Numerous memory leaks were patched.
Translation Updates
Chinese (Taiwan), Croatian, Czech, Danish, Dutch, French, German, Hebrew, Japanese, Korean, Lithuanian, Polish, Russian, Slovak, Spanish, Swedish, Thai
Downloads
The latest version of Xfce PulseAudio Plugin can always be downloaded from the Xfce archives. Grab version 0.3.4 from the link below.
- SHA-256: 43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220
- SHA-1: 171f49ef0ffd1e4a65ba0a08f656c265a3d19108
- MD5: 05633b8776dd3dcd4cda8580613644c3
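If you want to check a download against the published values, something along these lines should do it (the tarball filename here is an assumption based on the version number, so adjust it to whatever you actually downloaded):

echo "43fa39400eccab1f3980064f42dde76f5cf4546a6ea0a5dc5c4c5b9ed2a01220  xfce4-pulseaudio-plugin-0.3.4.tar.bz2" | sha256sum -c -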
December 11, 2017
Testing a switch to default Breeze-Dark Plasma theme in Bionic daily isos and default settings
Kubuntu General News
Today’s daily ISO for Bionic Beaver 18.04 sees an experimental switch to the Breeze-Dark Plasma theme by default.
Users running 18.04 development version who have not deliberately opted to use Breeze/Breeze-Light in their systemsettings will also see the change after upgrading packages.
Users can easily revert back to the Breeze/Breeze-Light Plasma themes by changing this in systemsettings.
Feedback on this change will be very welcome:
You can reach us on the Kubuntu IRC channel or Telegram group, on our user mailing list, or post feedback on the (unofficial) Kubuntu web forums
Thank you to Michael Tunnell from TuxDigital.com for kindly suggesting this change.
December 10, 2017
[Image: Ubucons around the world]
Is Ubucon made for me?
This event is just for you! ;) You don't need to be a developer: you'll enjoy a lot of talks about everything you can imagine about Ubuntu and share great moments with other users. Even the language won't be a problem; there, you'll meet people from everywhere and surely someone will speak your language :)
You can read different posts about the previous Ubucon in Paris here:
English: https://insights.ubuntu.com/2017/09/25/ubucon-europe-2017/
Portuguese: https://carrondo.net/wp/2017/09/19/ubucon-europa-2017-paris/
Spanish: http://thinkonbytes.blogspot.pt/2017/09/2-ubucon-europea.html
Another in Spanish: https://www.innerzaurus.com/ubuntu-touch/articulos-touch/cronicas-la-ubucon-paris-2017/
Where?
[Image: Gijón/Xixón, Asturies, Spain]
[Image: Antiguo Instituto]
When?
27th, 28th and 29th of April 2018.
Organized by
- Francisco Javier Teruelo de Luis
- Francisco Molinero
- Sergi Quiles Pérez
- Antonio Fernandes
- Paul Hodgetts
- Santiago Moreira
- Joan CiberSheep
- Fernando Lanero
- Manu Cogolludo
- Marcos Costales
Get in touch!
We're still working on a few details, so please don't book a flight yet. Join our Telegram channel, Google+ or Twitter now to get the latest news and future discounts on hotels and transport.
December 08, 2017
Being on furlough from your job for just under four full months and losing 20 pounds during that time can hardly be considered healthy. If anything, it means that something is wrong. I allude in various fora that I work for a bureau of the United States of America's federal government as a civil servant. I am not particularly high-ranking as I only come in at GS-7 Step 1 under "CLEVELAND-AKRON-CANTON, OH" locality pay. My job doesn't normally have me working a full 12 months out of the year (generally 6-8 months depending upon the needs of the bureau) and I am normally on-duty only 32 hours per week.
As you might imagine, I have been trying to leave that job. Unfortunately, working for this particular government bureau makes any resume look kinda weird. My local church has some domestic missions work to do and not much money to fund it. I already use what funding we have to help with our mission work reaching out to one of the local nursing homes to provide spiritual care as well as frankly one of the few lifelines to the outside world some of those residents have. Xubuntu and the bleeding edge of LaTeX2e plus CTAN help greatly in preparing devotional materials for use in the field at the nursing home. Funding held us back from letting me assist with Hurricane Harvey or Hurricane Maria relief especially since I am currently finishing off quite a bit of training in homeland security/emergency management. But for the lack of finances to back it up as well as the lack of a large enough congregation, there is quite a bit to do. Unfortunately the numbers we get on a Sunday morning are not what they once were when the congregation had over a hundred in attendance.
I don't like talking about numbers in things like this. If you take 64 hours in a two-week pay period, multiply it by the minimum of 20 pay periods that generally occur, and then multiply by the hourly equivalent rate for my grade and step, it only comes out to a pre-tax gross under $26,000. I rounded up to a whole number. Admittedly it isn't too much.
At this time of the year last year, many people across the Internet burned cash by investing in the Holiday Hole event put on by the Cards Against Humanity people. Over $100,000 was raised to dig a hole about 90 miles outside Chicago and then fill the thing back in. This year people spent money to help buy a piece of land to tie up the construction of President Trump's infamous border wall and even more which resulted in Cards Against Humanity raking in $2,250,000 in record time.
Now, the church I would be beefing up the missionary work with doesn't have a web presence. It doesn't have an e-mail address. It doesn't have a fax machine. Again, it is a small church in rural northeast Ohio. According to IRS Publication 526, contributions to them are deductible under current law provided you read through the stipulations in that thin booklet and are a taxpayer in the USA. Folks outside the USA could contribute in US funds, but I don't know how foreign tax administrations would treat such contributions, if at all.
The congregation is best reached by writing to:
West Avenue Church of Christ
5901 West Avenue
Ashtabula, OH 44004
United States of America
With the continuing budget shenanigans about how to fund Fiscal Year 2018 for the federal government, I get left wondering if/when I might be returning to duty. Helping the congregation fund me to undertake missions for it removes that as a concern. Besides, any job that gives you gray hair and puts 30 pounds on you during eight months of work cannot be good for you to remain at. Too many co-workers took rides away in ambulances at times due to the pressures of the job during the last work season.
Not Messing With Hot Wheels Car Insertion by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
I've used CI in many projects in the past, and it's a really handy tool. However, I've never had to set it up myself and when I've looked it's been non-trivial to do so. The great news is this is really easy to do in GitLab!
There's lots of good documentation on how to set it up, but to save you some time I'll show how I set it up for Simple Scan, which is a fairly typical GNOME application.
To configure CI you need to create a file called .gitlab-ci.yml in your git repository. I started with the following:
build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install
The first line is the name of the job - "build-ubuntu". This is going to define how we build Simple Scan on Ubuntu.
The "image" is the name of a Docker image to build with. You can see all the available images on Docker Hub. In my case I chose an official Ubuntu image and used the "rolling" link which uses the most recently released Ubuntu version.
The "before_script" defines how to set up the system before building. Here I just install the packages I need to build simple-scan.
Finally the "script" is what is run to build Simple Scan. This is just what you'd do from the command line.
And with that, every time a change is made to the git repository Simple Scan is built on Ubuntu and tells me if that succeeded or not! To make things more visible I added the following to the top of the README.md:
[](https://gitlab.gnome.org/GNOME/simple-scan/pipelines)
This gives the following image that shows the status of the build:
And because there are many more consumers of Simple Scan than just Ubuntu, I added the following to .gitlab-ci.yml:
build-fedora:
  image: fedora:latest
  before_script:
    - dnf install -y meson vala gettext itstool gtk3-devel libgusb-devel colord-devel PackageKit-glib-devel libwebp-devel sane-backends-devel
  script:
    - meson _build
    - ninja -C _build install
Now it builds on both Ubuntu and Fedora with every commit!
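If the project has (or grows) tests wired up through meson (I haven't checked whether Simple Scan defines any), running them is just one more line in the script; a sketch based on the Ubuntu job above:

build-ubuntu:
  image: ubuntu:rolling
  before_script:
    - apt-get update
    - apt-get install -q -y --no-install-recommends meson valac gcc gettext itstool libgtk-3-dev libgusb-dev libcolord-dev libpackagekit-glib2-dev libwebp-dev libsane-dev
  script:
    - meson _build
    - ninja -C _build install
    # run any tests defined in the meson build as part of the same job
    - ninja -C _build test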
I hope this helps you get started with CI and gitlab.gnome.org. Happy hacking.