February 20, 2019

Across my career I have met countless people who have struggled with Imposter Syndrome.

For those of you not up on the lingo, Imposter Syndrome is when people feel they are not experienced, qualified, or talented enough to be in the position they are in (such as a new role in a company). Typically the sensation is a feeling of “well, it is only a matter of time until people find out that I suck at this and then I will be out on my ear”. Hence the “Imposter” in Imposter syndrome.

For a long time people far smarter than me thought this to be a condition that primarily affected high-performing women, but since then it has been connected to men, women, trans, and other demographics. It is unsurprisingly a condition that can especially affect those in minorities and people of color.

Here’s the deal: Imposter Syndrome is really common, but a lot of people simply don’t talk about it. Why? Well, it takes a strong person to climb up the ladder in their career and openly show signs of weakness. Many a presentation slide has been peppered with inspirational blatherings of “true leaders share their vulnerabilities”, but few leaders actually have the confidence to do this. I promise you that many of the C-level execs, SVPs and VPs in your company struggle with Imposter Syndrome, particularly those who are new in their positions or first-timers at that level.

Imposter Syndrome is not just common, but it is entirely normal.

Firstly, our brains are hard wired to look for threats in our environment and to actively perform loss prevention. We are also wired to care about status and our social standing in our groups. This milieu of status, social positioning, and risk can generate this unstable “imposter” feeling many people often report.

I sympathize with people who experience Imposter Syndrome because I have experienced it myself too.

When I think back to many of the key milestones in my life…my first published piece, my first real job, my first book, my first time as a manager, getting married, having a kid, playing my first shows in my band, starting my business…there was always an element of Imposter Syndrome gift-wrapped within these moments. It took me some time to understand that this was entirely normal and I needed to turn it from a negative into a net positive.

So, how do you kick it?

OK, hold your horses. We need to get two things straight:

  1. I am not a doctor. If you take medical or psychological advice from me, you need to stop doing that.
  2. You will never 100% get rid of it. You need to focus on managing it.

Imposter Syndrome is similar to anxiety in many ways. People who experience anxiety often want to figure out a way to completely eradicate that awful feeling from their lives. As many therapists and mindfulness professionals will attest though: you can’t really get rid of it, you just need to change your relationship with it.

Here are five ways to do this.

1. Measure yourself and your performance

The root cause of Imposter Syndrome is usually a feeling. It is typically a sensation of not measuring up as opposed to a concrete, data-driven conclusion. Here’s the thing: feelings are notoriously bollocks in terms of reliability.

So, become more data-driven. How would you define success in your career? Is it how much product you sell? Is it engagement on social media and your blog? Is it managing a team well? Is it shipping reliable code? Is it writing great documentation? Define an objective set of metrics for success and get a sanity check on them from friends and colleagues.

Pick five to seven of these metrics and start measuring your work. Don’t set unrealistic goals, but focus on growth and development. Can you keep growing in those areas?

For example, if you are a marketer, you may consider traffic growth to a website as a key metric for your profession. Are you generally seeing the trend moving forward? Yes? Great! No? No problem, what new approaches can you explore to move the needle? There are always a wealth of ideas and approaches available online…go and explore and try some new things.

Being great at your job is not just about delivering results, but it is about always learning and growing, and being humble that we are eternal students. Track your progress: it will help to show in black and white that you are growing and developing.

2. Get objective validation from your peers

It is astonishing how poor some managers are at providing validation to their teams. Some people seem to think that their teams should “know” when they are doing a great job or that managers don’t need to provide validation.


I don’t care whether you are the CEO of a Fortune 500 company or Thomas from my local bar: everyone needs to know they are on the right track. We all seek validation from our friends, family, colleagues, associates, and more. Not getting the right level of validation can be a critical source of Imposter Syndrome issues.

I remember I once had a manager who was terrible at providing validation and I had no idea whether he thought I was any good or not.

Pictured: terrible manager.

My colleague (and good friend) said, “don’t go down that dark alleyway, it is a pit of self-doubt”. He suggested I raise my concern with our manager, which I did, and he had no idea this was an issue. He did a much better job providing feedback for both great work and areas of improvement, and my concerns abated significantly.

Talk it through with your manager and colleagues. Tell them you are not needy, but you need to ensure your perception of your work is calibrated with theirs. This is part of getting good at what you do, and good managers need to provide good validation.

3. Build a team of mentors around you

I remember when I first moved to America, my wife Erica always stunned me. If she wasn’t sure of a given strategic or tactical move in her business, she would call other people in the industry to ask for their input and guidance.

I was amazed. Back then, rather embarrassingly, I almost never asked for advice. It wasn’t that I wasn’t receptive, but I just didn’t think to reach out. It never struck me that this was an option. She helped change that into a healthy habit.

Many of the world’s problems have been figured out by other people. These solutions live in (a) their heads, and (b) the books they write. Why on earth wouldn’t we tap this experience and learn from it?

Mentoring is enormously powerful. It doesn’t just grow our skills, but it is a valuable feedback mechanism for ensuring we are on the right path. Try to find people you know and respect and ask them for a few calls here and there. Don’t just limit yourself to one mentor: build a team that can mentor you in different skills.

I absolutely love mentoring people: it is part of the reason I started consulting and being an advisor. It is awesome to help shape people, watch them grow, and affirm their progress as they do it. We all need mentors.

4. Set yourself some more realistic expectations

Many of you reading this will be really driven about being successful in your career and doing a good job. This is admirable, but there is a risk: becoming a ludicrously unrealistic perfectionist. This is a sure-fire way to get a dose of heartburn.

Life isn’t perfect. You are going to screw up. You are going to make mistakes. You are going to develop new ideas you wish you had years back. You are going to use approaches and methods that are a distraction or don’t work.

This is normal. You weren’t born perfect at what you do. No-one was. Every one of us is learning and growing, but as I said earlier, many people simply don’t talk about it. There is not a single person, even well known hot shots such as Elon Musk, Sheryl Sandberg, George Clooney, and Neil deGrasse Tyson, who hasn’t made significant errors of judgement or mistakes over the course of their career. Why should you be any different?

Not perfect. I hear his “Twister” game chops are severely lacking.

Take a step back and re-evaluate your position. Do you think your colleagues are really expecting perfection from you? Do you think they are expecting you to be rock solid at your job all the time? Do they have the same expectations for themselves? Probably not.

We should focus on always growing and evolving, but on a foundation that we are all imperfect human beings.

5. Don’t take yourself so seriously

This is, for me personally, the most critical of these suggestions, but again it is something we all struggle with.

I don’t believe life should be one-dimensional. I absolutely love my job, but I also love being a dad and husband. I love playing music and going to gigs. I love going for a few beers with my buddies at my local. I love laughing at stand-up comedy, movies, and TV shows.

I get enormous enjoyment from my career, but it is one component in my life, not the only one. Are some people going to think I am imperfect? Sure, that’s fine. I am imperfect.

I am fairly convinced a big chunk of figuring out the right balance in life is knowing when to give a shit or not to. Focus on doing great work, building great relationships, and being an honorable and civil person: those are the most important things. Don’t focus on a 100% success rate in everything in your career: it not only isn’t possible, but it will take important mental energy from other elements of your life too.

Well, I hope some of this was useful. If you thought this was interesting, you may also want to check out 10 Avoidable Career Mistakes (and How to Conquer Them) and my Remote Working Survival Guide.

The post Imposter Syndrome: Understanding and Managing It appeared first on Jono Bacon.

on February 20, 2019 07:02 AM

February 19, 2019

Full Circle Weekly News #122

Full Circle Magazine

Canonical Apologizes for Boot Failure in Ubuntu 18.10 & 18.04, Fix Available Now
Source: https://news.softpedia.com/news/canonical-apologizes-for-another-ubuntu-linux-kernel-regression-fix-available-524892.shtml

Open source project aims to make Ubuntu usable on Arm-powered Windows laptops
Source: https://www.techrepublic.com/article/open-source-project-aims-to-make-ubuntu-usable-on-arm-powered-windows-laptops/

KDE neon Systems Based on Ubuntu 16.04 LTS Have Reached End of Life, Upgrade Now
Source: https://news.softpedia.com/news/kde-neon-systems-based-on-ubuntu-16-04-lts-have-reached-end-of-life-upgrade-now-524959.shtml

Good Guy Malware: Linux Virus Removes Other Infections to Mine on Its Own
Source: https://news.softpedia.com/news/good-guy-malware-linux-virus-removes-other-infections-to-mine-on-its-own-524915.shtml

Dirty_Sock vulnerability in Canonical’s snapd could give root access on Linux machines
Source: https://betanews.com/2019/02/13/dirty-sock-snapd-linux/

Ethical Hacking, Ubuntu-Based BackBox Linux OS Is Now Available on AWS
Source: https://news.softpedia.com/news/ethical-hacking-ubuntu-based-backbox-linux-is-now-available-on-aws-524960.shtml

on February 19, 2019 07:23 PM

February 18, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 566 for the week of February 10 – 16, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • TheNerdyAnarchist
  • mIk3_08
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on February 18, 2019 11:50 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 204.5 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 12 hours (out of 12 hours allocated).
  • Antoine Beaupré did 9 hours (out of 20.5 hours allocated, thus keeping 11.5h extra hours for February).
  • Ben Hutchings did 24 hours (out of 20 hours allocated plus 5 extra hours from December, thus keeping one extra hour for February).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 42.5 hours (out of 20.5 hours allocated + 25.25 extra hours, thus keeping 3.25 extra hours for February).
  • Hugo Lefeuvre did 20 hours (out of 20 hours allocated).
  • Lucas Kanashiro did 5 hours (out of 4 hours allocated plus one extra hour from December).
  • Markus Koschany did 20.5 hours (out of 20.5 hours allocated).
  • Mike Gabriel did 10 hours (out of 10 hours allocated).
  • Ola Lundqvist did 4.5 hours (out of 8 hours allocated + 6.5 extra hours, thus keeping 8 extra hours for February, as he also gave 2h back to the pool).
  • Roberto C. Sanchez did 10.75 hours (out of 20.5 hours allocated, thus keeping 9.75 extra hours for February).
  • Thorsten Alteholz did 20.5 hours (out of 20.5 hours allocated).

Evolution of the situation

In January we again managed to dispatch all available hours (well, except one) to contributors. We also still had one new contributor in training, though starting in February Adrian Bunk has become a regular contributor. But: we will lose another contributor in March, so we are still very much looking for new contributors. Please contact Holger if you are interested in becoming a paid LTS contributor.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file has 42 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on February 18, 2019 02:05 PM


Robert Ancell

    Here is the story of how I fell down a rabbit hole and ended up learning far more about the GIF image format than I ever expected...
    We had a problem with users viewing a promoted snap using GNOME Software. When they opened the details page they'd have huge CPU and memory usage. Watching the GIF in Firefox didn't show a problem - it showed a fairly simple screencast demoing the app without any issues.
    I had a look at the GIF file and determined:
    • It was quite large for a GIF (13MB).
    • It had a lot of frames (625).
    • It was quite high resolution (1790×1060 pixels).
    • It appeared the GIF was generated from a compressed video stream, so most of the frame data was just compression artifacts. GIF is lossless so it was faithfully reproducing details you could barely notice.
    GNOME Software uses GTK+, which uses gdk-pixbuf to render images. So I had a look at the GIF loading code. It turns out that all the frames are loaded into memory. That comes to 625×1790×1060×4 bytes. OK, that's about 4.4GB... I think I see where the problem is. There's a nice comment in the gdk-pixbuf source that sums up the situation well:

     /* The below reflects the "use hell of a lot of RAM" philosophy of coding */

    They weren't kidding. 🙂

    While this particular example is hopefully not the normal case, the GIF format has somewhat come back from the dead in recent years to become a popular format again. So it would be nice if gdk-pixbuf could handle these cases well. This was going to be a fairly major change to make.

    The first step in refactoring is making sure you aren't going to break any existing behaviour when you make changes. To do this the code being refactored should have comprehensive tests around it to detect any breakages. There are a good number of GIF tests currently in gdk-pixbuf, but they are mostly around ensuring particular bugs don't regress rather than checking all cases.

    I went looking for a GIF test suite that we could use, but what was out there was mostly just collections of GIFs people had made over the years. This would give some good real world examples but no certainty that all cases were covered or why your code was breaking if a test failed.

    If you can't find what you want, you have to build it. So I wrote PyGIF - a library to generate and decode GIF files and made sure it had a full test suite. I was pleasantly surprised that GIF actually has a very well written specification, and so implementation was not too hard. Diversion done, it was time to get back to gdk-pixbuf.

    Tests plugged in, and the existing code turned out to have a number of issues. I fixed them, though it took a lot of sanity to do so. It would have been easier to replace the code with new code that met the test suite, but I wanted the patches to be back-portable to stable releases (i.e. Ubuntu 16.04 and 18.04 LTS).

    And with a better foundation, I could now make GIF frames load on demand. May your GIF viewing in GNOME continue to be awesome.
    on February 18, 2019 12:55 PM

    February 17, 2019

    Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.2 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]
    on February 17, 2019 12:01 AM

    February 16, 2019

    I am no expert on all the ins and outs of virtualization; hell, before I started looking into this stuff a “hypervisor” to me was just a really cool visor.

    Geordi La Forge

    But after reading a bunch of documentation, blog posts and StackExchange entries, I think I have enough of a basic understanding—or at least I have learnt enough to get it to work for my limited use case—to write some instructions.

    The virtualization method I went with is Kernel-based Virtual Machine (KVM) which, to paraphrase Wikipedia, is a virtualization module in the Linux kernel that allows it to function as a hypervisor, i.e. it is able to create, run and manage virtual machines (emulated computer systems). 🤓

    Creating a Virtualization Server with KVM

    My home server runs Ubuntu and (among other things) I have set it up to use KVM and QEMU for virtualization, plus I have the libvirt toolset installed for managing virtual machines from the command line and to help with accessing virtual machines over my local network on my other Linux devices.

    My Server OS: Ubuntu 18.04.2
    My Client OS: Fedora 29
    VM OS: whatever you prefer; for the example I’m using Fedora 29

    For all of the following instructions, I am going to assume you are logged into your server (or whatever is going to be your virtualization hardware) and are in a terminal prompt (either directly or over ssh).

    Part 0: Prerequisites

    First, you have to see if your server’s processor supports hardware virtualization. You can do so by running the following command.

    egrep -c '^flags.*(vmx|svm)' /proc/cpuinfo

    This will check your CPU's information for the relevant extension support and return a number (based on the number of cores in your CPU). If it is greater than 0, your machine supports virtualization! 🎉 But if the result is 0 or empty, then it does not and there’s no point in continuing.
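    To make that concrete, here is the same check run against a made-up /proc/cpuinfo-style sample (the flag lines below are illustrative, not from a real machine):

```shell
# Two fake 'flags' lines advertising vmx (Intel); svm would be the
# AMD equivalent. This mirrors what the egrep above counts.
sample='processor   : 0
flags       : fpu vme de pse msr vmx ssse3
processor   : 1
flags       : fpu vme de pse msr vmx ssse3'

# Count lines starting with "flags" that mention vmx or svm.
count=$(printf '%s\n' "$sample" | grep -Ec '^flags.*(vmx|svm)')
echo "$count"   # one match per core; greater than 0 means supported
```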

    Part 1: Server Setup

    Next, we have to install KVM, and the other required software for a virtualization environment. For an Ubuntu-based server do the following.

    sudo apt install qemu-kvm libvirt-bin virtinst bridge-utils

    Next, start and enable the virtualization service:

    sudo systemctl enable libvirtd.service
    sudo systemctl start libvirtd.service

    It’s as simple as that. Now you can also use virsh from the libvirt toolset to see the status of your virtual machines:

    virsh list --all

    You likely won’t see any listed yet, just empty column headings like:

    Id    Name                           State

    On to installation!

    Part 2: Installing a Virtual Machine

    I’m going to assume for this part that you have already downloaded a disk image of your desired operating system, that will be used for the virtual machine, and you know where it is on the server.

    Deploying a virtual machine only requires one command: virt-install but it has several option flags that you’ll need to go through and adjust to your preference.

    The following is an example using Fedora 29.

    sudo virt-install \
    --name Fedora-29 \
    --ram=2048 \
    --vcpus=2 \
    --cpu host \
    --disk path=/var/lib/libvirt/images/Fedora-29.qcow2,size=32,device=disk,bus=virtio,format=qcow2 \
    --cdrom /var/lib/libvirt/boot/Fedora-Workstation-netinst-x86_64-29-1.2.iso \
    --connect qemu:///system \
    --os-type=linux \
    --graphics spice \
    --hvm --noautoconsole

    From the above, the following are the bits you’ll need to edit to your preferences

    --name: Name your virtual machine (VM)
    --ram: Assign an amount of memory (in MB) to be used by the VM
    --vcpus: Select a number of CPU cores to be assigned to the VM
    --disk: The disk image used by the virtual machine. You need only specify the name (i.e. change Fedora-29 to something else) and update size=32 to a desired capacity for the disk image (in GB).
    --cdrom: The path to the boot image that is to be used by the virtual machine. It need not be in /var/lib/libvirt/boot but the full path must be included here.

    The disk format (qcow2), the I/O bus, and so on aren’t things I’m going to tinker with or claim to know much about; I’m just trusting other information I found.

    Once you have the option flags set and have run virt-install, you will likely see output similar to the following.

    WARNING  No operating system detected, VM performance may suffer. Specify an OS with --os-variant for optimal results.
    Starting install...
    Allocating 'Fedora-29.qcow2'
    Domain installation still in progress. You can reconnect to the console to complete the installation process.

    The “WARNING” is just that and nothing to worry about.

    At this point your virtual machine should be up and running and ready for you to connect to it. You can check the status of your virtual machines by again running virsh list --all and you should see something like:

    Id    Name                           State
     3     Fedora-29                      running
     -     Debian-9.7.0                   shut off

    You can create as many virtual machines as your server can handle at this point, though I wouldn’t recommend running too many concurrently as there’s only so far you can stretch the sharing of hardware resources.

    Part 3: Connecting to your Virtual Machine(s)

    To connect to your virtual machine you’re going to use a tool called Virtual Machine Manager; there are a few other applications out there, but this one worked the best and most consistently for me. You can likely install it on your system from the command line, using a package manager, as virt-manager.

    Virtual Machine Manager Logo

    Virtual Machine Manager can create and manage virtual machines just as we did in the command line on the server, but we’re going to use it on your computer as a client to connect to virtual machine(s) running remotely on your server.

    To add a connection, from the main window menubar, you’re going to go File > Add Connection..., which brings up the following dialog.

    Virtual Machine Manager Add Connection

    The hypervisor we are using is QEMU/KVM so that is fine as is, but in this dialog you will need to check Connect to remote host over SSH and enter your username and the hostname (or IP address) for your server, so it resembles the above, then hit “Connect”.

    If all goes well, your server should appear in the main window with a list of VMs (see below for an example) and you can double-click on any machine in the list to connect.

    Virtual Machine Manager Main Window

    Doing so will launch a new window and from there you can carry on as if it were a regular computer and go through the operating system install process.

    Virtual Machine Manager Connected

    Closing this window or quitting the Virtual Machine Manager app will not stop the virtual machine; it keeps running on your server.

    You can start and stop and even delete machines on your server using virt-manager on your computer, but it can also be done from the command line on your server with virsh, using some fairly straightforward commands:

    # to suspend a machine
    sudo virsh suspend Fedora-29
    # to shutdown a machine
    sudo virsh shutdown Fedora-29
    # to resume a machine
    sudo virsh resume Fedora-29
    # to remove a machine (stop it first, then delete its definition)
    sudo virsh destroy Fedora-29
    sudo virsh undefine Fedora-29

    A Few Notes

    Now, unless you have astoundingly good Wi-Fi, your best bet is to connect to your server over a wired connection—personally I have a direct connection via an ethernet cable between my server and another machine—otherwise (I found) there will be quite a bit of latency.

    on February 16, 2019 09:00 PM
    With Ubuntu 19.04’s feature freeze quickly approaching, we would like to announce the new updates coming to Ubuntu Studio 19.04. Updated Ubuntu Studio Controls This is really a bit of a bugfix for the version of Ubuntu Studio Controls that landed in 18.10. Ubuntu Studio Controls dramatically simplifies audio setup for the JACK Audio Connection […]
    on February 16, 2019 08:31 PM

    February 15, 2019

    Full Circle Weekly News #121

    Full Circle Magazine

    Endless OS Functionality Controls Simplify Computing
    Source: https://www.linuxinsider.com/story/Endless-OS-Functionality-Controls-Simplify-Computing-85819.html

    System76 ‘Darter Pro’ laptop finally here
    Source: https://betanews.com/2019/02/06/system76-darter-pro-linux-notebook/

    LibreELEC 9.0 released: Linux distro built around Kodi media center
    Source: https://liliputing.com/2019/02/libreelec-9-0-released-linux-distro-built-around-kodi-media-center.html

    Bareos 18.2 released
    Source: https://www.pro-linux.de/news/1/26733/bareos-182-freigegeben.html

    Linux kernel gets another option to disable Spectre mitigations
    Source: https://www.zdnet.com/article/linux-kernel-gets-another-option-to-disable-spectre-mitigations/

    Canonical Releases Important Ubuntu Linux Kernel Security Patches, Update Now
    Source: https://news.softpedia.com/news/canonical-releases-important-ubuntu-kernel-security-patches-update-now-524834.shtml

    Linux driver for old Mali GPUs should be maintained
    Source: https://www.golem.de/news/lima-projekt-linux-treiber-fuer-alte-mali-gpus-soll-eingepflegt-werden-1902-139251.html

    Hackers can compromise your Android phone with a single image file
    Source: https://bgr.com/2019/02/07/android-security-update-one-png-image-file-can-compromise-your-phone/

    on February 15, 2019 07:11 PM

    Back in August 2017, I had the privilege of being invited to support the hackathon for women in Prizren, Kosovo. One of the things that caught my attention at this event was the enthusiasm with which people from each team demonstrated their projects in five minute presentations at the end of the event.

    This encouraged me to think about further steps to support them. One idea that came to mind was introducing them to the Toastmasters organization. Toastmasters is not simply about speaking, it is about developing leadership skills that can be useful for anything from promoting free software to building successful organizations.

    I had a look at the Toastmasters club search to see if I could find an existing club for them to visit, but there don't appear to be any in Kosovo or neighbouring Albania.

    Starting a Toastmasters club at the Innovation Centre Kosovo

    In January, I had a conference call with some of the girls and explained the idea. They secured a venue, Innovation Centre Kosovo, for the evening of 11 February 2019.

    Albiona and I met on Saturday, 9 February and called a few people we knew who would be good candidates to give prepared speeches at the first meeting. They had 48 hours to prepare their Ice Breaker talks. The Ice Breaker is a 4-6 minute talk that people give at the beginning of their Toastmasters journey.

    Promoting the meeting

    At our club in EPFL Lausanne, meetings are promoted on a mailing list. We didn't have that in Kosovo but we were very lucky to be visited by Sara Koci from the morning TV show. Albiona and I were interviewed on the rooftop of the ICK on the day of the meeting.

    The first meeting

    That night, we had approximately 60 people attend the meeting.

    Albiona acted as the meeting presider and trophy master and I was the Toastmaster. At the last minute we found volunteers for all the other roles and I gave them each an information sheet and a quick briefing before opening the meeting.

    One of the speakers, Dion Deva, has agreed to share the video of his talk publicly:

    The winners were Dhurata for best prepared speech, Arti for best impromptu speech, and Ardora for best evaluation:

    After party

    Afterwards, some of us continued around the corner for pizzas and drinks and discussion about the next meeting.

    Future events in Kosovo and Albania

    Software Freedom Kosovo will be back from 4-7 April 2019 and I would encourage people to visit.

    OSCAL in Tirana, Albania is back on 18-19 May 2019 and they are still looking for extra speakers and workshops.

    Many budget airlines now service Prishtina from all around Europe - Prishtina airport connections, Tirana airport connections.

    on February 15, 2019 11:08 AM

    The Ubuntu team is pleased to announce the release of Ubuntu 18.04.2 LTS (Long-Term Support) for its Desktop, Server, and Cloud products, as well as other flavours of Ubuntu with long-term support.

    Like previous LTS series, 18.04.2 includes hardware enablement stacks for use on newer hardware. This support is offered on all architectures and is installed by default when using one of the desktop images.

    Ubuntu Server defaults to installing the GA kernel; however you may select the HWE kernel from the installer bootloader.

    This update also adds Raspberry Pi 3 as a supported image target for Ubuntu Server, alongside the existing Raspberry Pi 2 image.

    As usual, this point release includes many updates, and updated installation media has been provided so that fewer updates will need to be downloaded after installation. These include security updates and corrections for other high-impact bugs, with a focus on maintaining stability and compatibility with Ubuntu 18.04 LTS.

    Kubuntu 18.04.2 LTS, Ubuntu Budgie 18.04.2 LTS, Ubuntu MATE 18.04.2 LTS, Lubuntu 18.04.2 LTS, Ubuntu Kylin 18.04.2 LTS, and Xubuntu 18.04.2 LTS are also now available. More details can be found in their individual release notes:


    Maintenance updates will be provided for 5 years for Ubuntu Desktop, Ubuntu Server, Ubuntu Cloud, and Ubuntu Base. All the remaining flavours will be supported for 3 years.

    To get Ubuntu 18.04.2

    In order to download Ubuntu 18.04.2, visit:


    Users of Ubuntu 16.04 will be offered an automatic upgrade to 18.04.2 via Update Manager. For further information about upgrading, see:


    As always, upgrades to the latest version of Ubuntu are entirely free of charge.

    We recommend that all users read the 18.04.2 release notes, which document caveats and workarounds for known issues, as well as more in-depth notes on the release itself. They are available at:


    If you have a question, or if you think you may have found a bug but aren’t sure, you can try asking in any of the following places:

    #ubuntu on irc.freenode.net

    Help Shape Ubuntu

    If you would like to help shape Ubuntu, take a look at the list of ways you can participate at:


    About Ubuntu

    Ubuntu is a full-featured Linux distribution for desktops, laptops, clouds and servers, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

    Professional services including support are available from Canonical and hundreds of other companies around the world. For more information about support, visit:


    More Information

    You can learn more about Ubuntu and about this release on our website listed below:


    To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:


    Originally posted to the ubuntu-announce mailing list on Fri Feb 15 02:52:36 UTC 2019 by Adam Conrad, on behalf of the Ubuntu Release Team

    on February 15, 2019 07:25 AM

    February 14, 2019

    S01E23 – 2/3 do cluster de tiagos

    Podcast Ubuntu Portugal

    In this episode we invited Tiago Carreira, thereby assembling 2/3 of the cluster of Tiagos present at FOSDEM 2019, to talk about his experience at FOSDEM, but above all to tell us how Config Management Camp in Ghent went. You know the drill: listen, subscribe, and share!

    • https://seclists.org/oss-sec/2019/q1/119
    • https://brauner.github.io/2019/02/12/privileged-containers.html
    • https://bugs.launchpad.net/ubuntu/xenial/+source/pciutils/+bug/1815212
    • https://cfgmgmtcamp.eu/
    • https://sintra2019.ubucon.org/
    • https://www.openstack.org/coa


    This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing, and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

    Attribution and licences

    Cover image: prilfish, licensed under CC BY 2.0.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License, the full text of which can be read here.

    This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing other types of use; contact us for validation and authorization.

    on February 14, 2019 11:32 PM

    G+ Takeout

    Jonathan Riddell

    Google+ does rather kill off the notion I had of Google as a highly efficient company that always produces top-quality work.  Even using the Takeout website to download the content from Google+ I found a number of obvious bugs and poor features.  But I did get my photos in the end, so for old times’ sake here’s a random selection.

    A marketing campaign that failed to take off

    Sprints in Munich thanks to the city council’s KDE deployment were always fun.

    Launching KDE neon with some pics of my office and local castle.

    One day I took a trip with Nim to Wales and woke up somewhere suspiciously like the Shire from Lord of the Rings.

    KDE neon means business

    Time to go surfing. This ended up as a music video.

    That’s about it.  Cheerio Google+, I’ve removed you from www.kde.org; one social media platform too many for this small world.

    on February 14, 2019 04:50 PM

    February 13, 2019

    I wanted to drop a quick note to you all that I have written a new Forbes article called Six Hallmarks of Successful Crowdfunding Campaigns.

    From the piece:

    While the newness of crowdfunding may have worn off, this popular way to raise funds has continued to spark interest, especially for entrepreneurs and startups.  For some, it is a panacea: a way to raise funds quickly and easily, with an extra dose of marketing and awareness thrown in. Sadly, the reality of what is needed to make a crowdfunding campaign a success is often missing in all the excitement.

    I have some experience with crowdfunding from a few different campaigns. Back in 2013 I helped shape one of the largest crowdfunding campaigns at the time, the Ubuntu Edge, which had a $32 million goal and ended up raising $12,814,216. While it didn’t hit the mark, the campaign set records for the funds raised. My second campaign was for the Global Learning XPRIZE, which had a $500,000 goal; we raised $942,223. Finally, I helped advise ZBiotics with their $25,000 campaign, and they raised $52,732.

    Today I want to share some lessons learned along the way with each of these campaigns. Here are six considerations you should weave into your crowdfunding strategy…

    In it I cover these six key principles:

    1. Your campaign is a cycle: plan it out
    2. Your pitch needs to be short, sharp, and clear on the value
    3. Focus on perks people want (and try to limit shipping)
    4. Testimonials and validation build confidence
    5. Content is king (and marketing is queen)
    6. Incentivize your audience to help

    You can read the piece by clicking here.

    You may also want to see some of my other articles that relate to the different elements of doing crowdfunding well:

    Good luck with your crowdfunding initiatives and let me know how you get on!

    The post Forbes Piece: Six Hallmarks of Successful Crowdfunding Campaigns appeared first on Jono Bacon.

    on February 13, 2019 11:17 PM

    Encrypt all the things

    Dimitri John Ledkov

    xkcd #538: Security
    Went into Blogger settings and enabled TLS on my custom-domain Blogger blog. So it is now finally https://blog.surgut.co.uk. However, I do use FeedBurner and syndicate that to the planet. I am not sure if that gives end-to-end TLS connections, thus I will look into removing FeedBurner between my blog and the Ubuntu/Debian planets. My experience with changing feeds in the planets is that I end up spamming everyone. I wonder if I should make a new tag, and add both feeds to the planet config, to avoid spamming old posts.

    Next up, went into the Gandi LiveDNS platform and enabled DNSSEC on my domain. It propagated quite quickly, and I believe my domain is now correctly signed with DNSSEC. Next up, I guess, is to fix DNSSEC with captive portals. What we really want on "wifi"-like devices is to first connect to wifi without setting it as the default route, perform the captive portal check, potentially with reduced DNS server capabilities (i.e. no EDNS, no DNSSEC, etc.), and only route traffic to the captive portal to authenticate. Once past the captive portal, test and upgrade connectivity to have DNSSEC on. In the cloud, and on wired connections, I'd expect that DNSSEC should just work, and if it does we should be enforcing DNSSEC validation by default.

    So I'll start enforcing DNSSEC on my laptop, I think, and will start reporting issues to all of the UK banks that dare not to have DNSSEC. If I managed to do it on my own domain, so should they!

    Now I need to publish CAA Records to indicate that my sites are supposed to be protected by Let's Encrypt certificates only, to prevent anybody else issuing certificates for my sites and clients trusting them.
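    For reference, a CAA record restricting issuance to Let's Encrypt looks roughly like this in zone-file syntax (the domain name and iodef address are placeholders, not mine):

    ```
    example.com.  IN  CAA  0 issue "letsencrypt.org"
    example.com.  IN  CAA  0 iodef "mailto:security@example.com"
    ```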

    I think I want to publish SSHFP records for the servers I care about, such that I could potentially use those to trust the fingerprints. Also at the FOSDEM getdns talk it was mentioned that openssh might not be verifying these by default and/or need additional settings pointing at the anchor. Will need to dig into that, to see if I need to modify something about this. It did sound odd.
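    A sketch of the SSHFP workflow (hostname is a placeholder): ssh-keygen can print the records straight from the host keys, and the client has to be told to consult them:

    ```shell
    # On the server (or with its host keys at hand): print SSHFP RRs
    # ready to paste into the zone
    ssh-keygen -r host.example.com

    # In ~/.ssh/config, ask the client to verify fingerprints via DNS:
    #   Host *.example.com
    #       VerifyHostKeyDNS yes
    ```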

    Generated 4k RSA subkeys for my main key. Previously I was using 2k RSA keys, but since I got a new yubikey that supports 4k keys I am upgrading to that. I use the yubikey's OpenPGP applet for my signing, encryption, and authentication subkeys - meaning for ssh too. I had to remember to use `gpg --with-keygrip -k` to add the right "keygrip" to the `~/.gnupg/sshcontrol` file to get the new subkey available in the ssh agent. It also seems like the order of keygrips in the sshcontrol file matters. Updating the new ssh key in all the places is not fun; I think I did github, salsa and launchpad at the moment, but still need to push the keys onto many of the installed systems.
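    The keygrip dance looks roughly like this (the keygrip value is a placeholder; yours comes from the gpg listing):

    ```shell
    # List keys together with their keygrips
    gpg --with-keygrip -k

    # Append the keygrip of the authentication subkey to sshcontrol
    # so gpg-agent exposes that key via the ssh agent protocol
    echo "KEYGRIP_OF_AUTH_SUBKEY" >> ~/.gnupg/sshcontrol

    # Verify the key is now visible to ssh
    ssh-add -L
    ```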

    Tried to use FIDO2 passwordless login for Windows 10, only to find out that my Dell XPS appears to be incompatible with it, as it seems that my laptop does not have a TPM. Oh well, I guess I need to upgrade my laptop to have a TPM2 chip such that I can have self-unlocking encrypted drives, an OTP token displayed on boot, and the like, as was presented at this FOSDEM talk.

    Now that cryptsetup 2.1.0 is out and is in Debian and Ubuntu, I guess it's time to reinstall and re-encrypt my laptop, to migrate from LUKS1 to LUKS2. It has a bigger header, so obviously so much better!

    Changing phone soon, so will need to regenerate all of the OTP tokens. *sigh* Does anyone backup all the QR codes for them, to quickly re-enroll all the things?

    BTW I gave a talk about systemd-resolved at FOSDEM. People didn't like that we do not enable/enforce DNS over TLS, or DNS over HTTPS, or DNSSEC by default. At least, people seemed happy about not leaking queries. But not happy again about caching.

    I feel safe.

    ps. funny how xkcd uses 2k RSA, not 4k.
    on February 13, 2019 11:09 PM

    February 11, 2019

    linux.conf.au 2019

    Robert Ancell

    Along with a number of other Canonical staff I recently attended linux.conf.au 2019 in Christchurch, New Zealand. I consider this the major Australia/New Zealand yearly conference that covers general open source development. This year the theme of the conference was "Linux of Things" and many of the talks had an IoT connection.

    One of the premium swag items was a Raspberry Pi Zero. It is unfortunate that this is not a supported Ubuntu Core device (CPU a generation too old) as this would have been a great opportunity to show an Ubuntu Core device in action. I did prepare a lightning talk showing some Ubuntu Core development on a Raspberry Pi 3, but this sadly didn't make the cut. You can see it in blog form.

    LCA consistently has high quality talks, so choosing what to attend is hard. Mostly everything was recorded and is viewable on their YouTube channel. Here are some highlights that I saw:

    STM32 Development Boards (literally) Falling From The Sky (video) - This talk was about tracking and re-purposing hardware from weather balloons. I found it interesting as it made me think about the amount of e-waste that is likely to be generated as IoT increases and ways in that it can be re-cycled, particularly with open source software.

    Plastic is Forever: Designing Tomu's Injection-Molded Case (video) and SymbiFlow - The next generation FOSS FPGA toolchain (video) - FPGA development is something that has really struggled to break into the mainstream. I think this is mostly down to two things - the lack of a quality open source toolchain and cheap hardware. These talks make it seem like we're getting really close, with the SymbiFlow toolchain and hardware like the Fomu. I think we'll get some really interesting new developments when we get something close to the Raspberry Pi/Arduino experience, and I'm looking forward to writing some code in the FPGA and IoT space, hopefully soon!

    The Tragedy of systemd (video) - It's the conflict that just keeps giving 😭 Benno talked about how, regardless of how systemd came to exist, modern middleware is valuable. I had thought the majority had come to this conclusion, but it seems this is still an idea that needs selling. I think the talk was effective in doing that.

    Sequencing DNA with Linux Cores and Nanopores (video) - This was a live (!) demonstration of DNA sequencing performed on the speaker's lunch. This was done using the MinION - a USB DNA sequencer. As well as completing the task, what impressed me was that this was done on a laptop and no special software was required. Given that this device costs around $1000 and is easy to use, it opens up DNA analysis to the open source world.

    Around the world in 80 Microamps - ESP32 and LoRa for low-power IoT (video) - This discussed real-world cases of building IoT / automation solutions on battery power (e.g. where solar is not suitable). It covered how it's very hard to run a Linux-based solution for a long time on a battery, but technology is slowly improving. Turns out the popularity of e-scooters is making bigger and cheaper batteries available.

    Christchurch has recently started trialing Lime scooters. These were super popular with a hacker crowd and quickly accumulated around the venue. I planned to scooter from the airport to the venue but sadly that day there weren't any nearby, so I walked half way and scootered the rest. They're super fun and useful so I recommend you try them if you are visiting a city that has them. 🙂

    on February 11, 2019 10:47 PM

    February 10, 2019

    I love Gitlab. I have written about it, I contribute (sporadically) with some code and I am a big fan of their CI/CD system (ask my colleagues!). Still, they need to improve on their mobile side.

    I travel often, and being able to work on issues and pipelines on the go is essential for me. Unfortunately, Gitlab’s UX on small screens is far from ideal (though it has improved over the years).

    Enter Glasnost

    My good friend Giovanni has developed a new opensource mobile client for Gitlab, with a lot of cool features: Glasnost!

    glasnost logo

    In his words:

    Glasnost is a free, currently maintained, platform independent and opensource mobile application that is aiming to visualize and edit the most important entities handled by Gitlab.

    Among other features, I’d like to highlight support for multiple Gitlab hosts (so you can work both on your company’s Gitlab and on Gitlab.com at the same time), two different themes (a light one and a dark one), a lite version for when your data connection is stuck on EDGE, and support for fingerprint authentication.

    The application is still in an early phase of development, but it already has enough features to be used daily. I am sure Giovanni would love some feedback and suggestions, so please go to Glasnost’s issue tracker or leave feedback on the PlayStore.

    If you feel a bit more adventurous, you can contribute to the application itself: it is written in React+Redux with Expo: the code is hosted on Gitlab (of course).

    Enjoy yet another client for Gitlab, and let Giovanni know what you think!

    playstore logo

    For any comment, feedback, critic, write me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.


    on February 10, 2019 06:45 PM

    February 09, 2019

    A very common question that comes up on IRC or elsewhere by people trying to use the gtk-rs GTK bindings in Rust is how to modify UI state, or more specifically GTK widgets, from another thread.

    Due to GTK only allowing access to its UI state from the main thread and Rust actually enforcing this, unlike other languages, this is less trivial than one might expect. To make this as painless as possible, while also encouraging a more robust threading architecture based on message-passing instead of shared state, I’ve added some new API to the glib-rs bindings: An MPSC (multi-producer/single-consumer) channel very similar to (and based on) the one in the standard library but integrated with the GLib/GTK main loop.

    While I’ll mostly write about this in the context of GTK here, this can also be useful in other cases when working with a GLib main loop/context from Rust to have a more structured means of communication between different threads than shared mutable state.

    This will be part of the next release and you can find some example code making use of this at the very end. But first I’ll take this opportunity to also explain why it’s not so trivial in Rust first and also explain another solution.

    Table of Contents

    1. The Problem
    2. One Solution: Safely working around the type system
    3. A better solution: Message passing via channels

    The Problem

    Let’s consider the example of an application that has to perform a complicated operation and would like to do this from another thread (as it should to not block the UI!) and in the end report back the result to the user. For demonstration purposes let’s take a thread that simply sleeps for a while and then wants to update a label in the UI with a new value.

    Naively we might start with code like the following

    let label = gtk::Label::new("not finished");
    // Clone the label so we can also have it available in our thread.
    // Note that this behaves like an Rc and only increases the
    // reference count.
    let label_clone = label.clone();
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));

        label_clone.set_text("finished");
    });
    This does not compile and the compiler tells us (between a wall of text containing all the details) that the label simply can’t be sent safely between threads. Which is absolutely correct.

    error[E0277]: `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
      --> src/main.rs:28:5
    28 |     thread::spawn(move || {
       |     ^^^^^^^^^^^^^ `std::ptr::NonNull<gobject_sys::GObject>` cannot be sent between threads safely
       = help: within `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`, the trait `std::marker::Send` is not implemented for `std::ptr::NonNull<gobject_sys::GObject>`
       = note: required because it appears within the type `glib::shared::Shared<gobject_sys::GObject, glib::object::MemoryManager>`
       = note: required because it appears within the type `glib::object::ObjectRef`
       = note: required because it appears within the type `gtk::Label`
       = note: required because it appears within the type `[closure@src/bin/basic.rs:28:19: 31:6 label_clone:gtk::Label]`
       = note: required by `std::thread::spawn`

    In C, for example, this would not be a problem at all: the compiler does not know that GTK widgets (and generally all GTK API) are only safely usable from the main thread, and would happily compile the above. It would then be our (the programmer’s) job to ensure that nothing is ever done with the widget from the other thread, other than passing it around. Among other things, it must also not be destroyed from that other thread (i.e. the other thread must never hold the last reference to it and then drop it).

    One Solution: Safely working around the type system

    So why don’t we do the same as we would do in C and simply pass around raw pointers to the label and do all the memory management ourselves? Well, that would defeat one of the purposes of using Rust and would require quite some unsafe code.

    We can do better than that and work around Rust’s type system with regards to thread-safety, and instead let the relevant checks (are we only ever using the label from the main thread?) be done at runtime. This allows for completely safe code; it might just panic at any time if we accidentally try to do something from the wrong thread (like calling a function on the label, or dropping it) rather than just passing it around.

    The fragile crate provides a type called Fragile for exactly this purpose. It’s a wrapper type like Box, RefCell, Rc, etc. but it allows for any contained type to be safely sent between threads and on access does runtime checks if this is done correctly. In our example this would look like this

    let label = gtk::Label::new("not finished");
    // We wrap the label clone in the Fragile type here
    // and move that into the new thread instead.
    let label_clone = fragile::Fragile::new(label.clone());
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));

        // To access the contained value, get() has
        // to be called and this is where the runtime
        // checks are happening
        label_clone.get().set_text("finished");
    });

    Not many changes to the code and it compiles… but at runtime we of course get a panic because we’re accessing the label from the wrong thread

    thread '<unnamed>' panicked at 'trying to access wrapped value in fragile container from incorrect thread.', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/fragile-0.3.0/src/fragile.rs:57:13

    What we instead need to do here is to somehow defer the change of the label to the main thread, and GLib provides various API for doing exactly that. We’ll make use of the first one here but it’s mostly a matter of taste (and trait bounds: the former takes a FnOnce closure while the latter can be called multiple times and only takes FnMut because of that).

    let label = gtk::Label::new("not finished");
    // We wrap the label clone in the Fragile type here
    // and move that into the new thread instead.
    let label_clone = fragile::Fragile::new(label.clone());
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));

        // Defer the label update to the main thread.
        // For this we get the default main context,
        // the one used by GTK on the main thread,
        // and use invoke() on it. The closure also
        // takes ownership of the label_clone and drops
        // it at the end. From the correct thread!
        glib::MainContext::default().invoke(move || {
            label_clone.get().set_text("finished");
        });
    });

    So far so good, this compiles and actually works too. But it feels kind of fragile, and that’s not only because of the name of the crate we use here. The label passed around in different threads is like a landmine only waiting to explode when we use it in the wrong way.

    It’s also not very nice because now we conceptually share mutable state between different threads, which is the underlying cause of many thread-safety issues and generally increases the complexity of the software considerably.

    Let’s try to do better, Rust is all about fearless concurrency after all.

    A better solution: Message passing via channels

    As the title of this post probably made clear, the better solution is to use channels to do message passing. That’s also a pattern that is generally preferred in many other languages that focus a lot on concurrency, ranging from Erlang to Go, and is also the recommended way of doing this according to the Rust Book.

    So what would this look like? First of all, we have to create a Channel for communicating with our main thread.

    As the main thread is running a GLib main loop with its corresponding main context (the loop is the thing that actually is… a loop, and the context is what keeps track of all potential event sources the loop has to handle), we can’t make use of the standard library’s MPSC channel: the Receiver would either block or have to be polled at intervals, which is rather inefficient.

    The futures MPSC channel doesn’t have this problem but requires a futures executor to run on the thread where we want to handle the messages. While the GLib main context also implements a futures executor and we could actually use it, this would pull in the futures crate and all its dependencies and might seem like too much if we only ever use it for message passing anyway. Otherwise, if you use futures also for other parts of your code, go ahead and use the futures MPSC channel instead. It basically works the same as what follows.

    For creating a GLib main context channel, there are two functions available: glib::MainContext::channel() and glib::MainContext::sync_channel(). The latter takes a bound for the channel, after which sending to the Sender part will block until there is space in the channel again. Both return a tuple containing the Sender and Receiver for this channel, and the Sender in particular works exactly like the one from the standard library. It can be cloned, sent to different threads (as long as the message type of the channel can be) and provides basically the same API.
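    As a point of comparison, here is that Sender behaviour demonstrated with the standard library’s MPSC channel alone (no GLib involved; the GLib Sender works the same way from the sending side, only the receiving side differs):

    ```rust
    use std::sync::mpsc;
    use std::thread;

    // Spawn several threads, each owning a clone of the Sender,
    // and collect everything they send on the receiving side.
    fn collect_from_threads() -> Vec<i32> {
        let (sender, receiver) = mpsc::channel();

        for i in 0..3 {
            // The Sender can be cloned and moved into any thread,
            // as long as the message type implements Send.
            let sender = sender.clone();
            thread::spawn(move || {
                // Sending only fails if the receiver was dropped.
                let _ = sender.send(i);
            });
        }
        // Drop the original sender so the receiver knows when
        // all senders are gone and the iterator below terminates.
        drop(sender);

        let mut received: Vec<i32> = receiver.iter().collect();
        received.sort();
        received
    }

    fn main() {
        // Arrival order varies between runs; the values do not.
        println!("{:?}", collect_from_threads());
    }
    ```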

    The Receiver works a bit differently, and closer to the for_each() combinator on the futures Receiver. It provides an attach() function that attaches it to a specific main context and takes a closure that is called from that main context whenever an item is available.

    The other part that we need to define on our side is what the messages we send through the channel should look like. Usually some kind of enum with all the different kinds of messages you want to handle is a good choice; in our case it could also simply be () as we only have a single kind of message without a payload. But to make it more interesting, let’s add the new string of the label as payload to our messages.

    Here is how it could look, for example:

    enum Message {
        UpdateLabel(String),
    }

    let label = gtk::Label::new("not finished");

    // Create a new sender/receiver pair with default priority
    let (sender, receiver) = glib::MainContext::channel(glib::PRIORITY_DEFAULT);

    // Spawn the thread and move the sender in there
    thread::spawn(move || {
        // Let's sleep for 10s
        thread::sleep(time::Duration::from_secs(10));

        // Sending fails if the receiver is closed
        let _ = sender.send(Message::UpdateLabel(String::from("finished")));
    });

    // Attach the receiver to the default main context (None)
    // and on every message update the label accordingly.
    let label_clone = label.clone();
    receiver.attach(None, move |msg| {
        match msg {
            Message::UpdateLabel(text) => label_clone.set_text(text.as_str()),
        }

        // Returning false here would close the receiver
        // and have senders fail
        glib::Continue(true)
    });
    While this is a bit more code than the previous solution, it will also be easier to maintain and generally allows for clearer code.

    We keep all our GTK widgets inside the main thread now, threads only get access to a sender over which they can send messages to the main thread and the main thread handles these messages in whatever way it wants. There is no shared mutable state between the different threads here anymore, apart from the channel itself.

    on February 09, 2019 01:25 PM

    February 07, 2019

    KDE at FOSDEM 2019

    Jonathan Riddell

    February means FOSDEM, the largest gathering of free software developers in the continent. I drove for two days down the winding roads and even onto a train and out again to take the bits needed to run the stall there. Fortunately my canoeing friend Poppy was there for car karaoke and top Plasma dev David got picked up along the way to give us emotional support watching Black Mirror Bandersnatch with its multiple endings.

    The beer flowed freely at Delerium but disaster(!) the venue for Saturday did not exist!  So I did some hasty scouting to find a new one before returning for more beer.

    Rather than place us next to Gnome, the organisers put us next to our bestie friends Nextcloud, which was nice, and after some setup the people came and kept on coming.  Saturday was non-stop on the stall, but fortunately we had a good number of volunteers to talk to our fans and future fans.

    Come Home to KDE in 2019 was the theme.  You’ve been distro hopping.  Maybe bought a macbook because you got bored of the faff with Linux. But now it’s time to re-evaluate.  KDE Plasma is lightweight, full-featured, simple and beautiful.  Our applications are world class.  Our integration with mobile via KDE Connect is unique and life changing.

    I didn’t go to many talks because I was mostly stuck on the stall but an interesting new spelling library nuspell looks like something we should add into our frameworks, and Tor is helping people evade governments and aiding the selling of the odd recreational drug too.


    At 08:30 not many helpers or punters about but the canoeists got the show going.

    In full flow on the Saturday Wolthera does a live drawing show of Krita while Boud is on hand for queries and selfies.

    The Saturday meal after a quick change of venue was a success where we were joined by our friends Nextcloud and the Lawyers of Freedom.

    Staying until the following day turns out to allow a good Sunday evening to actually chat and discuss the merits of KDE, the universe and everything.  With waffles.

    on February 07, 2019 03:37 PM

    S01E22 – Geeks aos molhos

    Podcast Ubuntu Portugal

    In this episode we talk about FOSDEM and our experience there this year. If you went, share your experience; if you didn’t, careful: you’ll come away wanting to go next year. You know the drill: listen, subscribe, and share!

    • https://fosdem.org/2019/
    • https://fosdem.org/2019/schedule/event/keynotes_welcome/
    • https://fosdem.org/2019/schedule/event/keynote_fifty_years_unix/
    • https://fosdem.org/2019/schedule/event/matrix_french_state/
    • https://fosdem.org/2019/schedule/event/dns_over_http/
    • https://fosdem.org/2019/schedule/event/dns_privacy_panel/
    • https://fosdem.org/2019/schedule/event/containers_lxd_update/
    • https://fosdem.org/2019/schedule/event/crostini/
    • https://fosdem.org/2019/schedule/event/full_software_freedom/
    • https://fosdem.org/2019/schedule/event/behind_snapcraft/
    • https://fosdem.org/2019/schedule/event/nextcloud/
    • https://volunteers.fosdem.org/


    This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound recording, production, editing, mixing, and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

    Attribution and licences

    Cover image: Georgia Aquarium, licensed under CC BY 2.5.

    The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License, the full text of which can be read here.

    This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) licence, the full text of which can be read here. We are open to licensing other types of use; contact us for validation and authorization.

    on February 07, 2019 02:41 PM

    February 06, 2019

    What’s the OOPS ID?

    Brian Murray

    The other day gnumeric crashed on me and, like a good Ubuntu user, I submitted the crash report to the Ubuntu Error Tracker. Naturally, I also wanted to see the crash report in the Error Tracker and find out whether other people had experienced the crash. It used to be an ordeal to find the OOPS ID associated with a specific crash: you’d have to read multiple lines of the systemd journal using ‘journalctl -u whoopsie.service’ and pick out the right OOPS ID for the crash you are interested in.

    $ journalctl -u whoopsie.service
    -- Logs begin at Fri 2019-02-01 09:36:47 PST, end at Wed 2019-02-06 08:41:02 PST. --
    Feb 02 07:08:46 impulse whoopsie[4358]: [07:08:46] Parsing /var/crash/_usr_bin_gnumeric.1000.crash.
    Feb 02 07:08:46 impulse whoopsie[4358]: [07:08:46] Uploading /var/crash/_usr_bin_gnumeric.1000.crash.
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Sent; server replied with: No error
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Response code: 200
    Feb 02 07:08:48 impulse whoopsie[4358]: [07:08:48] Reported OOPS ID 7120987c-26fc-11e9-9efd-fa163ee63de6
    Feb 02 07:11:11 impulse whoopsie[4358]: [07:11:11] Sent; server replied with: No error
    Feb 02 07:11:11 impulse whoopsie[4358]: [07:11:11] Response code: 200

    However, I recently made a change to whoopsie to write the OOPS ID to the corresponding .uploaded file in /var/crash. So now I can just read the .uploaded file to find the OOPS ID.

    $ sudo cat /var/crash/_usr_bin_gnumeric.1000.uploaded

    This is currently only available in Disco Dingo, which will become the 19.04 release of Ubuntu, but if you are interested in having it in another release let me know or update the bug.

    on February 06, 2019 05:11 PM

    February 05, 2019

    Snapcraft 3.1

    Sergio Schvezov

    snapcraft 3.1 is now available on the stable channel of the Snap Store. This is a new minor release building on top of the foundations laid out by the snapcraft 3.0 release. If you are already on the stable channel for snapcraft then all you need to do is wait for the snap to be refreshed. The full release notes are replicated below.

    Build Environments

    It is now possible, when using the base keyword, to once again clean parts.
    on February 05, 2019 05:48 PM

    February 03, 2019

    In this new programme, the second of the third season of Ubuntu y Otras Hierbas, Francisco Molinero and Javier Teruelo, with the invaluable collaboration of Lidia Montero and José Manuel Blanco, delve into the complex relationships between vocational training and free software, and their potential career paths and futures (and crack open a surprise topic or two).
    Will this turn out to be the Next Generation?

    The podcast is available to listen to at:
    on February 03, 2019 01:15 PM

    February 02, 2019

    Knot Boards

    Ted Gould

    Each year Cub Scouts has a birthday party for Scouting in February, which is called the Blue & Gold banquet. We have a tradition at the banquet where we thank all of our volunteers who help make the Cub Scout Pack run. For the Den Leaders, who are so critical to the program, I like to do something special that helps them run a better program for the scouts. For 2018 (notice I'm a little behind) I decided to make knot boards for all of the Den Leaders in our Pack.

    SVG file for the knotboard design

    When I was a Scout I remember my mom making knot boards. Back then we had a piece of paper with the various knots that was varnished onto a piece of plywood, which had a rope attached to it. High technology for the time, but today I'm a member of TheLab.ms makerspace and have access to a laser cutter. While these knot boards are the same in spirit, we can do some very cool things with big toys.

    Laser actively cutting boards with a cool sparkle

    The first step is to pull out Inkscape and design the graphics. I grabbed a rope border from Open Clipart and some knot graphics from a Scouting PDF (which I can't find a link to). I put those together to create the basic design, along with labels for the knots. I also added a place for each Scout to sign their name as a thank-you to the Den Leader. I then made some small circles for the laser cutter to cut out holes for the ropes. I made a long oblong region on the right so the board would have a handle and a post to tie the hitches around. Lastly, I added the outline to cut out the board.

    To get the design into the laser cutter I exported it from Inkscape as two graphics. I exported the cut lines as a DXF and the etching as a 300 DPI PNG. The cut lines were simpler, and the laser cutter software was able to handle those and create simple controls for the cutter. The knots, on the other hand, were more complex vector objects that the laser cutter software couldn't handle. Inkscape could, so I had it do the rendering to a bitmap. The laser cutter can then set up scans that use the bitmap data, which worked very well.

    For the boards I used ¼" Lauan plywood, which I was able to get in 2'x2' sheets at Lowe's. Those sheets have a nice grain on both sides. I also liked being able to get sheets that were exactly the size I needed to fit into the laser cutter. Saved me a step. I'm certain the knot boards would be great in many other woods and other materials.

    Cut knot boards sitting in the bed of the laser cutter

    After cutting out the knot boards I needed short lengths of rope to insert into the holes. I couldn't find anywhere that would sell me short pieces of rope. I felt like I needed a Monty Python sketch. To make short lengths of the paracord I looped it in a circle with a circumference equal to the length I needed. Then I took a blowtorch and cut the circle, which also sealed the ends of the paracord.

    Final knot boards with ropes in the holes

    on February 02, 2019 12:00 AM

    February 01, 2019

    Some time ago we started alerting publishers when their stage-packages received a security update since the last time they built a snap. We wanted to create the right balance for the alerts and so the service currently will only alert you when there are new security updates against your stage-packages. In this manner, you can choose not to rebuild your snap (eg, since it doesn’t use the affected functionality of the vulnerable package) and not be nagged every day that you are out of date.

    As nice as that is, sometimes you want to check these things yourself or perhaps hook the alerts into some form of automation or tool. While the review-tools had all of the pieces so you could do this, it wasn’t as straightforward as it could be. Now with the latest stable revision of the review-tools, this is easy:

    $ sudo snap install review-tools
    $ review-tools.check-notices ~/snap/review-tools/common/review-tools_656.snap
    {'review-tools': {'656': {'libapt-inst2.0': ['3863-1'],
                              'libapt-pkg5.0': ['3863-1'],
                              'libssl1.0.0': ['3840-1'],
                              'openssl': ['3840-1'],
                              'python3-lxml': ['3841-1']}}}

    The review-tools are a strict mode snap and while it plugs the home interface, that is only for convenience, so I typically disconnect the interface and put things in its SNAP_USER_COMMON directory, like I did above.

    Since it is now super easy to check a snap on disk, with a little scripting and a cron job you can generate a machine-readable report whenever you want. E.g., you can do something like the following:

    $ cat ~/bin/check-snaps
    #!/bin/sh
    set -e
    snaps="review-tools/stable rsync-jdstrand/edge"
    tmpdir=$(mktemp -d -p "$HOME/snap/review-tools/common")
    cleanup() {
        rm -fr "$tmpdir"
    }
    trap cleanup EXIT HUP INT QUIT TERM
    cd "$tmpdir" || exit 1
    for i in $snaps ; do
        snap=$(echo "$i" | cut -d '/' -f 1)
        channel=$(echo "$i" | cut -d '/' -f 2)
        snap download "$snap" "--$channel" >/dev/null
    done
    cd - >/dev/null || exit 1
    /snap/bin/review-tools.check-notices "$tmpdir"/*.snap

    or, if you already have the snaps on disk somewhere, just do:

    $ /snap/bin/review-tools.check-notices /path/to/snaps/*.snap

    Now you can add the above to cron or some automation tool as a reminder of what needs updates. Enjoy!
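    For instance, a crontab entry along these lines (the schedule and script path are just an example) would run the check weekly, with cron mailing you any output:

```
# m h dom mon dow  command — run the snap security-notice check every Monday
0 6 * * 1 $HOME/bin/check-snaps
```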

    on February 01, 2019 08:53 PM

    January 31, 2019

    S11E99 – Listener Get Together

    Ubuntu Podcast from the UK LoCo

    We’re having a Get Together in Reading, UK on Saturday March 16th 2019. The exact venue is not decided yet, but will be in Reading town centre.

    We’d like to gauge how many people might come, so please sign in and mark yourself as wanting to come.

    It’s Season 11 Episode 99 of the Ubuntu Podcast! Mark Johnson is connected and speaking to your brain.

    Back in November 2018 we conducted a poll on Twitter to see what kind of social event our listeners were interested in. Overwhelmingly (and perhaps unsurprisingly) you chose a Pub/Restaurant meet, so that’s what we’re doing.

    It’ll be a relaxed social event with the presenters having a beer or fruit-based drink, and maybe some food to be decided later.

    Do let us know if you’d like to come by marking yourself attending here.

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    on January 31, 2019 03:00 PM

    I used to think monthly logs would be too much effort, but I decided to give it a go and it ended up being easy and very non-intrusive to my workflow. The picture above was taken at Rhodes Memorial while riding my bike around Cape Town.

    2019-01-01 Published Debian Package of the Day #59: bastet (highvoltage.tv / YouTube)

    2019-01-01: Start working on updating Planet Debian policy

    2019-01-02: Read 191/349 pages of Python Interviews – Discussions with Python experts

    2019-01-03: Update Planet Debian policy, make it live and announce it

    2019-01-03: Continued discussion of xfce-screensaver‘s future in Debian (ITP: #911115)

    2019-01-03: Sponsor package: camlimages (4.2.6-1) (mentors.debian.net), grant dm upload rights for uploader

    2019-01-03: Various discussions regarding DebConf20 bids, following Lisbon recon team live updates

    2019-01-04: Finish reading Python Interviews – Discussions with Python experts

    2019-01-04: Adopt aalib package, upload new package with vcs now on salsa.debian.org

    2019-01-04: Upload new upstream version of toot (0.20.0-1) to debian unstable

    2019-01-04: Upload new upstream version of bundlewrap (3.5.3-1) to debian unstable

    2019-01-04: Upload new upstream version of flask-restful (0.3.7-1) to debian unstable

    2019-01-04: Upload new upstream version of gnome-shell-extension-dash-to-panel (17-1) to debian unstable

    2019-01-04: Sponsor package: pass-otp (1.2.0-1) (e-mail request)

    2019-01-04: Troubleshoot pythonqt, give up in frustration and take a break for the weekend instead

    2019-01-07: Backport firmware-nonfree (20180825+dfsg-2~aimsppa1) for AIMS Desktop (mostly for RTL8237AU support)

    2019-01-07: Upload new package xfce4-screensaver (0.1.3-1) to debian experimental

    2019-01-08: Update calamares-settings-debian (10.0.15) with preliminary buster artwork and work around stale fstab file left behind by live-build and upload to debian unstable (10.0.15-1). Fix description typo that will be released with next upload (Closes: #918222)

    2019-01-09: Fix a whole bunch of AIMS Desktop build related problems and build updated stretch/buster images for testing.

    2019-01-10: Upload new upstream version of calamares (3.2.3-1) to debian unstable

    2019-01-10: Upload new upstream version of python aniso8601 (4.1.0-1) to debian unstable

    2019-01-10: Work on initial announcement for Buster artwork selection

    2019-01-13: Upload xfce4-screensaver (0.1.3-2) (Closes: #919151) (Whoops, accidental upload to unstable, file #919348 to prevent it migrating to testing)

    2019-01-11: Initial upload of fonts-quicksand (0.2016.1) to debian unstable (Closes: #918995)

    Calamares on Debian Live using new buster artwork

    2019-01-15: Test new calamares with buster artwork on debian-live weekly builds

    2019-01-16: Upload new upstream version of dash-to-panel gnome shell extention (18-1) to debian unstable

    2019-01-16: Release new calamares-settings-debian that fixes setups that contain swap partitions + full disk encryption (10.0.16), upload to debian unstable (10.0.16-1) which also fixes a typo (Closes: #918222)

    2019-01-17: Upload new live-installer (57) to remove calamares-settings-debian after live install from d-i

    2019-01-17: DebConf meeting with DC20 bid teams to ask questions from DebConf Committee and improve bid pages

    2019-01-18: Prepare artwork upload for debian-installer, upload rootskel-gtk (1.41) to debian unstable

    2019-01-18: Spent lots of time going through debian-installer code, started working on some skeleton concepts for some ideas that I have that I’ll publicly publish some other time

    2019-01-19: Started working on some proof-of-concept system installer code in python3, worked mostly on structure and module importing. More on this much later

    2019-01-20: Worked on keyboard, localisation and partitioning modules for a concept distro-installer

    2019-01-21: Uploaded fonts-quicksand (0.2016-2) with some minor fixes to correct lintian warnings

    2019-01-21: Upload calamares (3.2.3-2), add libpwquality-dev and remove patch to use sudo instead of pkexec (now that pkexec seems to work everywhere)

    2019-01-22: Sign gpg key of local Debianite who should soon apply for DD

    2019-01-22: Adopt and upload preload (0.6.4-3) (Closes: #646216)

    2019-01-22: Reviewed mentors.debian.net package siconos (4.2.0+git20181026.0ee5349+dfsg-1) (needs a little more work)

    2019-01-22: Review mentors.debian.net package libcxx-serial (1.2.1-1) (needs a little more work)

    2019-01-22: Sponsor package budgie-extras (0.7.0-1) (mentors.debian.net request) (Closes: #917724)

    2019-01-23: Sponsor package osmose-emulator (1.4-1) (mentors.debian.net request) (Closes: #918507)

    2019-01-23: Sponsor package dhcpcd5 (7.0.8-1) (mentors.debian.net request) (Closes: #914070)

    2019-01-23: Review mentors.debian.net package pcapy (0.11.4-1) (needs some more work)

    2019-01-23: Sponsor package blastem (0.6.1-1) (mentors.debian.net request) (Closes: #919541)

    2019-01-23: Review mentors.debian.net package owlr (5.2.0-1) (needs some more work)

    2019-01-23: Upload preload (0.6.4-4) to debian experimental (Closes: #920197, #861937, #893374, #697071)

    2019-01-23: Fix some bootloader related stuff when using d-i on AIMS Desktop, add README entry in GRUB.

    2019-01-24: Sponsor package dhcpcd5 (7.0.8-2) (e-mail request)

    2019-01-24: Upload preload (0.6.4-5~exp1) to debian experimental (attempt to fix some niggles)

    2019-01-25: Start traveling from Cape Town to Brussels for Debconf video sprint and FOSDEM (thanks to Debian for covering travelling expenses)

    2019-01-29: Sponsor packages for DD with expired key: git-review (1.27.0-1), q-text-as-data (1.7.4+2018.12.21+git+28f776ed46-1), swift (2.19.0-4)

    Thanks to Linuxbe for providing the hackspace, drinks and kind hospitality for the DebConf video sprint!

    2019-01-28: Day 1 of DebConf video sprint. Work on tillytally (now ToeTally): start to unhardcode test cards, implement planned cards, and have an initial discussion about integrating with voctomix-core.

    ToeTally is a tally display system that will be mounted above DebConf video cameras, replacing the existing tally lights that work over serial. It should make it easier for the director to co-ordinate the stream. It's also a very early work in progress.

    2019-01-29: Learn about Raspberry Pi netbooting; figure out, tweak and build debian(ish) images using rpi23-gen-image (for later use on the video tally system)

    2019-01-30: Spent some time with Andy from video team properly speccing out Tally project, renamed from TillyTally to ToeTally.

    2019-01-31: Started working on cross-building scripts for building tally system (arm64) boot environment on x86 that will eventually be deployed using existing video team Ansible setup.

    on January 31, 2019 12:15 PM

    Rise and fall of libclamav

    Scott Kitterman

    Because I was bored and needed to procrastinate, I decided to look at the history of packages using libclamav over the last several releases. This is binary reverse-depends in main on i386:


    [Table lost in syndication: binary reverse-depends counts per libclamav soname across Debian releases.]

    I started working on clamav around Etch (in Ubuntu, so it’s not an exact match) and transitions were a blast back then. Every single soname bump needed significant sourceful changes. It killed quite a number of projects. Of the four we still have in Debian (dansguardian, havp, icap, and python-clamav) only the icap modules aren’t essentially dead upstream.

    I guess API stability counts for something if you want people to use your library.

    P.S. None of the people working on clamav today are the same as when we had 3 soname bumps in one release cycle.


    on January 31, 2019 06:36 AM

    January 30, 2019

    Chicken McNuggets

    Stuart Langridge

    Back in the old days, when things made sense, you could buy Chicken McNuggets in boxes of 6, 9, and 20¹. So if you were really hungry, you could buy, for example, 30: two boxes of 9 and two boxes of 6. If you weren’t that hungry you were a bit scuppered; there’s no combination of 6, 9, and 20 which adds up to, say, 14. What if you were spectacularly hungry, but also wanted to annoy the McDonald’s people? What’s the largest order of Chicken McNuggets which they could not fulfil?

    Well, that’s how old I am today. Happy birthday to me.
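    The question above can be brute-forced; a quick sketch (the bound of 100 is my arbitrary choice, but once six consecutive order sizes are reachable, everything larger is too):

```shell
#!/bin/sh
# Find the largest n that cannot be written as 6a + 9b + 20c (a,b,c >= 0).
largest=0
n=1
while [ "$n" -le 100 ]; do
    ok=0
    a=0
    while [ $((6 * a)) -le "$n" ]; do
        b=0
        while [ $((6 * a + 9 * b)) -le "$n" ]; do
            # Whatever is left over must be made up of boxes of 20
            rem=$((n - 6 * a - 9 * b))
            if [ $((rem % 20)) -eq 0 ]; then
                ok=1
            fi
            b=$((b + 1))
        done
        a=$((a + 1))
    done
    if [ "$ok" -eq 0 ]; then
        largest=$n
    fi
    n=$((n + 1))
done
echo "Largest impossible order: $largest"   # prints 43
```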

    Tomorrow I’m delivering my talk about The UX of Text at BrumPHP, and there may be a birthday drink or two afterwards. So if you’re in the area, do drop by.

    Best to not talk about politics right now. It was bad two years ago and it’s worse now. We’re currently in the teeth of Brexit. I thought this below from Jon Worth was a useful summary of what the next steps are, but this is no long-term thing; this shows what might happen in the next few days, which is as far out as can be planned. I have no idea what I’ll be thinking when writing my birthday post next year. I’m pretty worried.

    Right, back to work. I’d rather be planning a D&D campaign, but putting together a group to do that is harder than it looks.

    1. yes, yes, now you can get four in a Happy Meal, but that’s just daft. Who only wants four chicken nuggets?
    on January 30, 2019 04:26 PM

    January 29, 2019

    Upgraded system on my server

    Marcin Juszkiewicz

    My current server is a few years old. And it now runs plain Debian.


    I started using that server during my work at Canonical, so it got Ubuntu installed. According to the OVH panel it was the 13.04 release. Then 13.10, 14.04 and finally 16.04 landed. In pain. It took me two days to get it working again (mail issues).

    At that time I decided that it would not get any further Ubuntu updates. The plan was to upgrade to a proper Debian release. And Buster will get frozen soon…

    One day I took a list of installed packages and started an “ubuntu:xenial” container to test how much work such an upgrade would be. It turned out: not that much.

    Today I saw a post saying that PHP 7.1 is going into “security fixes only” mode. And I had 7.0 in use… So I decided that OK, this is the time.

    Let’s go with upgrade

    Logged in, added the Debian repository and APT keys, and started with the installation of the 4.19 kernel. And rebooted into it.

    The machine started without issues so I began the upgrade. I used aptitude as usual. There were 10-20 conflicts to solve and then package installation started.

    A few file conflicts were in the way, but APT handled most of them without issues. Two or three packages I had to take care of by hand.

    The next step was replacing the remaining Ubuntu packages with Debian ones. Or removing them completely. Easy, smooth work.

    Getting services running

    After copying the php-fpm config files from the 7.0 to the 7.3 release, my blog went online.

    Then some edits to the Courier auth daemon config files (adding “marker”) and mail started flowing in both directions. But if you got a message saying that my mail account was not found on the server, then send it again.

    Finally, a reboot to make sure that everything works. Fingers crossed, “reboot”. It came back online like always. No issues.

    Why Debian?

    Someone may ask why not Fedora or RHEL or CentOS? I work at Red Hat now, right?

    Yes, I do. But Debian is the operating system I know best, its tools and so on. Also, the upgrade was possible to do online. Otherwise I would have had to start with a reinstallation.

    Now I have only one machine running Ubuntu: my wife’s laptop. But it is a “no way” zone. It works for her and we have an agreement that I do not touch it. Unless requested.

    on January 29, 2019 08:29 AM

    January 28, 2019

    So I created a subreddit for interesting G+ refugees. Emphasis on interesting.

    Come and play. Be suave. Don’t be a dick.

    Edward Morbius

    Kee Hinckley

    Rugger Ducky

    Sarah Lester

    Ward A

    Tim S

    Matthew H

    Yoko F Thunders

    Dave Thompson

    Grumpy Cat

    catty _big

    Dan Ramos

    Di Cleverly

    and the many more I know I’ve forgotten because I need food. And invite your friends!

    on January 28, 2019 08:48 PM

    January 27, 2019

    Ubuntu Core 18 is out, and one of the features it packs is a set of snapd interfaces to access GPIO pins on the Raspberry Pi 2/3 from a fully confined snap. This enables one to just flash Ubuntu Core 18 onto a microSD card, boot, install a snap (which I author), connect a few interfaces and start controlling relays attached to a Raspberry Pi 2/3.

    If you don't have Ubuntu Core 18 already installed, you can see the install instructions here

    To get started (assuming you have Ubuntu Core 18 installed and have working ssh access to the Pi), you need to install a snap that exposes the said functionality over the local network:

    $ snap install pigpio

    The above command installs the pigpio server, which automatically starts in the background. The server can take as much as 30 seconds to start; you have been warned.

    We also need to allow the newly installed snap to access a few GPIO pins

    $ snap connect pigpio:gpio pi:bcm-gpio-4
    $ snap connect pigpio:gpio pi:bcm-gpio-5
    $ snap connect pigpio:gpio pi:bcm-gpio-6
    $ snap connect pigpio:gpio pi:bcm-gpio-12
    $ snap connect pigpio:gpio pi:bcm-gpio-13
    $ snap connect pigpio:gpio pi:bcm-gpio-17
    $ snap connect pigpio:gpio pi:bcm-gpio-18
    $ snap connect pigpio:gpio pi:bcm-gpio-19
    $ snap connect pigpio:gpio pi:bcm-gpio-20
    $ snap connect pigpio:gpio pi:bcm-gpio-21
    $ snap connect pigpio:gpio pi:bcm-gpio-22
    $ snap connect pigpio:gpio pi:bcm-gpio-23
    $ snap connect pigpio:gpio pi:bcm-gpio-24
    $ snap connect pigpio:gpio pi:bcm-gpio-26

    The above pin numbers might look strange, but if you read a bit about the Raspberry Pi 3's GPIO pinout, you will realize I only selected the "basic" pins. You are, however, free to connect all the GPIO pin interfaces.
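    The fourteen connect commands above can also be generated in a loop; a sketch that just echoes them so you can eyeball the list before running it:

```shell
#!/bin/sh
# Build the "snap connect" commands for the basic BCM pins listed above.
# Echoed rather than executed; pipe the output to sh to actually run them.
cmds=$(for pin in 4 5 6 12 13 17 18 19 20 21 22 23 24 26; do
    echo "snap connect pigpio:gpio pi:bcm-gpio-$pin"
done)
echo "$cmds"
```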

    The pigpio snap that we installed above exposes the GPIO functionality over the WAMP protocol and HTTP. The HTTP implementation is very basic and allows you to "turn on" and "turn off" a GPIO pin and to get the current state(s) of the pins.

    Note: the commands below assume you have httpie installed (snap install http).

    To get the state of all pins

        $ http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_states

    If we only want the state of a specific pin

        $ http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.get_state args:='[4]'

    To "turn on" a pin

        $ http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_on args:='[4]'

    To "turn off"

        $ http POST http://raspberry_pi_ip:5021/call procedure=io.crossbar.pigpio-wamp.turn_off args:='[4]'

    I am skipping the WAMP-based API here to keep this blog post short. I must add, though, that the WAMP implementation is much more powerful than the HTTP one, especially because it has event publishing: imagine multiple people controlling a single GPIO pin from different clients; we publish an event that can be subscribed to, ensuring all client apps stay in sync. I'll talk about this in a different blog post. In a later post, I will also talk about making the GPIO pins accessible over the internet.

    For me personally, I have a few projects for home and one for my co-working space that I plan to accomplish using this.

    The code lives on github

    on January 27, 2019 09:00 PM

    A Cease Fire Was Called

    Stephen Michael Kellat

    For the moment there has been a cease fire called in the budget impasse between the US President and the US Congress. A report in the Washington Post indicates that my employer took some operational damage from this. If negotiations over the next three weeks are not successful we'll be back in a bad situation yet again. There are no indications of any breakthroughs yet.

    I do want to thank every packager out there who makes Ubuntu happen. For as horribly as our computers operate at work, it would be great if we could switch them to Ubuntu. Unfortunately it looks like I would get RHEL if I dared ask to move to Linux but, then again, I'm not in IT procurement but front-line customer service to America's taxpayers.

    Erie Looking Productions is picking up a little as we finally got at least one inquiry. If we get enough inquiries, I might end up financially comfortable enough to walk away from the current job. I've been doing odd efforts buying T-Bills, T-Notes, and even picking up oddball stocks just to ensure I have a cushion of "unearned income" coming in. The term "unearned income" is a term of art from work, of course, relating to money received like royalty payments, interest payments, investment income, etc.

    And in case you were wondering, I was one of the people who got called back to work without pay. Eventually I'll see a back pay check. The biggest worry on the staff is that we'll be back in this situation again by February 15th when the cease fire is over. We're not necessarily pessimists but rather have seen this movie play out similarly before.

    The good news in all of this is that I finally converted one of my bzr repositories on Launchpad over to a git repository. The bzr repository started complaining about how big all the blobs were inside it. You can see the outcome here: https://code.launchpad.net/~skellat/+git/pelican-large/.

    on January 27, 2019 04:45 AM

    January 25, 2019

    binfmt-support 2.2.0

    Colin Watson

    I’ve released binfmt-support 2.2.0. These are the major changes since 2.1.8:

    • Remove support for the old procfs interface, which has been unused since Linux 2.4.13 and which caused trouble in environments where we can’t use modprobe. Thanks to Bastian Blank.
    • Sort formats by name in the output of update-binfmts --display.
    • Building binfmt-support now requires Autoconf >= 2.63.
    • Add a new --unimport action, which is the inverse of --import.
    • Don’t enable formats on import or disable them on unimport unless /proc/sys/fs/binfmt_misc is already mounted. This avoids causing cleanup problems in chroots.
    • --fix-binary yes is incompatible with detectors. Warn the user if they try to use both at once. Thanks to Stefan Agner.

    In the corresponding Debian upload (2.2.0-1), I’ve changed README.Debian to recommend using update-binfmts --unimport <name> in the prerm rather than a more complicated update-binfmts --package <package> --remove <name> <path> command. I don’t intend to push for existing packages to switch over to this before buster, though, since the stricter package relationships needed to arrange for a new enough version of binfmt-support to be present when the prerm runs would make the upgrade path more complicated, and it isn’t an urgent change.

    on January 25, 2019 11:21 AM

    January 24, 2019

    In my years working on the Ubuntu project I’ve seen quite a lot of bug reports about people encountering failures when trying to upgrade from one release of Ubuntu to another. The most common issue, in my opinion (I have no numbers), is a system having a PPA or other 3rd party provider of packages enabled and that archive having packages which cause a failure to calculate the upgrade.

    I’ve recently made some changes to ubuntu-release-upgrader which should improve this situation. The dist-upgrader has had support for an environment variable, RELEASE_UPGRADER_ALLOW_THIRD_PARTY, for quite a while, but it didn’t actually work because do-release-upgrade and check-new-release-gtk didn’t pass the variable to the dist-upgrader. This has now been resolved, and it actually helps with two things. One is keeping PPAs enabled during the release upgrade process; the other is better support for users who have their own mirror of the archive. For example, I mirror some releases of Ubuntu and when upgrading always have to respond to the dialog about using an internal mirror, saying yes to rewrite my sources.list file. But now if I use ‘RELEASE_UPGRADER_ALLOW_THIRD_PARTY=1 do-release-upgrade’ I won’t see that dialog!

    The other change to ubuntu-release-upgrader makes the dist-upgrader check to see if the package provider actually supports the release to which you are upgrading. As an example the team-xbmc Kodi PPA does not support Ubuntu 18.10. Without my recent change to ubuntu-release-upgrader the upgrade process would just cancel because the PPA didn’t have a release file, this seemed like a silly reason for the whole upgrade to quit! Now the release upgrader will disable the archive that doesn’t support the release to which the system is upgrading and continue to try and calculate the upgrade.

    Both of these options are available for upgrades from 18.10 to 19.04 and from 18.04 to 18.10 although 18.04 changes are still in -proposed. So if you have some PPAs enabled and want an easier upgrade process be sure to use the RELEASE_UPGRADER_ALLOW_THIRD_PARTY environment variable and feel free to let me know how it goes.

    on January 24, 2019 08:11 PM

    January 23, 2019

    Azazel Hanzaki

    Kurt von Finck

    Azazel Hanzaki

    Frederica Mussolini

    Best track on the album.

    Disagree? Fine. I’ll have corn and peanuts for dinner so my feces is ready for your consumption.

    on January 23, 2019 09:47 PM

    Are you using Kubuntu 18.10, our current Stable release? Or are you already running our daily development builds?

    We currently have Plasma 5.14.90 (Plasma 5.15 Beta)  available in our Beta PPA for Kubuntu 18.10 and in our daily Disco ISO images.

     First, upgrade your development release

    Update directly from Discover, or use the command line:

    sudo apt update && sudo apt full-upgrade -y

    And reboot. If you cannot reboot from the application launcher,

    systemctl reboot

    from the terminal.

    Add the PPA to Kubuntu 18.10 then upgrade

    sudo add-apt-repository ppa:kubuntu-ppa/beta
    sudo apt update && sudo apt full-upgrade -y

    Then reboot. If you cannot reboot from the application launcher,

    systemctl reboot

    from the terminal.

    Kubuntu is part of the KDE community, so this testing will benefit both Kubuntu as well as upstream KDE Plasma software, which is used by many other distributions too.

    • If you believe you might have found a packaging bug, a launchpad.net account is required to post testing feedback to the Kubuntu team.
    • If you believe you have found a bug in the underlying software, then bugs.kde.org is the best place to file your bug report.

    Please review the changelog.

    [Test Case]

    * General tests:
    – Does plasma desktop start as normal with no apparent regressions over 5.14.5?
    – General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend etc.

    * Specific tests:
    – Check the changelog:
    – Identify items with front/user facing changes capable of specific testing. e.g. “weather plasmoid fetches BBC weather data.”
    – Test the ‘fixed’ functionality.

    Testing involves some technical set up, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

    Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

    We need your help to get this important beta release in shape for Kubuntu 19.04 as well as added to our backports.

    Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

    on January 23, 2019 12:38 AM

    January 21, 2019

    The horror

    You happen to update your system (in my case, I use Tumbleweed or Gentoo) and there’s a new version of Perl; at some point comes the realization that you’re using local::lib, and the pain unfolds: a shell is started, and you find a dreaded:

    Cwd.c: loadable library and perl binaries are mismatched (got handshake key 0xdb00080, needed 0xdb80080)

    Which means that the module (Cwd in this case) is not compatible with the current version of perl installed on your system (because it’s an XS module): likely it was compiled for a previous version, leading to those binaries mismatching.
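    Only XS modules, which ship compiled .so files, can hit this mismatch; listing the shared objects under a local::lib shows what is affected. A sketch, with a temp tree standing in for a real ~/perl5/lib/perl5 (so it runs anywhere):

```shell
#!/bin/sh
# List compiled (XS) shared objects under a local::lib-style tree.
# Simulated: we create one stand-in .so (Cwd, the module from the error
# above) inside a temp directory instead of touching a real perl5 dir.
lib=$(mktemp -d)
mkdir -p "$lib/auto/Cwd"
touch "$lib/auto/Cwd/Cwd.so"   # stand-in for a real XS module

xs=$(find "$lib" -name '*.so')
echo "$xs"
rm -rf "$lib"
```

Against a real setup you would run the find over ~/perl5/lib/perl5 instead.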

    Don’t panic!

    In the past, I used to reinstall my full local::lib directory. However, after hanging out and asking a few questions in #toolchain on irc.perl.org, I got to write (or rather hack, an ugly hack) a quick perl script to walk the local::lib packages that were already installed, looking only at the specified directory. It worked well and gave me the list of what I needed so I could reinstall later. However, Grinnz pointed me to his perl-migrate-modules script, to which, after chatting a bit, he added the from switch, which allows people to reinstall all the modules that were present in an old local::lib directory:

    The light

    # Migrate modules from an old (inactive) local::lib directory
    $ perl-migrate-modules --from ~/perl5-old/lib/perl5 /usr/bin/perl

    Hope you find it useful :)

    on January 21, 2019 12:00 AM

    Here at SUSE we heavily use Open Build Service, and often while collaborating on a project (in my case, openQA) one has to add a new package as a dependency from time to time, or do a backport for an older SLE or openSUSE Leap release.

    It boils down to a few steps. In this case I wanted to change the project to build against SUSE:SLE-12-SPX:Update instead of SUSE:SLE-12-SPX:GM — a build target that receives updates, while the GM doesn’t. All this because I wanted to add openvswitch to the project, so that we could use new features in our openQA deployments.

    To do this, after setting up your OBS account, the process is:

    1- Branch your project
    2- Link your packages
    3- Modify the project metadata if needed
    4- Modify the project config if errors related to multiple choices appear (PostgreSQL will be there for sure!)
    5- Grab a cup of coffee/tea/water and do some reading while waiting for the build

    # Branch the project
    osc branch devel:openQA:SLE-12
    # Link the new package
    osc linkpac openSUSE:Factory openvswitch devel:openQA:SLE-12
    # More and more packages will say that their dependencies cannot be resolved;
    # this is where you might spend some time adding a bunch of dependencies :)
    osc linkpac openSUSE:Factory dpdk devel:openQA:SLE-12
    osc linkpac openSUSE:Factory python-six devel:openQA:SLE-12

    By this point you might get error messages in the web UI stating:

    $a_package: have choice for $conflicting_package needed by $a_package: $options

    For example, you might see postgresql-server there, with postgresql96-server and postgresql94-server as $options — you’ve got to choose your destiny!

    When you find this, it’s time to edit the project configuration:

    # Edit the project configuration
    osc meta prjconf devel:openQA:SLE-12 -e
    # An editor will open and you will be able to change stuff.
    # Remember that you need write permissions on the project!
    Prefer: postgresql96-devel
    Prefer: postgresql96-server
    Prefer: python-dateutil

    Modify the project metadata to use :Update instead of :GM, and change the architectures if you need to do so.

    # Same as before: an editor will open, and you will be able to edit stuff.
    # Note that project metadata is edited with `meta prj`, not `meta prjconf`.
    osc meta prj devel:openQA:SLE-12 -e
      <repository name="SLE_12_SP4">
        <path project="SUSE:SLE-12-SP4:Update" repository="standard"/>
      </repository>
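    For reference, the full document that osc opens here is the project meta XML; a minimal stanza carrying the :Update path and an explicit architecture might look like this (title/description left empty, and x86_64 chosen purely as an example):

    ```xml
    <project name="devel:openQA:SLE-12">
      <title/>
      <description/>
      <repository name="SLE_12_SP4">
        <path project="SUSE:SLE-12-SP4:Update" repository="standard"/>
        <arch>x86_64</arch>
      </repository>
    </project>
    ```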

    After this, a project rebuild will take place; sit back and do some more reading :)

    on January 21, 2019 12:00 AM

    January 19, 2019

    The Lubuntu community has grown exponentially since our switch to LXQt. With new users, contributors, and Lubuntu enthusiasts among many other people who have decided to join our community, we are finding the need to scale the project further than the unwritten technically-led oligarchy that we currently have in the Lubuntu project. Therefore, we are […]
    on January 19, 2019 08:11 PM