June 24, 2018

On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales chat about our personal experience with Ubuntu 18.04.

Chapter 8 of the second season

The podcast is available to listen to at:
on June 24, 2018 02:32 PM

On Deep Work

David Tomaschik

I recently stumbled upon Azeria’s blog post The Importance of Deep Work & The 30-hour Method For Learning a New Skill, and it seriously struck a chord with me. Over the past year or so, I’ve struggled with a lack of personal satisfaction in my life and my work. I tried various things to address the issue, but could not figure out a root cause until I read her article, and then it clicked with me.

Even though I was constantly busy at work, I never felt like I was getting the things done that mattered to me: security research, tackling difficult technical challenges, focused security work. Instead I was constantly in meetings, switching tasks, dealing with email, and doing other work that left me feeling like I was just barely keeping afloat at the office.

I’ve since read Cal Newport’s Deep Work: Rules for Focused Success in a Distracted World, and now I have an understanding of why I’ve had these feelings and, much more importantly, what to do about them. I’ll start by saying that the book is not one I ever thought I would be reading. It sounds like, and is, half self-help book and half business strategy book, neither of which are categories I usually give much attention. But Newport is also a professor of Computer Science, the book was recommended by Azeria, and I felt like I needed to try something different, so I gave it a shot.

The first third of the book is spent defining “deep work” and “shallow work” and convincing you that deep work is worth pursuing. I nearly gave up on the book at this point: my unhappiness with how things were going had already convinced me of the value of deep work, so I figured I didn’t need a book to tell me I was doing things wrong. But I stuck with it, and I think it ended up being worth it.

Deep work is creative work that produces new value and requires that you stretch your brain to its limits. It is also the work that is best done in a state of flow (uninterrupted work focused entirely on one task at hand), and is the work that helps to build and grow the pathways in the brain. In my case, deep work includes things like security research, tool building, and learning new skills.

Shallow work is work that doesn’t require the full use of your brain, or that can be easily interrupted and resumed later, such as logistical tasks. In my case, this includes “doing email”, most meetings, and a lot of the collaboration I do with team mates. This is not to dismiss shallow work as unimportant, but it is different and done with a different mindset. Shallow work is also easier to start, since it carries less mental friction, which creates a tendency to default to it.

All of this discussion is useless to me if I don’t actually make some changes based on what I’ve learned. I also don’t expect the “deep work” mindset to be a silver bullet for the problems I’m having: some of their sources likely lie outside its scope, and going “all in” on the four rules set out by Newport would be difficult in my current corporate culture.

I am going to try some things though:

  • Schedule at least 3 blocks of 3+ hours a week for Deep Work. During this time period, I will not check email, respond to (or read) instant messages, etc.
  • Reduce the frequency with which I check email to ~3 times per day.
  • Use separate browser windows for deep work, so I can hide the windows that have the distractions.
  • Schedule time for personal projects as deep work.

Some problems I’ll still have:

  • My team works in a highly collaborative fashion. Realtime communication is expected. I’ll need to find some way to sequester myself.
  • I work in an open office floorplan, which has so many distractions that even shallow work is difficult. Finding somewhere to hide and do “deep work” means sacrificing my desktop and its large screens.
  • A corporate culture where anyone can schedule a meeting anytime and expect you to show up.

I’m going to try an increased effort on deep work, following some of the principles from the book, as well as making a better effort to track how I spend my time. I’ll report back in 6 months’ time on whether or not I feel more productive, am happier with my work, and have actually been able to stick to these things.

on June 24, 2018 07:00 AM

June 22, 2018

I’m a maker, baby

Benjamin Mako Hill

 

What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

on June 22, 2018 11:34 PM

June 21, 2018

S11E15 – Fifteen Minutes - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we get the Hades Canyon NUC fully working and play Pillars of Eternity II. We discuss the falling value of BitCoin, backdoored Docker images and Microsoft getting into hot water over their work with US Immigration and Customs Enforcement. Plus we round up the community news.

It’s Season 11 Episode 15 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 21, 2018 03:01 PM

June 20, 2018

Plans for DebCamp18

Jonathan Carter

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: The current live weekly images have Calamares installed, although it’s just a test and there’s no indication yet whether it will be available on the beta or final release images. We’ll have to do a good assessment of all the consequences and weigh up what will work best. I want to put together an initial report with live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… there are no blockers on this, I just haven’t had enough quiet time to do it (and thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A Linux clone of Notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!
on June 20, 2018 08:32 AM

June 19, 2018

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on Youtube and available as WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago, the kind of mass collaboration that made Wikipedia, GNU/Linux, or Couchsurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, but new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining Couchsurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I talk a little bit about what the free culture and free software communities should do now that mass collaboration, their most powerful weapon, is being used against them.

I’m very much interested in feedback, provided any way you want to reach me: in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.


Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

on June 19, 2018 06:03 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


on June 19, 2018 08:27 AM

Pros v Joes is a CTF that holds a special place in my heart. Over the years, I’ve moved from playing in the 1st CTF as a day-of pickup player (signing up at the conference) to a Blue Team Pro, to core CTF staff. It’s been an exciting journey, and Red Teaming there is about the only role I haven’t held. (Which is somewhat ironic, given that my day job is leading a red team.) As Blue teams have just formed, and I’m not currently attached to any single team, I wanted to share my thoughts on the evolution of Blue teaming in this unique CTF. In many ways, this will resemble the Blue Team player’s guide I wrote about 3 years ago, but it will be based on the evolution of the game and of the industry itself. That post remains relevant, and I encourage you to read it as well.

Basics

Let’s start with a refresher on the basics as they exist today. The game runs over two days, with teams being completely “blue” (defensive) on the first day, and moving to a “purple” stance (defending their own network while also able to attack each other) on the second day. During the first day, there’s a dedicated red team providing the offensive incentive to the blue teams, as well as a grey team representing the users/customers of the blue team services.

Each blue team consists of eight players and two pros. The role of the pros is increasingly mentorship and less “hands on keyboard”, fitting with the Pros v Joes mission of providing education & mentorship.

Scoring

Scoring was originally based entirely on Health & Welfare checks (i.e., service up and responding) and flags that can be captured from the hosts. Originally, there were “integrity” flags (submitted by blue) and offense flags (submitted by red).

As of 2017, scoring included health & welfare (service uptime), beacons (red cell contacting the scoreboard from the server to prove that it is compromised), flags (in theory anyway), and an in-game marketplace that could have both positive and negative effects. 2018 scoring details have not yet been released, but check the 2018 rules when published.

The Environment

The environment changes every year, but it’s a highly heterogeneous network with all of the typical services you would find in a corporate environment. At a minimum, you’re likely to see:

  • Typical web services (CMS, etc.)
  • Mail Server
  • Client machines
  • Active Directory
  • DNS Server

The operating systems will vary, and will include older and newer versions of both Windows and Linux. There has also always been a firewall under the control of each team, segregating that team’s network from the rest of the environment. These have included both Cisco ASA and pfSense firewalls.

Each player connects to the game environment using OpenVPN based on configurations and credentials provided by Dichotomy.

Preparation

There has been an increasing amount of preparation involved in each of the years I have participated in PvJ. This preparation has essentially come in two core forms:

  1. Learning about the principles of hardening systems and networks.
  2. Preparing scripts, tools, and toolkits for use during the game.

Fundamentals

It turns out that much of the fundamental knowledge needed to secure a network is really just system administration. Understanding how systems work and how they interact with each other provides much of the basics of information security.

On both Windows and Linux, it is useful to understand the following (a few example commands follow this list):

  • How to install & update software and operating system updates
  • How to change permissions of files
  • How to start and stop services
  • How to set up a host-based firewall
  • Basic Shell Commands
  • User administration

Understanding basic networking is also useful (example captures follow this list), including:

  • TCP vs UDP
  • Stateful vs stateless firewalls
  • Using tcpdump and Wireshark to debug and understand network traffic
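
For example, the two tcpdump invocations you will reach for most often (the interface name is an assumption; check yours with ip link):

$ sudo tcpdump -i eth0 -nn 'tcp port 80'     # watch traffic to a scored web service in real time
$ sudo tcpdump -i eth0 -nn -w triage.pcap    # save a capture to open later in Wireshark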

Knowing a scripting language can also be very useful, especially if your team prepares some scripts in advance for common operations (a sketch of such a script follows this list). Languages that I’ve found useful include:

  • Bash
  • Powershell
  • Python
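
As a sketch of the kind of script worth preparing in advance (everything here is illustrative rather than from any particular team’s kit), a quick Bash triage script might snapshot the state of a freshly inherited Linux host:

#!/bin/bash
# quick-triage.sh -- snapshot the state of a host at game start (run as root)
OUT="triage-$(hostname)-$(date +%Y%m%d-%H%M%S).txt"
{
  echo "== logged-in users ==";   who
  echo "== listening sockets =="; ss -tulpn
  echo "== running services ==";  systemctl list-units --type=service --state=running
  echo "== local users ==";       cut -d: -f1,3,7 /etc/passwd
  echo "== recent auth log ==";   tail -n 50 /var/log/auth.log
} > "$OUT"
echo "Wrote $OUT"

Run it once when you take over a host and again after hardening, and you have a quick diff of what changed.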

Player Toolkit

Obviously, if you’re playing in a CTF, you’ll need a computer. Many of the tools you’ll want to use are either designed for Linux or are more commonly used on Linux, so almost everyone will want to have some sort of a Linux environment available. I suggest that you use whatever operating system you are most comfortable with as your “bare metal” operating system, so if that’s Windows, you’ll want to run a Linux virtual machine.

If you use a MacBook (which seems to be the most common choice at a lot of security conferences), you may want both a Windows VM and a Linux VM, as the Windows Server administration tools (should you choose to use them) only run on Windows clients. It’s also been reported that Tunnelblick is the best option for an OpenVPN client on macOS.

As to choice of Linux distribution, if you don’t have any personal preference, I would suggest using Kali Linux. It’s not that Kali has anything you can’t get on other distributions, but it’s well-known in the security industry, well documented, and based on Debian Linux, which makes it well-supported and a close cousin of Ubuntu Linux that many have worked with before.

There are some tools that are absolutely necessary, and you should familiarize yourself with them in advance (an example nmap run follows this list):

  • nmap for network enumeration
  • SSH for connecting to Linux Machines
  • RDP for connecting to Windows Machines
  • git, if your team will use it for managing configurations or scripts
  • OpenVPN for connecting to the game environment
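
As a quick example of the nmap usage you will want down cold (the 10.0.5.0/24 range is an assumption; use whatever subnet your team is actually assigned):

$ nmap -sn 10.0.5.0/24                        # ping sweep: which hosts in the team subnet are up?
$ nmap -sV -p- -oA teamnet-scan 10.0.5.0/24   # all TCP ports with service/version detection, saved as teamnet-scan.*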

Other tools you’ll probably want to get some experience with:

  • metasploit for going offensive
  • Some kind of directory enumeration tool (Dirbuster, WebBorer)
  • sqlmap for SQL injection

Useful Resources

Game Strategy

Every team has their own general strategy to the game, but there are a few things I’ve found that seem to make gameplay go more smoothly for the team:

  • During initial hardening, have one team member working on the firewall. Multiple players configuring the firewall is a recipe for lockouts or confusion.
  • Communicate, communicate, communicate. Ask questions when needed, and make sure it’s clear who’s working on what.
  • Document everything you do. You don’t need to log every command (though it’s not a bad idea), but you should be able to answer some questions about the hosts in your network:
    • What hosts exist?
    • What are the passwords for the accounts?
    • Have the passwords been changed from the defaults?
    • What services are scored?
    • What hardening steps have been applied?

Dos & Don’ts

  • DO make sure you have a wired ethernet port on your laptop, or a USB to ethernet adapter and an ethernet cable.
  • DO make sure you’ve set up OpenVPN on your host OS (not in a VM) and you’ve tested it before game day.
  • DO make sure you’ve read the rules. DON’T try to cheat; Gold team will figure it out and make you pay.
  • DO make an effort to try new things. This game is a learning experience, and you miss 100% of the shots you don’t take.
  • DO ask questions. DON’T be afraid of looking stupid – everyone in the security industry has things to learn, and the whole point of this event is that you can learn. You might even stump the pros.

Making the Most of It

Like so many things in life, the PvJ CTF is a case where you get out of it what you put into it. If you think you can learn it all by osmosis, just by being on a team without putting in the effort, it’s unlikely to work out. PvJ gives you an enthusiastic team, mentors willing to help, and a top-notch environment to try things out that you might not have the resources for in your own environment.

To all the players: Good luck, learn new things, and have fun!

on June 19, 2018 07:00 AM

June 18, 2018

Welcome to the Ubuntu Weekly Newsletter, Issue 532 for the week of June 10 – 16, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Che Dean
  • Wild Man
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 18, 2018 10:00 PM

June 15, 2018

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.


Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

on June 15, 2018 07:28 AM

June 14, 2018

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on June 14, 2018 11:23 PM

If you have two LXD containers, mycontainer1 and mycontainer2, then each can reach the other using those handy *.lxd hostnames, like this,

$ lxc exec mycontainer1 -- sudo --user ubuntu --login
ubuntu@mycontainer1:~$ ping mycontainer2.lxd
PING mycontainer2.lxd(mycontainer2.lxd (fd42:cba6:557e:1a5a:24e:3eff:fce2:8d3)) 56 data bytes
64 bytes from mycontainer2.lxd (fd42:cba6:557e:1a5a:24e:3eff:fce2:8d3): icmp_seq=1 ttl=64 time=0.125 ms
^C
--- mycontainer2.lxd ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
ubuntu@mycontainer1:~$

Those hostnames are provided automatically by LXD when you use a default private bridge like lxdbr0. They come from the dnsmasq service that LXD starts for you, a service that binds specifically to that lxdbr0 network interface.
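
If you want to see this for yourself, you can check which process is answering DNS on the bridge. A quick sketch (the exact output will vary per system):

$ sudo ss -ulnp | grep dnsmasq    # dnsmasq should be listed, listening on port 53 at the lxdbr0 address
$ ps -ef | grep [d]nsmasq         # its command line shows the instance that LXD started for lxdbr0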

LXD does not make changes to the networking of the host, so you cannot use those hostnames from the host itself,

ubuntu@mycontainer1:~$ exit
$ ping mycontainer2.lxd
ping: unknown host mycontainer2.lxd
Exit 2

In this post we are going to see how to set up the host on Ubuntu 18.04 (or any Linux distribution that uses systemd-resolved) so that the host can access the container hostnames.

The default configuration of the lxdbr0 bridge on the host, as reported by systemd-resolved, is

$ systemd-resolve --status lxdbr0
Link 2 (lxdbr0)
      Current Scopes: none
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no

The goal is to add the appropriate DNS server entries to appear in that configuration.

Let’s first get the IP address of LXD’s dnsmasq server on the network interface lxdbr0.

$ ip addr show dev lxdbr0
2: lxdbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether fe:2b:da:d9:49:4a brd ff:ff:ff:ff:ff:ff
    inet 10.10.10.1/24 scope global lxdbr0
       valid_lft forever preferred_lft forever
    inet6 fd42:6a89:42d0:60b::1/64 scope global
       valid_lft forever preferred_lft forever
    inet6 fe80::10cf:51ff:fe05:5383/64 scope link
       valid_lft forever preferred_lft forever

The IP address of the lxdbr0 interface in this case is 10.10.10.1 and that is the IP of LXD’s DNS server.

Now we can move on by configuring the host to consult LXD’s DNS server.

Temporary network configuration

Run the following command to temporarily configure the interface and add the DNS server details.

$ sudo systemd-resolve --interface lxdbr0 --set-dns 10.10.10.1 --set-domain lxd

In this command,

  1. we specify the network interface lxdbr0
  2. we set the DNS server to the IP address of the lxdbr0, the interface that dnsmasq is listening on.
  3. we set the domain to lxd, as the hostnames are of the form mycontainer.lxd.

Now, the configuration looks like

$ systemd-resolve --status lxdbr0
Link 2 (lxdbr0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.10.10.1
          DNS Domain: lxd

You can now verify that you can, for example, get the IP address of the container by name:

$ host mycontainer1.lxd
mycontainer1.lxd has address 10.10.10.88
mycontainer1.lxd has IPv6 address fd42:8196:99f3:52ad:216:3eff:fe0f:bacb
$

Note: The first time you try to resolve such a hostname, it will take a few seconds for systemd-resolved to complete the resolution. You will get the result shown above, but the command will not return immediately. The reason is that systemd-resolved is also waiting for a reply from your host’s default DNS server, and you are waiting for that request to time out. Subsequent attempts are served from the cache and return immediately.

You can also revert these settings with the following command,

$ systemd-resolve --interface lxdbr0 --revert
$ systemd-resolve --status lxdbr0
Link 3 (lxdbr0)
      Current Scopes: none
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
$

In general, this is a temporary network configuration and nothing has been saved to a file. When we reboot the computer, the configuration is gone.

Permanent network configuration

We are going to set up systemd to run the temporary network configuration automatically whenever LXD starts. That is, as soon as lxdbr0 is up, our additional script will run and configure the per-link DNS settings.

First, create the following auxiliary script files.

$ cat /usr/local/bin/lxdhostdns_start.sh 
#!/bin/sh
# Point systemd-resolved at LXD's dnsmasq for the "lxd" domain on lxdbr0.

LXDINTERFACE=lxdbr0
LXDDOMAIN=lxd
# Grab the first IPv4 address on lxdbr0, i.e. the address dnsmasq listens on.
LXDDNSIP=`ip addr show lxdbr0 | grep -Po 'inet \K[\d.]+'`

/usr/bin/systemd-resolve --interface ${LXDINTERFACE} \
                         --set-dns ${LXDDNSIP} \
                         --set-domain ${LXDDOMAIN}

$ cat /usr/local/bin/lxdhostdns_stop.sh 
#!/bin/sh

LXDINTERFACE=lxdbr0

/usr/bin/systemd-resolve --interface ${LXDINTERFACE} --revert

Second, make them executable.

$ sudo chmod +x /usr/local/bin/lxdhostdns_start.sh /usr/local/bin/lxdhostdns_stop.sh

Third, create the following systemd service file.

$ sudo cat /lib/systemd/system/lxd-host-dns.service 
[Unit]
Description=LXD host DNS service
After=lxd-containers.service

[Service]
Type=oneshot
ExecStart=/usr/local/bin/lxdhostdns_start.sh
RemainAfterExit=true
ExecStop=/usr/local/bin/lxdhostdns_stop.sh
StandardOutput=journal

[Install]
WantedBy=multi-user.target

This file

  • will activate after the lxd-containers.service service (therefore, lxdbr0 is up).
  • it is a oneshot (runs until completion before the next service).
  • it runs the respective scripts on ExecStart and ExecStop.
  • RemainAfterExit is true, which means the unit stays marked as active after the script exits (so the stop script runs when the service is later stopped).
  • if something is wrong, it will be reported in the journal.
  • it gets installed in the multi-user target (same as the LXD service).

Fourth, now we reload systemd and enable the new service. The service is enabled so that when we reboot, it will start automatically.

$ sudo systemctl daemon-reload
$ sudo systemctl enable lxd-host-dns.service
Created symlink /etc/systemd/system/multi-user.target.wants/lxd-host-dns.service → /lib/systemd/system/lxd-host-dns.service.
$
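
As a quick check, you can start the service by hand and confirm that the per-link DNS settings appear (if they do not, see the note below):

$ sudo systemctl start lxd-host-dns.service
$ systemctl status lxd-host-dns.service
$ systemd-resolve --status lxdbr0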

Note: This should work better than the old (next section) instructions. Those old instructions would fail if the lxdbr0 network interface was not up. Still, I am not completely happy with this new section. It appears that when you explicitly start or stop the new service, the action may not run. To be tested.

 

(old section, not working) Permanent network configuration

In systemd, we can add per network interface configuration by adding a file in /etc/systemd/network/.

It should be a file with the extension .network, and the appropriate content.

Add the following file

$ cat /etc/systemd/network/lxd.network 
[Match]
Name=lxdbr0

[Network]
DNS=10.100.100.1
Domains=lxd

We chose the name lxd.network for the filename. As long as it has the .network extension, we are fine.

The [Match] section matches the name of the network interface, which is lxdbr0. The rest will only apply if the network interface is indeed lxdbr0.

The [Network] section has the specific network settings. We set the DNS to the IP of the LXD DNS server. And the Domains to the domain suffix of the hostnames. The lxd in Domains is the suffix that is configured in LXD’s DNS server.

Now, let’s restart the host and check the network configuration.

$ systemd-resolve --status
...
Link 2 (lxdbr0)
      Current Scopes: DNS
       LLMNR setting: yes
MulticastDNS setting: no
      DNSSEC setting: no
    DNSSEC supported: no
         DNS Servers: 10.100.100.1
                      fe80::a405:eade:4376:3817
          DNS Domain: lxd

Everything looks fine. By doing the configuration this way, systemd-resolved also automatically picked up the IPv6 address.

Conclusion

We have seen how to set up the host of an LXD installation so that processes on the host are able to resolve the hostnames of the containers. This applies to Ubuntu 18.04, or any distribution that uses systemd-resolved for its DNS needs.

If you use Ubuntu 16.04, a different approach involving the dnsmasq-base configuration is required. There are instructions for this on the Internet; ask if you cannot find them.

on June 14, 2018 06:32 PM

This show was recorded in front of a live studio audience at FOSS Talk Live on Saturday 9th June 2018! We take you on a 40 year journey through our time trumpet and contribute to some open source projects for the first time and discuss the outcomes.

It’s Season 11 Episode 14.5 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this live show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on June 14, 2018 02:00 PM

Active Searching

Stephen Michael Kellat

I generally am not trying to shoot for terse blog posts. That being said, my position at work is getting increasingly untenable, since we are physically unable to accomplish our mission goals before funding runs out at 11:59:59 PM Eastern Time on September 30th. Conflicting imperatives were set, and frankly we're starting to hit the point where neither is getting accomplished regardless of how many warm bodies we throw at the problem. It isn't good, either, when my co-workers with any military experience are sounding out KBR, Academi, and Perspecta.

I'm actively seeking new opportunities. In lieu of a fancy resume in LaTeX, I put forward the relevant details at https://www.linkedin.com/in/stephenkellat/. I can handle LaTeX, though, as seen by the example here that has some copyright-restricted content stripped from it: http://erielookingproductions.info/saybrook-example.pdf.

Ideas for things I could do:

  • Return to being a librarian
  • Work in an Emergency Operations Center (I am Incident Command System trained plus ran through the FEMA EOC basics training)
  • Work as a dispatcher (General class licensed ham radio operator)
  • Teach, since I already do "point of need" education over the phone, such as spending 30 minutes or more explaining to people how the "Estimated Tax Penalty" in the Internal Revenue Code works
  • Work in a journalistic endeavor as I previously worked as a print news reporter and helmed an audio podcast for 6 years
  • Help coordinate interactions between programmers and regulators (Would you want to be in the uncomfortable position Mr. Zuckerberg was in front of the US Congress without support?)

If your project/work/organization/endeavor/skunkworks is looking for a new team player I may prove a worthwhile addition. You more than likely could pay me more than my current employer does.

on June 14, 2018 02:00 AM

June 13, 2018

It’s been quite a while since the last post about Mesa backports, so here’s a quick update on where we are now.

Ubuntu 18.04 was released with Mesa 18.0.0 which was built against libglvnd. This complicates things a bit when it comes to backporting Mesa to 16.04, because the packaging has changed a bit due to libglvnd and would break LTS->LTS upgrades without certain package updates.

So we first need to make sure 18.04 gets Mesa 18.0.5 (which is the last of the series, so no version bumps are expected until the backport from 18.10) along with an updated libglvnd which bumps the Breaks/Replaces on old package versions. This ensures that the xenial -> bionic upgrade will go smoothly once 18.0.5 is backported to xenial, which will in fact be in -proposed soon.

What this also means is that the only release getting new Mesa backports via the x-updates PPA from now on is 18.04. And I’ve pushed Mesa 18.1.1 there today, enjoy!

on June 13, 2018 01:08 PM

June 12, 2018

This last weekend I was at FOSS Talk Live 2018. It was fun. And it led me into various thoughts of how I’d like there to be more of this sort of fun in and around the tech community, and how my feelings on success have changed a bit …

on June 12, 2018 09:07 AM

June 11, 2018

Welcome to the Ubuntu Weekly Newsletter, Issue 531 for the week of June 3 – 9, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on June 11, 2018 09:56 PM

Recently the news broke that Microsoft are acquiring GitHub. Effusive opinions flowed from all directions: some saw the acquisition as a sensible fit for Microsoft to better support developers, and some saw it as a tyrant getting their grubby fingers on open source’s ecosystem.

I am thrilled for Microsoft and GitHub for many reasons, and there will be a bright future ahead because of it, but I have been thinking more about the reaction some of the critics have had to this, and why.

I find it fascinating that there still seems to be a deep-seated discomfort in some about Microsoft and their involvement in open source. I understand that this is for historical reasons, and many moons ago Microsoft were definitely on the offensive against open source. I too was critical of Microsoft and their approach back in those days. I may have even said ‘M$’ instead of ‘MS’ (ugh.)

Things have changed though. Satya Nadella, their CEO, has had a profound impact on the company: they are a significant investor and participant in open source across a multitude of open source projects, they hire many open source developers, run their own open source projects (e.g. VSCode), and actively sponsor and support many open source conferences, events, and initiatives. I know many people who work at Microsoft and they love the company and their work there. These are not microserfs: they are people like you and me.

Things have changed, and I have literally never drunk Kool-Aid, this or any other type. Are they perfect? No, but they don’t claim to be. Is the Microsoft of today a radically different company from the Microsoft of the late nineties? No doubt.

Still though, this cynicism exists in some. Some see them as a trojan horse and ask if we can really trust them?

A little while ago I had a discussion with someone who was grumbling about Microsoft. After poking around his opinion, what shook out was that his real issue was not with Microsoft’s open source work (he was supportive of this), but it was with the fact that they still produce proprietary software and use software patents in departments such as Windows and Office.

Put bluntly, he believed Microsoft are ethically unfit as a company because of these reasons, and these reasons were significant enough to diminish their open source work almost entirely.

Ethics?

Now, I am always fascinated when people use the word “ethics” in a debate. Often it smacks of holier-than-thou hyperbole as opposed to an objective assessment of what is actually right and wrong. Also, it seems that when some bring up “ethics” the discussion takes a nosedive and those involved become increasingly uninterested in other opinions (as I am sure we will see beautifully illustrated in the comments on this post 😉 )

In this case though, I think ethics explains a lot about the variance of views on this and why we should seek to understand those who differ with us. Let me explain why.

Many of the critics of proprietary software are people who believe that it is ethically unsound. They believe that the production and release of proprietary software is a fundamentally pernicious act; that it is harmful to society and the individuals within it.

I have spent my entire career, the last 20 years, working in the open source world. I have run a number of open source communities, some large, some small. I am a live and let live kind of guy and I have financially supported organizations I don’t 100% agree with but who I think do interesting work. This includes the Free Software Foundation, Software Freedom Conservancy, and EFF, I have a close relationship with the Linux Foundation, and have worked with a number of companies on all sides of the field. Without wishing to sound like an egotistical clod, I believe I have earned my open source stripes.

Here’s the thing though, and some of you won’t like this: I don’t believe proprietary software is unethical. Far from it.

Clearly murder, rape, human trafficking, child abuse, and other despicable acts are unethical, but I also consider dishonesty, knowingly lying, taking advantage of people, and other similar indiscretions to be unethical. I am not an expert in ethics and I don’t claim to be a perfectly ethical person, but by my reasoning, unethical acts involve a power imbalance forced on people without their consent.

Within my ethical code, software doesn’t get a look in. Not even close.

I don’t see proprietary software as a power imbalance. Sure, there are very dominant companies with proprietary platforms that people need to use (such as at your employer), and there are companies who have monopolies and tremendous power imbalances in the market. My ethical objection there though is with the market, not with the production of closed source software.

Now, before some of you combust, let me be clear on this: I am deeply passionate about open source and free software and I do believe that proprietary software is sub-optimal in many situations. Heck, at least 60% of my clients are companies I am working with to build and deliver open source workflows.

In many situations, open source provides a much better model for collaboration, growth, security, community development, and other elements. Open source provides an incredible environment for people to shine: our broader open source ecosystem is littered with examples of under-represented groups doing great work and building fantastic careers and reputations. Open source and free software is one of the most profound technological revolutions, and it will generate great value and goodwill for many years to come.

Here lies the rub though: when I look at a company that ships proprietary products, I don’t see an unethical company, I see a company that has chosen a different model. I don’t believe the people working there are evil, that they are doing harm, and that they have mendacious intent. Is their model of building software sub-optimal? Probably, but it needs further judgement: open source clearly works in some areas (e.g. infrastructure software), but has struggled to catch on commercially in other areas (e.g. consumer software).

Put simply, open source does not guarantee success and proprietary software does not guarantee evil.

Be Productive

Throughout the course of my career I have always tried to understand other people’s views and build relationships even if we see things differently.

As an example, earlier I mentioned I have financially supported the Free Software Foundation and Software Freedom Conservancy. Over the years I have had my disagreements with both RMS and Bradley Kuhn, largely based on this different perspective to the ethics of software, but I respect that they come from a different position. I don’t believe they are “wrong” in their views. I believe the position they come from is different to mine. Let a thousand roses bloom: produce an ecosystem in which everyone can play a role and the best ideas will generally play out.

What is critical to me is taking a decent approach to this.

We don’t get anywhere by labelling those who work at or run companies with proprietary products as evil and as part of a shadowy cabal. We also don’t get anywhere by labelling those who do consider free software to be a part of their ethical code as “libtards” or something similarly derogatory. We need to learn more about other people’s views rather than purely focusing on out-arguing people. Sure, have fun with other people’s views, poke fun at them, but it should all be within the spirit of productive discourse.

Either way, no matter where you draw your line, or whatever your view is on the politique du jour, open source, community development, and open innovation is changing the world. We are succeeding, but we can do even greater work if we build bridges, not firebomb them. Be nice, people.

The post Closed Source and Ethics: Good, Bad, Or Ugly? appeared first on Jono Bacon.

on June 11, 2018 09:39 PM

FOSS Talk Live 2018

Stuart Langridge

The poster for FOSS Talk Live 2018, in situ in the window of the Harrison

Saturday 9th June 2018 marked FOSS Talk Live 2018, an evening of Linux UK podcasts on stage at The Harrison pub near Kings Cross, London. It’s in its third year now, and each year has improved on the last. This year there were four live shows: Late Night Linux …

on June 11, 2018 04:50 PM

Early June Update

Stephen Michael Kellat

In no particular order:

  • The world hasn't ended yet. The summit hasn't started between my CEO at work and the North Korean leader, either.
  • I set up a Gogs instance at http://git.erielookingproductions.info with help from the gogsgit snap. The next step is to figure out setting up HTTPS on it. Yes, Alan, there is a firewall.
  • Updated the LaTeX doc that serves as "my website" over at http://erielookingproductions.info.
  • Somehow I still have a job. The internal dashboards are extremely scary-looking. I haven't heard back on any outside applications. The profile at LinkedIn has probably every little detail I can stuff in it.
  • I'm still using LaTeX to layout the worship booklets for the church's mission activity at the local nursing home. Consider how graphics-heavy it is with all the scans from the hymnals, LaTeX actually makes it pretty easy compared to fussy with MS Word let alone LibreOffice Writer. Granted, I end up producing PDF files that range around 25-35 megabytes each fortnight but we get usable materials. So far it has worked for over a year as I've tightened up the workflow and have become more adept at using LaTeX routinely in a humanities role.
  • Work has me stuck living "week to week", where I don't know if I even have a work schedule for the next week. That makes planning a bit rough. OggCamp 18 is on my mind as I try to figure out what the barely-communicated needs of the enterprise are compared to my role in meeting them. To use a sad expression, barely anybody is singing from the same hymnal at work, whether it is agency executives or first-line managers.
  • Eventually I will get a proper cord-cutting operation in place. Antennae are up. One receiver is in place. I need to get the personal video recorder up and going next.
on June 11, 2018 03:26 AM

June 07, 2018

KDE Slimbook 2 Review

KDE Slimbook 2 Outside

The kind folks at Slimbook recently sent me the latest generation of their ultrabook-style laptop line for review, the KDE Slimbook 2. You can hear my thoughts on the latest episode of the Ubuntu Podcast, released on June 7th 2018.

Slimbook are a small laptop vendor based in Spain. All the laptops ship with KDE Neon as the default operating system. In addition to their hardware, they also contribute to and facilitate local Free Software events in their area. I was sent the laptop only for review purposes. There's no other incentive provided, and Slimbook didn't see this blog post before I published it.

Being a small vendor, they don't have the same buying power with OEM vendors as other big name laptop suppliers. This is reflected in the price you pay. You're supporting a company who are themselves supporting Free Software developers and communities.

If you're after the cheapest possible laptop, and don't care about its origin or the people behind the device, then maybe this laptop isn't for you. However, if you like to vote with your wallet, then the KDE Slimbook should absolutely be on your list to seriously consider.

Specs

The device I was sent has the following technical specifications.

  • Core i5-7200 @ 2.5GHz CPU
  • Integrated Intel HD 620 GPU
  • 16GB DDR4 RAM
  • 500GB Samsung 960 EVO SSD
  • Intel 7265 Wireless chipset
  • Bluetooth chipset
  • 1080p matte finish display
  • Full size SD card
  • Headphone socket and built-in mic
  • 720p webcam
  • 1 x USB 3.0 (USB3.1 Gen 1) (Type A), 1 x USB 3.0 (USB3.1 Gen 1) (Type C), 1 x USB 2.0 (Type A)
  • Spanish 'chiclet' style keyboard with power button in top right
  • 3-level keyboard backlight
  • Elan Synaptics touch pad
  • 46Wh battery, TPS S10
  • Power adapter with right-angle plug
  • USB-C dongle

As shipped, mine came in at around ~1098EUR / 956GBP / 1267USD. Much of this can be tweaked, including the keyboard layout, although doing so may extend the lead time on receiving the device. There are plenty of options to tweak, and the site gives a running total as you adjust to taste. There's an i7 version, and I'm told it will soon be possible to order one with a black case, rather than the silver I was shipped. The laptop shipped with one drive, but has capacity for both an M.2 and traditional form factor drive too. So, fully loaded you could order this with 2x1TB SSDs if you're after extra disk space.

Notable is the lack of an Ethernet port, which for some is a dealbreaker, even in these days of ubiquitous, reliable wifi. The solution Slimbook went with is to provide two optional 'dongles'. One connects to a USB 3 Type A port and presents an Ethernet port. The other connects to the USB-C port and provides 3 more traditional USB 3 ports and an Ethernet socket. Slimbook shipped me the latter, which was super useful for connecting more USB devices and a LAN cable.

The cable on the dongle is relatively short, but it feels solid, and I had no problems with it in infrequent daily use. One omission on the dongle is the lack of a pass-through USB-C port. Once the dongle is attached to the laptop, you've used your only Type-C connector. This might not be a problem if you're a luddite like me who has very few USB-C devices, but I imagine that'll be more of an issue going forward. This is an optional dongle though, and you could certainly choose not to get it, but purchase a different one to serve your requirements.

Software

KDE Slimbook 2 Inside

Default install - KDE Neon

The laptop shipped with KDE Neon. It's no secret to listeners of the Ubuntu Podcast that I've been a bit of a KDE fanboy since I began testing Neon a few months back and stuck with it on my ThinkPad T450. So I am a little biased in favour of this particular Linux distribution, and I felt very much at home on the Slimbook with KDE.

On other computers I've tweaked the desktop in various ways - it's the KDE raison d'être to expose settings for everything, and I usually tweak a fair number. However on the Slimbook I wanted to try out the default experience. I found the default applications easy to use, well integrated and reliable. I'm writing this blog post in Kwrite, and have noticed features that I would have not expected here, such as the zoomed out code view and popup spelling completion.

I'm pleasantly surprised by the choices made on the software build here. KDE performs well & starts up and wakes from suspend quickly. Everything works out of the box, and the selection of applications is small, but wisely chosen. Unsurprisingly I've augmented the default apps with a few applications I use on a daily basis elsewhere, and they all fit in perfectly. I didn't feel any of the applications I use stood out as alien, or non-KDE originals. The theme and app integration is spot on. If I were a Slimbook customer, I'd happily leave the default install pretty much as-is and thoughouly enjoy the experience.

The software is delivered by the usual Ubuntu 16.04 (Xenial) archives, with the KDE Neon archive delivering updates to the KDE Plasma desktop and suite of applications. In addition two PPAs are enabled. One for TLP and another for screenfetch. Personally on a shipping laptop I'd be inclined not to enable 3rd party PPAs, but perhaps supply documentation which details how the user can enable them if required. PPAs are not stable, and can deliver unexpected updates and experiences to users.

I should also mention in the pack was a tri-fold leaflet titled "Plasma Desktop & You". It details a little about KDE, the community and invites new users to not only enjoy the software, but get involved. It's a nice touch.

Alternative options

Slimbook don't appear to offer other Linux distributions - and given the lid of the laptop has a giant KDE logo engraved on it, that wouldn't make a ton of sense anyway.

However I tested a couple of distros on it via live USB sticks. With Ubuntu 18.04 everything worked, including the USB C Ethernet dongle. For fun I also tried out Trisquel, which also appeared to mostly work including wired network via the dongle, but wifi didn't function. I didn't attempt any other distros, but given how well KDE Neon (based on Ubuntu 16.04), Ubuntu 18.04 worked, I figure any distro-hoppers would have no hardware compatibility issues.

Hardware

Display & Graphics

The 1080p matte finish panel is great. I found it plenty bright and clear enough at maximum brightness. There are over 20 levels of brightness and I found myself using a balanced setting near the middle most of the time, only needing full brightness sometimes when outside. The viewing angles are fine for a single person using it, but don't lend well to having a bunch of people crouched round the laptop.

I ran a few games to see how the integrated GPU performed, and it was surprisingly okay. My usual tests involved running Mini Metro, which got 50fps; Goat Simulator at 720p got me 25fps; and Talos Principle at 1080p also clocked in at 25fps. This isn't a gaming laptop, but if you want to play a few casual games or even run some emulators between work, it's more than up to the task.

Performance

I use a bunch of fairly chunky applications on a daily basis, including common electron apps and tools. I also frequently build software locally using various compilers. The Slimbook 2 was a super effective workstation for these tasks. It rarely broke into a sweat, with very few occasions where the fan spun up. Indeed, I can't really tell you how loud the fan is because I so rarely heard it.

It boots quickly, the session starts promptly and application startup isn't a problem. Overall as a workstation, it's fine for any of the tasks I do daily.

Keyboard

KDE Slimbook 2 Keyboard

The keyboard is a common 'chiclet' affair, with a full row of function keys that double as media, wifi, touchpad, brightness hardware control buttons. The arrow cluster is bottom right with home/end/pgup/pgdown as secondary functions on those keys. The up/down arrows are vertically half-size to save space, which I quite like.

The "Super" (Windows) key sports a natty little Tux with the Slimbook logo beneath. Nice touch :)

Touchpad

The touchpad is a decent size and works with single and double touch for click/drag and scrolling. I did find the palm rejection wasn't perfect in KDE. I sometimes found myself nuking chunks of a document while typing as my fat thumbs hit the touchpad, selecting text and overtyping it.

I tried fiddling with the palm rejection options in KDE but didn't quite hit the sweet-spot. I've never been a fan of touchpads at all, and would likely just turn off the device (via Fn-F1) if this continued to annoy me, which it didn't especially.

Audio

As with most ultrabook style laptops the audio is okay, but not great. I played my usual test songs and the audio reproduction via speakers lacked volume, was a bit tinny and lacked bass.

With headphones plugged in, it was fine. I rarely use laptop speakers personally, but tend to use a pair of headphones. Nobody wants to hear what I'm listening to :). It's fine for the odd video conference though.

Battery

The model I had was supplied with a 46Wh battery, a small & lightweight ~40W charger and euro power cable & right angled barrel connector to the laptop. Under normal circumstances with medium workload I would get around 7 hours, sometimes more.

Leaving the laptop on, connected to wifi, with KDE power management switched off and brightness at 30% the system lasted around 8 hours 40 mins. I'd anticipate with a variable workload, with KDE power management switched on, you'd get similar times.

I also tried leaving the laptop playing a YouTube video at 1080p, full screen, with wifi switched on and power management suppressed by the browser. The battery gave out after around 5 hours.

The battery takes around 4 hours to re-charge while the laptop is on. This is probably faster if you're not using the laptop at the time, but I didn't test that.

Overall impressions

I've been really happy using the KDE Slimbook 2. The software choices are sensible, and being based on Ubuntu 16.04 meant I could install whatever else I needed outside the KDE ecosystem. The laptop is quiet, feels well built and was a pleasure to use. I'm a little sad to give it back, because I've got used to the form-factor now.

I have only a couple of very minor niggles. The chassis is a little sharp around the edges, much like the MacBook Air it takes design cues from. Secondly, the power LED is on the inside of the laptop, above the keyboard. So if, like me, you suspend your laptop by closing the lid, you can't check that it suspended properly by watching for the slow blink of the power LED. It's a minor thing, but having been burned (literally) in the past by a laptop which unexpectedly didn't suspend, it's something I'm aware of.

Other than that, it's a cracking machine. I'd be happy to use this on a daily basis. If you're in the market for a new laptop, and want to support a Linux vendor, this device should totally be on your list. Thanks so much to Slimbook for shipping the device over and letting me have plenty of time to play with it!

on June 07, 2018 04:28 PM

June 06, 2018

Imagine that you have a package to build. Sometimes it takes minutes. Others take hours. And then you run htop and see that your machine is idle during such a build… You may ask “Why?” and the answer is simple: the build is not making use of your multiple CPU cores.

On x86-64, developers usually have two to four CPU cores, perhaps double that thanks to HyperThreading. And that’s all. So for some weird reason they use make -jX where X is half of their cores, or completely forget to enable parallel builds at all.

And then I come along with an ARM64 system with 8 or 24 or 32 or 48 or even 96 CPU cores, and have to wait and wait and wait for a package to build…

So the next step is usually similar: edit the debian/rules file and add the --parallel argument to the dh call, or remove the --max-parallel option. Then the build makes use of all those shiny CPU cores, and it goes quickly…
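As an illustration, a minimal debhelper-style debian/rules with parallel builds enabled might look like this (a generic sketch, not taken from any particular package):

#!/usr/bin/make -f
# Let dh build with as many jobs as there are cores.
# With debhelper compat 10 or later this is the default,
# and the --parallel flag can simply be dropped.
# (The recipe line must be indented with a tab.)
%:
	dh $@ --parallel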

UPDATE: Riku Voipio told me that debhelper 10 does parallel builds by default, if you set the ‘debian/compat’ value to at least ’10’.

on June 06, 2018 10:46 AM

June 05, 2018

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24; votes: 0 for, 21 against, 2 abstentions).

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

on June 05, 2018 08:40 PM

June 04, 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

distro-tracker

With the disappearance of many alioth mailing lists, I took the time to finish proper support for a team email address in distro-tracker. There’s no official documentation yet but it’s already used by a bunch of teams. If you look at the pkg-security team on tracker.debian.org, it has used “pkg-security” as its unique identifier and it has thus inherited team+pkg-security@tracker.debian.org as an email address that can be used in the Maintainer field (and it can be used to communicate between all team subscribers that have the contact keyword enabled on their team subscription).
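For illustration, such a team address goes straight into a package's debian/control Maintainer field, along these lines (the display name below is just a placeholder):

Maintainer: Security Tools Packaging Team <team+pkg-security@tracker.debian.org>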

I also dealt with a few merge requests:

I also filed ticket #7283 on rt.debian.org to have local_part_suffix = “+” for tracker.debian.org’s exim config. This will let us bounce emails sent to invalid email addresses. Right now all emails are delivered in a Maildir, valid messages are processed and the rest is silently discarded. At the time of processing, it’s too late to send bounces back to the sender.

pkg-security team

This month my activity is limited to sponsorship of new packages:

  • grokevt_0.5.0-2.dsc fixing one RC bug (missing build-dep on python3-distutils)
  • dnsrecon_0.8.13-1.dsc (new upstream release)
  • recon-ng_4.9.3-1.dsc (new upstream release)
  • wifite_2.1.0-1.dsc (new upstream release)
  • aircrack-ng (add patch from upstream git)

I also interacted multiple times with Samuel Henrique, who started working on the Google Summer of Code project porting Kali packages to Debian. He mainly worked on getting an overview of the work to do.

Misc Debian work

I reviewed multiple changes submitted by Hideki Yamane on debootstrap (on the debian-boot mailing list, and also in MR 2 and MR 3). I reviewed and merged some changes on live-boot too.

Extended LTS

I spent a good part of the month dealing with the setup of the Wheezy Extended LTS program. Given the lack of interest from the various Debian teams, it’s hosted on a Freexian server and not on any debian.org infrastructure. The principle is basically the same as Debian LTS, except that the package list is reduced to the set of packages used by Extended LTS sponsors. The updates prepared in this project are nevertheless freely available to all.

It’s not too late to join the program, you can always contact me at deblts@freexian.com with a source package list that you’d like to see supported and I’ll send you back an estimation of the cost.

Thanks to an initial contribution from Credativ, Emilio Pozuelo Monfort has prepared a merge request making it easy for third parties to host their own security tracker that piggy-backs on Debian’s. For Extended LTS, we thus have our own tracker.

Thanks

See you next month for a new summary of my activities.


on June 04, 2018 04:56 PM

June 03, 2018

TPM 2.0 in qemu

Serge Hallyn

If you want to test software which exploits TPM 2.0 functionality inside the qemu-kvm emulator, this can be challenging because the software stack is still quite new. Here is how I did it.

First, you need a new enough qemu. The version in Ubuntu xenial does not suffice; the 2.11 version in Ubuntu bionic does. I believe the 2.10 version in artful is also too old, but I might be mis-remembering; I haven’t tested that lately.

The two pieces of software I needed were libtpms and swtpm. For libtpms I used the tpm2-preview.rev146.v2 branch, and for swtpm I used the tpm2-preview.v2 branch.

apt -y install libtool autoconf tpm-tools expect socat libssl-dev
git clone https://github.com/stefanberger/libtpms
( cd libtpms &&
  git checkout tpm2-preview.rev146.v2 &&
  ./bootstrap.sh &&
  ./configure --prefix=/usr --with-openssl --with-tpm2 &&
  make && make install)
git clone https://github.com/stefanberger/swtpm
(cd swtpm &&
  git checkout tpm2-preview.v2 &&
  ./bootstrap.sh &&
  ./configure --prefix=/usr --with-openssl --with-tpm2 &&
  make &&
  make install)

For each qemu instance, I create a tpm device. The relevant part of the script I used looks like this:

#!/bin/bash

i=0
while [ -d /tmp/mytpm$i ]; do
let i=i+1
done
tpm=/tmp/mytpm$i

mkdir $tpm
echo "Starting $tpm"
sudo swtpm socket --tpmstate dir=$tpm --tpm2 \
             --ctrl type=unixio,path=/$tpm/swtpm-sock &
sleep 2 # this should be changed to a netstat query

next_vnc() {
    vncport=0
    port=5900
    while nc -z 127.0.0.1 $port; do
        port=$((port + 1))
        vncport=$((vncport + 1))
    done
    echo $vncport
}

nextvnc=$(next_vnc)
sudo kvm -drive file=${disk},format=raw,if=virtio,cache=none -chardev socket,id=chrtpm,path=/$tpm/swtpm-sock -tpmdev emulator,id=tpm0,chardev=chrtpm -device tpm-tis,tpmdev=tpm0 -vnc :$nextvnc -m 2048
on June 03, 2018 03:41 AM

Last week, I was presented with a message saying my Ubuntu partition was already full. I dug into what was going on and found out Google Chrome was flooding /var/log/syslog. I had 40GB of logs with a weird error message:

Jun  1 12:22:35 machina update-notifier.desktop[5460]: [15076:15076:0100/000000.062848:ERROR:zygote_linux.cc(247)] Error reading message from browser: Socket operation on non-socket (88)

I’m pretty certain it was related to Chrome because the only time the system stabilized was when I killed its process. I had two extensions installed, an ad blocker and Todoist, so I don’t think it was related to any suspicious extension.

Anyway, the sprint had ended the previous Friday, so I thought it might be a good time to format the computer, and maybe try a new Linux distro other than Ubuntu. This is the story of how I spent close to a day figuring out which Linux distributions would work on my hardware.

Here’s the thing: in April, I tried installing Ubuntu and failed miserably. The installer would freeze while loading, and it was not a faulty image. The same happened again yesterday. Then I decided to give Manjaro Linux a try. It wouldn’t work properly either: sometimes the boot would freeze, other times it would actually work. “That’s it, I’m moving to Fedora.” The installer worked, but the installed system would not start up. So I investigated and wrote nomodeset in the grub options (when you enter the grub menu, just press e to get access to the boot options). It worked, the system was booting, but at 800x600, and there was no easy way to install nouveau (the open source driver) or the proprietary one.

It was then time to give Ubuntu another go. And I didn’t want to give up on 18.04: it is an LTS after all, and it has to work. My graphics card has always brought me issues, especially being on the mid-high end of things (GTX 1060).

The first step was to add the nomodeset option to the grub options again - hey, the installer launched instantly at 800x600 resolution. That’s enough to get it done.

Then I had a major issue: Ubuntu did not work after the first login. The screen turned black and nothing happened. Fine!!! On the next boot, before logging in via GDM, I pressed ALT + F2 to enter a TTY and then ran the set of commands to get the graphics card working properly, and to set other important flags that allow the system to reboot, shut down, and not crash when opening the settings screen after booting up (these three errors have always been fixed by adding the options described below).

I needed to add acpi=force to the grub options again (run sudo vi /etc/default/grub and edit the line with GRUB_CMDLINE_LINUX), and add my mouse configuration manually - it’s a Mad Catz R.A.T. 3.
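For reference, a minimal sketch of that GRUB change on Ubuntu 18.04 (the exact options are whatever your hardware needs; acpi=force is the one mentioned above):

sudo vi /etc/default/grub
# edit the line so it carries the needed options, for example:
#   GRUB_CMDLINE_LINUX="acpi=force"
sudo update-grub    # regenerate the grub config so the change takes effect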

At that point, I decided to try removing the nomodeset option from GRUB_CMDLINE_LINUX, and the resolution got back to normal. The next step was to install NVIDIA’s graphics driver, using the proprietary drivers tab (in GNOME, you can open that panel by running software-properties --open-tab=4).

And that’s it. It just worked. Everything was back to normal. Potentially the same kind of solution would have worked in Fedora but since I’ve been using Ubuntu for the past 3 years now, getting help has become much easier, as well as debugging error logs.

Since version 17.04, and since I got this computer, installing Linux has always been difficult. I hope this story is useful for someone with a high-end laptop who has encountered issues with NVIDIA graphics drivers. Hopefully these commands will at least help you get one step further towards a working system.

Thanks for reading,

gsilvapt

on June 03, 2018 12:00 AM

June 02, 2018

Launchpad news, May 2018

Launchpad News

Here’s a brief changelog for this month.

Build farm

  • Send fast_cleanup: True to virtualised builds, since they can safely skip the final cleanup steps

Code

  • Add spam controls to code review comments (#1680746)
  • Only consider the most recent nine successful builds when estimating recipe build durations (#1770121)
  • Make updated code import emails more informative

Infrastructure

  • Upgrade to Twisted 17.9.0
  • Get the test suite passing on Ubuntu 18.04 LTS
  • Allow admins to configure users such that unsigned email from them will be rejected, as a spam defence (#1714967)

Snappy

  • Prune old snap files that have been uploaded to the store; this cleaned up about 5TB of librarian space
  • Make the snap store client cope with a few more edge cases (#1766911)
  • Allow branches in snap store channel names (#1754405)

Soyuz (package management)

  • Add DistroArchSeries.setChrootFromBuild, allowing setting a chroot from a file produced by a live filesystem build
  • Disambiguate URLs to source package files in the face of filename clashes in imported archives
  • Optimise SourcePackagePublishingHistory:+listing-archive-extra (#1769979)

Miscellaneous

  • Disable purchasing of new commercial subscriptions; existing customers have been contacted, and people with questions about this can contact Canonical support
  • Various minor revisions to the terms of service from Canonical’s legal department, and a clearer data privacy policy
on June 02, 2018 07:44 PM
On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales chat about the following topics:

  • The adoption of iPads and Chromebooks in education.
  • What being able to run Android applications on Ubuntu Phone would mean.

Chapter 7 of the second season

The podcast is available to listen to on:
on June 02, 2018 01:29 PM

May 30, 2018

MAAS 2.4.0 (final) released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the performance, stability and usability of MAAS.

MAAS 2.4.0 is immediately available in the PPA, but it is still in the process of being SRU’d into Ubuntu Bionic.

PPA Availability

MAAS 2.4.0 is currently available for Ubuntu Bionic in ppa:maas/stable for the coming week.

sudo add-apt-repository ppa:maas/stable
sudo apt-get update
sudo apt-get install maas

What’s new?

Most notable MAAS 2.4.0 changes include:
  • Performance improvements across the backend & UI.
  • KVM pod support for storage pools (over API).
  • DNS UI to manage resource records.
  • Audit Logging
  • Machine locking
  • Expanded commissioning script support for firmware upgrades & HBA changes.
  • NTP services now provided with Chrony.
For the full list of features & changes, please refer to the release notes:
on May 30, 2018 06:05 PM

May 29, 2018

18.04 Upgrades

Mythbuntu

While Mythbuntu has ceased to exist as a separate Ubuntu flavor, many people continue to use our packaging and have asked questions about 18.04. This page attempts to answer some of those questions.

  • What happens if I upgrade to 18.04?
    • We've always recommended a backup and clean install for upgrades, but if you do this, everything should continue functioning. You will need to re-enable the MythTV Updates repositories.
  • How do I upgrade to 18.04?
    • We've always recommended a backup and clean install when moving to a new version of the underlying OS (such as 18.04) and continue to recommend this. If you still want to attempt the upgrade, you can follow the steps here
  • Where can I get support?
    • Support can be attained from numerous locations. Check our support page for more info.
  • Where can I get updated MythTV packages?
  • I found a bug. Where should I report this?
    • Bugs should be filed upstream with MythTV. See our support page for more info
on May 29, 2018 07:15 PM

CORRECTION 2016.04.23 - It was previously stated that 16.04 is a point release to 14.04. This was due to a silly copy&paste issue from our previous release statement for 14.04. The Mythbuntu 16.04 release is a flavor of Ubuntu 16.04. We're sorry for any confusion this has caused.


Mythbuntu 16.04 has been released. This is our third LTS release and will be supported until shortly after the 18.04 release.

The Mythbuntu team would like to thank our ISO testers for helping find critical bugs before release. You guys rock!

With this release, we are providing torrents only. It is very important to note that this release is only compatible with MythTV 0.28 systems. The MythTV component of previous Mythbuntu releases can be upgraded to a compatible MythTV version by using the Mythbuntu Repos. For a more detailed explanation, see here.

You can get the Mythbuntu ISO from our downloads page.

Highlights

Underlying system

  • Underlying Ubuntu updates are found here

MythTV


We appreciated all comments and would love to hear what you think. Please make comments to our mailing list, on the forums (with a tag indicating that this is from 16.04 or xenial), or in #ubuntu-mythtv on Freenode. As previously, if you encounter any issues with anything in this release, please file a bug using the Ubuntu bug tool (ubuntu-bug PACKAGENAME) which automatically collects logs and other important system information, or if that is not possible, directly open a ticket on Launchpad (http://bugs.launchpad.net/mythbuntu/16.04/).

Upgrade Notes

If you have enabled the mysql tweaks in the Mythbuntu Control Center these will need to be disabled prior to upgrading. Once upgraded, these can be reenabled.

Known issues

on May 29, 2018 07:03 PM

What’s that (gitlab) BOT?

Marco Trevisan (Treviño)

For some time now, in both some ubuntu-related and GNOME freenode channels, people might have been bothered (or not :)) by the presence of this IRC bot (named ubot5-ng on freenode):

Since people asked, as I’ve set in the /whois, I’m the man behind it, and it’s actually running for some time from a snap inside a cloud instance I manage and hosted by Canonical.

This was just a quick hack (so take it as it is) I did because I was annoyed at not getting the bug info when linking to the ever increasing references to GNOME or Debian projects. The source code is here, while the configuration files (I can provide samples if you're curious) just enable the minimum necessary for the bot to join the channels, and disable all the other plugins.

However, it currently supports parsing issues and merge proposals for GitHub and various GitLab instances (gitlab.com itself, Freedesktop, GNOME, and Debian Salsa).

Yeah, I know:

  • There are other bot options, but I just wanted to hack something quickly
  • It should be moved to git, cleaned up removing the unused bugzilla stuff
  • Supybot should be replaced with its new fork, Limnoria
  • I should host the code in a GNOME gitlab project together with the configuration (without the API tokens, of course)
  • Jonas asked for colors 😀

I’ll probably do this once I have some free time (hard to find, in between my travels), but in the meantime, if this bothers you, let me know; if instead you want it to join other channels, tell me too 🙂


EDIT 31/05: added Freedesktop gitlab too

on May 29, 2018 03:32 PM

May 24, 2018

Starting Thursday, May 24th, the about-to-be-released 2019 edition of my book, Ubuntu Unleashed, will be listed in InformIT’s Summer Coming Soon sale, which runs through May 29th. The discount is 40% off print and 45% off eBooks; no discount code is required. Here’s the link: InformIT Summer Sale.

on May 24, 2018 04:59 AM

May 23, 2018

During the last few weeks of the 18.04 (Bionic Beaver) cycle, we had two people drop by our development channel to respond to the call for testers from the Development and QA Teams.

It quickly became apparent to me that I was having to repeat myself in order to make it “basic” enough for someone who had never tested for us to understand what I was trying to put across.

After pointing to the various resources we have – and those other flavours use – it transpired that they both would have preferred something a bit easier to start with.

So I asked them to write it for us all.

Rather than belabour my point here, I’ve asked both of them to write a few words about what they needed and what they have achieved for everyone.

Before they get that chance – I would just like to thank them both for the hours of work they have put in drafting, tweaking and getting the pages into a position where we can tell you all of their existence.

You can see the fruits of their labour at our updated web page for Testers and the new pages we have at the New Tester wiki.

Kev
On behalf of the Xubuntu Development and QA Teams.

“I see the whole idea of OS software and communities helping themselves as a breath of fresh air in an ever more profit obsessed world (yes, I am a cynical old git).

I really wanted to help, but just didn’t think that I had any of the skills required, and the guides always seemed to assume a level of knowledge that I just didn’t have.

So, when I was asked to help write a ‘New Testers’ guide for my beloved Xubuntu I absolutely jumped at the chance, knowing that my ignorance was my greatest asset.

I hope what resulted from our work will help those like me (people who can easily learn but need to be told pretty much everything from the bottom up) to start testing and enjoy the warm, satisfied glow of contributing to their community.
Most of all, I really enjoyed collaborating with some very nice people indeed.”
Leigh Sutherland

“I marvel at how we live in an age in which we can collaborate and share with people all over the world – as such I really like the ideas of free and open source. A long time happy Xubuntu user, I felt the time to be involved, to go from user-only to contributor was long overdue – Xubuntu is a community effort after all. So, when the call for testing came last March, I dove in. At first testing seemed daunting, complicated and very technical. But, with leaps and bounds, and the endless patience and kindness of the Xubuntu-bunch over at Xubuntu-development, I got going. I felt I was at last “paying back”. When flocculant asked if I would help him and Leigh to write some pages to make the information about testing more accessible for users like me, with limited technical skills and knowledge, I really liked the idea. And that started a collaboration I really enjoyed.

It’s my hope that with these pages we’ve been able to get across the information needed by someone like I was when I started -technical newby, noob- to simply get set up to get testing.

It’s also my hope people like you will tell us where and how these pages can be improved, with the aim to make the first forays into testing as gentle and easy as possible. Because without testing we as a community can not make xubuntu as good as we’d want it to be.”
Willem Hobers

on May 23, 2018 04:49 PM

May 21, 2018

Are you using Kubuntu 18.04, our current LTS release?

We currently have the Plasma 5.12.5 LTS bugfix release available in our Updates PPA, but we would like to provide the important fixes and translations in this release to all users via updates in the main Ubuntu archive. This would also mean these updates would be provided by default with the 18.04.1 point release ISO expected in late July.

The Stable Release Update tracking bug can be found here: https://bugs.launchpad.net/ubuntu/+source/plasma-desktop/+bug/1768245

A launchpad.net account is required to post testing feedback as bug comments.

The Plasma 5.12.5 changelog can be found at: https://www.kde.org/announcements/plasma-5.12.4-5.12.5-changelog.php

[Test Case]

* General tests:
– Does plasma desktop start as normal with no apparent regressions over 5.12.4?
– General workflow – testers should carry out their normal tasks, using the plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

* Specific tests:
– Check the changelog:
– Identify items with front/user facing changes capable of specific testing. e.g. “weather plasmoid fetches BBC weather data.”
– Test the ‘fixed’ functionality.

Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

Details on how to enable the propose repository can be found at: https://wiki.ubuntu.com/Testing/EnableProposed.

Unfortunately that page illustrates Xenial and Ubuntu Unity rather than Bionic in Kubuntu. Using Discover or Muon, use Settings > More, enter your password, and ensure that Pre-release updates (bionic-proposed) is ticked in the Updates tab.

Or from the commandline, you can modify the software sources manually by adding the following line to /etc/apt/sources.list:

deb http://archive.ubuntu.com/ubuntu/ bionic-proposed restricted main multiverse universe

It is not advisable to upgrade all available packages from proposed, as many will be unrelated to this testing and may NOT have been sufficiently verified as safe updates. So the safest, if slightly more involved, method is to use Muon (or even Synaptic!) to select each upgradeable package with a version containing 5.12.5-0ubuntu0.1 (5.12.5.1-0ubuntu0.1 for plasma-discover, due to an additional update). A command-line sketch follows below.
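For those more comfortable on the command line, one possible (unofficial) way to cherry-pick just these packages, assuming the version strings above, is along these lines:

sudo apt update
# list upgrade candidates carrying the Plasma 5.12.5 version string
apt list --upgradable 2>/dev/null | grep 5.12.5-0ubuntu0.1
# then install each relevant package explicitly from bionic-proposed, e.g.:
sudo apt install plasma-desktop/bionic-proposed

The package name in the last command is only an example; repeat it for each package shown by the listing.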

Please report your findings on the bug report. If you need some guidance on how to structure your report, please see https://wiki.ubuntu.com/QATeam/PerformingSRUVerification. Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

We need your help to get this important bug-fix release out the door to all of our users.

Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

on May 21, 2018 03:36 PM

May 18, 2018

Sometimes we are on connections that have a dynamic ip. This will add your current external ip to ~/.external-ip.

Each time the script is run, it will dig an OpenDNS resolver to grab your external ip. If it is different from what is in ~/.external-ip it will echo the new ip. Otherwise it will return nothing.

#!/bin/sh
# Check external IP for change
# Ideal for use in a cron job
#
# Usage: sh check-ext-ip.sh
#
# Returns: Nothing if the IP is same, or the new IP address
#          First run always returns current address
#
# Requires dig:
#    Debian/Ubuntu: apt install dnsutils
#    Solus: eopkg install bind-utils
#    CentOS/Fedora: yum install bind-utils
#
# by Sina Mashek <sina@sinacutie.stream>
# Released under CC0 or Public Domain, whichever is supported

# Where we will store the external IP
EXT_IP="$HOME/.external-ip"

# Check if dig is installed
if [ "$(command -v dig)" = "" ]; then
    echo "This script requires 'dig' to run"

    # Load distribution release information
    . /etc/os-release

    # Check for supported release; set proper package manager and package name
    if [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then
        MGR="apt"
        PKG="dnsutils"
    elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then
        MGR="yum"
        PKG="bind-utils"
    elif [ "$ID" = "solus" ]; then
        MGR="eopkg"
        PKG="bind-utils"
    else
        echo "Please consult your package manager for the correct package"
        exit 1
    fi

    # Will run if one of the above supported distributions was found
    echo "Installing $PKG ..."
    sudo "$MGR" install "$PKG"
fi

# We check our external IP directly from a DNS request
GET_IP="$(dig +short myip.opendns.com @resolver1.opendns.com)"

# If ~/.external-ip exists and still matches the current IP, stay silent
if [ -f "$EXT_IP" ] && [ "$(cat "$EXT_IP")" = "$GET_IP" ]; then
    exit 0
fi

# First run, or the IP changed: report the new address and save it
echo "$GET_IP"
echo "$GET_IP" > "$EXT_IP"
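Since the header mentions cron, one possible crontab entry, assuming the script is saved as ~/bin/check-ext-ip.sh (cron will email any output if MAILTO is configured):

*/15 * * * * sh $HOME/bin/check-ext-ip.sh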
on May 18, 2018 09:00 PM


Hello Planet GNOME!

Marco Trevisan (Treviño)

Hey guys, although I’ve been around for a while, hidden in the patches, some months ago (already!?!) I submitted my application to join the GNOME Foundation, and a few days later – thanks to some anonymous votes – I got approved :), and thus I’m officially part of the family!

So, thanks again, and sorry for my late “hello” 🙂

on May 18, 2018 03:46 PM

May 16, 2018

Overview

I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible, and keeping costs to a minimum. For one thing, systems needed to be mobile if possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

One of the core aspects of this lab is its use of the network. I have former experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that), and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall, and has an IP on all networks. The switch's config is even simpler, and consists in setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, and other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even when not supporting VLANs (which means VLAN 1 is set to be the native VLAN and to be untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room). This means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything comes from cheap eBay listings or salvaged, reused equipment.

The Cisco hardware was specifically selected because I had prior experience with them, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware could do -- VLANs aren't absolutely required, but given many network ports on a switch, it tends to avoid requiring multiple switches instead.

My main DNS / DHCP / boot server is a raspberry pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks: using home.example.com or staging.example.com, or even maas.example.com as a domain name following the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

Setting up the network hardware

I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things, and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.

For simplicity, the ASA is always .1 in any subnet, and the switch is .2 when it is required (and was made accessible over a serial cable from the MAAS server). The raspberry pi is always .5, and the MAAS server is always .25. DHCP ranges were designed to reserve anything .25 and below for static assignments on the staging and sandbox networks; since I use a /23 subnet for home, half of it is for static assignments and the other half is for DHCP.

MAAS server hardware setup

Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

network:
    ethernets:
        enp0s25:
            addresses: []
            dhcp4: true
            optional: true
    bridges:
        maasbr0:
            addresses: [ 10.3.99.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan15 ]
        staging:
            addresses: [ 10.3.98.25/24 ]
            dhcp4: no
            dhcp6: no
            interfaces: [ vlan10 ]
    vlans:
        vlan15:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 15
            link: enp0s25
        vlan10:
            dhcp4: no
            dhcp6: no
            accept-ra: no
            id: 10
            link: enp0s25
    version: 2
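To apply a netplan change like this (the file path and naming are whatever your system already uses), something along these lines should do it:

sudo netplan apply    # or 'sudo netplan try' to keep a rollback window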
Both VLANs are behind bridges so as to allow placing virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):
<network>
  <name>maasbr0</name>
  <forward mode="bridge"/>
  <bridge name="maasbr0"/>
</network>
Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set:

libvirtd_opts="-l"

in /etc/default/libvirtd, then restart the libvirtd service.
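As a quick sanity check (not part of the original setup, just a way to confirm the TCP listener works), you can try connecting to the daemon over the network from another host on the sandbox VLAN, using the MAAS server's address:

virsh -c qemu+tcp://10.3.99.25/system list --all

If this returns the (possibly empty) list of domains, MAAS should be able to drive libvirt the same way.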


dnsmasq server

The raspberry pi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is split across multiple files; here are some of the important parts:
dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
dhcp-hostsdir=/depot/dnsmasq/reservations
dhcp-authoritative
dhcp-fqdn
# copied from maas, specify boot files per-arch.
dhcp-boot=tag:x86_64-efi,bootx64.efi
dhcp-boot=tag:i386-pc,pxelinux
dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
# pass search domains everywhere, it's easier to type short names
dhcp-option=119,home.example.com,staging.example.com,maas.example.com
domain=example.com
no-hosts
addn-hosts=/depot/dnsmasq/dns/
domain-needed
expand-hosts
no-resolv
# home network
domain=home.example.com,10.3.0.0/23
auth-zone=home.example.com,10.3.0.0/23
dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
# specify the default gw / next router
dhcp-option=tag:home,3,10.3.0.1
# define the tftp server
dhcp-option=tag:home,66,10.3.0.5
# staging is configured as above, but on 10.3.98.0/24.
# maas.example.com: "isolated" maas network.
# send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
server=/maas.example.com/10.3.99.25
# very basic tftp config
enable-tftp
tftp-root=/depot/tftp
tftp-no-fail
# set some "upstream" nameservers for general name resolution.
server=8.8.8.8
server=8.8.4.4


DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:

de:ad:be:ef:ca:fe,10.3.0.21

I did put one per file, with meaningful filenames. This helps with debugging and making changes when network cards are changed, etc. The names used for the files do not match DNS names, but instead are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.

Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:

10.3.0.21 izanagi

Again, this is used so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also requiring renaming other files, or matching MAC addresses to make sure the right change is made.
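To verify that resolution behaves as intended (the hostname and IPs below are the examples used above), a quick query against the pi should return the reserved address:

# ask the dnsmasq server on the pi directly; expand-hosts appends the domain
dig +short izanagi.home.example.com @10.3.0.5
# expected output: 10.3.0.21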


Installing MAAS

At this point, the configuration for the networking should already be completed, and libvirt should be ready and accessible from the network.

The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.

Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.

This is necessary if you do not want MAAS to fully manage all of the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network since I would keep the server offline if I do not need it (and the home network needs to keep working if I'm away).

Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that enabled by default, with the archive pinned to avoid accidentally installing these packages. Given that I also do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:


late_commands:
[...]
  pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
  pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
  pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
apt:
  sources:
    proposed.list:
      source: deb $MIRROR {{release}}-proposed main universe
write_files:
  userconfig:
    path: /etc/cloud/cloud.cfg.d/99-users.cfg
    content: |
      system_info:
        default_user:
          lock_passwd: False
          plain_text_passwd: [REDACTED]
      users:
        - default
        - name: mtrudel
          groups: sudo
          gecos: Matt
          shell: /bin/bash
          lock-passwd: False
          passwd: [REDACTED]


The pinning_ entries are simply added to the end of the "late_commands" section.

For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:

maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"

The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.
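If you prefer the CLI, composing a machine from the pod should also be possible with something along these lines (the pod id and resource values are illustrative; check the output of the read call for the actual id):

maas local pods read                          # note the pod's id
maas local pod compose 1 cores=2 memory=2048

That said, the web UI remains the simplest way to compose and decompose machines.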


Setting up the client

Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying to use the ubuntu user (or another user you already configured):

ssh ubuntu@vocal-toad.maas.example.com

There is an inconvenience with using MAAS to control virtual machines like this: they are easy to reinstall, so their host hashes will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here, I'm sharing the relevant parts of the configuration file I use:

CanonicalDomains home.example.com
CanonicalizeHostname yes
CanonicalizeFallbackLocal no
HashKnownHosts no
UseRoaming no
# canonicalize* options seem to break github for some reason
# I haven't spent much time looking into it, so let's make sure it will go through the
# DNS resolution logic in SSH correctly.
Host github.com
  Hostname github.com.
Host *.maas
  Hostname %h.example.com
Host *.staging
  Hostname %h.example.com
Host *.maas.example.com
  User ubuntu
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null

Host *.staging.example.com
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
Host *.lxd
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
Host *.libvirt
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
  ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p

As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.

The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.
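As a usage example of the bonus entries (the container and VM names here are placeholders, and the guest must be running an SSH server):

ssh ubuntu@my-container.lxd     # address looked up on the fly via 'lxc list'
ssh ubuntu@my-test-vm.libvirt   # address looked up via 'virsh domifaddr'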

Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:

vocal-toad.maas
And the system will be reachable, and you will not have any warning about known host hashes (but do note that this is specific to a sandbox environment, you definitely want to see such warnings in a production environment, as it can indicate that the system you are connecting to might not be the one you think).

It's not bad, but the goal would be to use just the short names. I am working around this using a tiny script:

#!/bin/sh
ssh $@.maas

I saved this as "sandbox" in ~/bin and made it executable.

And with this, the lab is ready.

Usage

To connect to a deployed system, one can now do the following:


$ sandbox vocal-toad
Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
[...]
ubuntu@vocal-toad:~$
ubuntu@vocal-toad:~$ id mtrudel
uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)

Mobility

One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").

All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form. The smaller switch is rated for 0.5amp, and two laptops do not pull very much power.

Next steps

One of the issues that remains with this setup is that it is limited to either starting MAAS images or starting images that are custom built and hooked up to the raspberry pi, which leads to a high effort to integrate new images:
  • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
  • Automate customizing installed packages based on tags applied to the machines.
    • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
    • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
      • This could be done using chef or puppet, but will require setting up the infrastructure for it.
    • Integrate automatic installation of snaps.
  • Load new images into the raspberry pi automatically for netboot / preseeded installs
    • I have scripts for this, but they will take time to adapt
    • Space on such a device is at a premium, there must be some culling of old images
on May 16, 2018 10:47 PM

Video Channel Updates

Jonathan Carter

Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

I started out uploading to my YouTube channel which has been dormant for a really long time, and then last week, I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. I'm not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing Debian meeting videos like the DebConf videos.

My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

  • Video Logs: Almost like a blog, in video format.
  • Howto: Howto videos.
  • Debian Package of the Day: Exploring packages in the Debian archive.
  • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
  • What’s the Difference: Just comparing 2 or more things.
  • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

I also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

Other than that I’m open to ideas, thanks for reading :)

on May 16, 2018 06:19 PM