September 27, 2017

Hello Ubuntu Server!

The purpose of this communication is to provide a status update and highlights for any interesting subjects from the Ubuntu Server Team. If you would like to reach the server team, you can find us at the #ubuntu-server channel on Freenode. Alternatively, you can sign up and use the Ubuntu Server Team mailing list.

Spotlight: cloud-init 17.1 Release

The cloud-init team has released version 17.1. This marks the first release using the new versioning scheme.

cloud-init

  • Robert Schweikert is now building cloud-init for openSUSE via the openSUSE build service
  • Integration tests removed dependency on shlex

Bug Work and Triage

IRC Meeting

Ubuntu Server Packages

Below is a summary of uploads to the development and supported releases. Current status of the Debian to Ubuntu merges is tracked on the Merge-o-Matic page. For a full list of recent merges with change logs please see the Ubuntu Server report.

Uploads to the Development Release (Artful)

asterisk, 1:13.17.2~dfsg-1ubuntu1, costamagnagianfranco
byobu, 5.123-0ubuntu1, kirkland
cloud-init, 17.1-0ubuntu1, smoser
cloud-init, 0.7.9-283-g7eb3460b-0ubuntu1, smoser
docker.io, 1.13.1-0ubuntu4, stgraber
golang-gopkg-lxc-go-lxc.v2, 0.0~git20161126.1.82a07a6-0ubuntu7, stgraber
golang-gopkg-lxc-go-lxc.v2, 0.0~git20161126.1.82a07a6-0ubuntu6, stgraber
golang-petname, 2.8-0ubuntu1, kirkland
libseccomp, 2.3.1-2.1ubuntu2, tyhicks
lxc, 2.1.0-0ubuntu1, stgraber
lxd, 2.18-0ubuntu3, stgraber
lxd, 2.18-0ubuntu2, stgraber
lxd, 2.18-0ubuntu1, stgraber
lxd, 2.17-0ubuntu4, stgraber
lxd, 2.17-0ubuntu3, stgraber
samba, 2:4.6.7+dfsg-1ubuntu3, mdeslaur
six, 1.11.0-1, None
tomcat8, 8.5.21-1, None
ubuntu-advantage-tools, 10, nacc
ubuntu-advantage-tools, 9, nacc
vlan, 1.9-3.2ubuntu5, paelzer
Total: 21

Uploads to Supported Releases (Trusty, Xenial, Yakkety, Zesty)

apache2, zesty, 2.4.25-3ubuntu2.3, mdeslaur
apache2, xenial, 2.4.18-2ubuntu3.5, mdeslaur
apache2, trusty, 2.4.7-1ubuntu4.18, mdeslaur
libvirt, trusty, 1.2.2-0ubuntu13.1.23, paelzer
maas, trusty, 1.9.5+bzr4599-0ubuntu1~14.04.2, andreserl
ntp, xenial, 1:4.2.8p4+dfsg-3ubuntu5.7, paelzer
qemu, zesty, 1:2.8+dfsg-3ubuntu2.5, mdeslaur
qemu, xenial, 1:2.5+dfsg-5ubuntu10.16, mdeslaur
qemu, trusty, 2.0.0+dfsg-2ubuntu1.36, mdeslaur
samba, zesty, 2:4.5.8+dfsg-0ubuntu0.17.04.7, mdeslaur
samba, xenial, 2:4.3.11+dfsg-0ubuntu0.16.04.11, mdeslaur
samba, trusty, 2:4.3.11+dfsg-0ubuntu0.14.04.12, mdeslaur
vlan, trusty, 1.9-3ubuntu10.5, paelzer
vlan, xenial, 1.9-3.2ubuntu1.16.04.4, paelzer
vlan, zesty, 1.9-3.2ubuntu2.17.04.3, paelzer
Total: 15

Contact the Ubuntu Server team

on September 27, 2017 02:02 PM

KGraphViewer 2.4.2

Jonathan Riddell

KGraphViewer 2.4.2 has been released.

KGraphViewer is a visualiser for Graphviz’s DOT format of graphs.
https://www.kde.org/applications/graphics/kgraphviewer

Changelog compared to 2.4.0:

  • add missing find dependency macro https://build.neon.kde.org/job/xenial_unstable_kde-extras_kgraphviewer_lintcmake/lastCompletedBuild/testReport/libkgraphviewer-dev/KGraphViewerPart/find_package/
  • Fix broken reloading and broken layout changing due to lost filename https://phabricator.kde.org/D7932
  • kgraphviewer_part.rc: set fallback text for toplevel menu entries
  • desktop-mime-but-no-exec-code
  • Codefix, comparisons were meant to be assignments

KGraphViewer 2.4.1 was made with an incorrect internal version number and should be ignored

It can be used by massif-visualizer to add graphing features.

Download from:
https://download.kde.org/stable/kgraphviewer/2.4.2/

sha256:
49438b4e6cca69d2e658de50059f045ede42cfe78ee97cece35959e29ffb85c9 kgraphviewer-2.4.2.tar.xz

Signed with my PGP key
2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
Jonathan Riddell <jr@jriddell.org>
kgraphviewer-2.4.2.tar.xz.sig
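
To verify the download, a minimal sketch (assuming the tarball and signature are in the current directory and the key above is in your keyring):

echo "49438b4e6cca69d2e658de50059f045ede42cfe78ee97cece35959e29ffb85c9  kgraphviewer-2.4.2.tar.xz" | sha256sum -c -
gpg --verify kgraphviewer-2.4.2.tar.xz.sig kgraphviewer-2.4.2.tar.xz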

on September 27, 2017 01:23 PM

This article originally appeared on Konstantin’s blog

Happens all the time. You come across a super cool open source project you would gladly contribute to, but setting up the development environment and learning how to patch and release your fixes puts you off. The Canonical Distribution of Kubernetes (CDK) is no exception. This set of blog posts will shed some light on the darkest secrets of CDK.

Welcome to the CDK patch journey!

What is your Build & Release workflow? (Figure from xkcd)

Build CDK from source

Prerequisites

You need to have Juju configured and ready to build charms. We will not be covering that in this blog post. Please follow the official documentation to set up your environment and build your own first charm with layers.
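
As a rough sketch of such an environment (the install method and paths are assumptions; the official documentation is the authoritative reference):

sudo snap install charm --classic        # one way to get the charm build command; check the docs for alternatives
export JUJU_REPOSITORY=$HOME/workspace/charms
mkdir -p $JUJU_REPOSITORY/layers $JUJU_REPOSITORY/interfaces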

Build the charms

CDK is made of a few charms, namely:

To build each charm you need to spot the top-level charm layer and do a `charm build` on it. The links in the above list will get you to the GitHub repository you need to clone and build. Let's try this out for easyrsa:

> git clone https://github.com/juju-solutions/layer-easyrsa
Cloning into ‘layer-easyrsa’…
remote: Counting objects: 55, done.
remote: Total 55 (delta 0), reused 0 (delta 0), pack-reused 55
Unpacking objects: 100% (55/55), done.
Checking connectivity… done.
> cd ./layer-easyrsa/
 > charm build
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/easyrsa
build: Processing layer: layer:basic
build: Processing layer: layer:leadership
build: Processing layer: easyrsa (from .)
build: Processing interface: tls-certificates
proof: OK!

The above builds the easyrsa charm and prints the output directory (/home/jackal/workspace/charms/builds/easyrsa in this case).

Building the kubernetes-* charms is slightly different. As you might already know, the kubernetes charm layers live upstream under cluster/juju/layers. Building the respective charms requires you to clone the kubernetes repository and pass the path of each layer to your invocation of charm build. Let's build the kubernetes-worker layer here:

> git clone https://github.com/kubernetes/kubernetes
Cloning into ‘kubernetes’…
remote: Counting objects: 602553, done.
remote: Compressing objects: 100% (57/57), done.
remote: Total 602553 (delta 18), reused 20 (delta 15), pack-reused 602481
Receiving objects: 100% (602553/602553), 456.97 MiB | 2.91 MiB/s, done.
Resolving deltas: 100% (409190/409190), done.
Checking connectivity… done.
> cd ./kubernetes/
> charm build cluster/juju/layers/kubernetes-worker/
build: Composing into /home/jackal/workspace/charms
build: Destination charm directory: /home/jackal/workspace/charms/builds/kubernetes-worker
build: Processing layer: layer:basic
build: Processing layer: layer:debug
build: Processing layer: layer:snap
build: Processing layer: layer:nagios
build: Processing layer: layer:docker (from ../../../workspace/charms/layers/layer-docker)
build: Processing layer: layer:metrics
build: Processing layer: layer:tls-client
build: Processing layer: layer:nvidia-cuda (from ../../../workspace/charms/layers/nvidia-cuda)
build: Processing layer: kubernetes-worker (from cluster/juju/layers/kubernetes-worker)
build: Processing interface: nrpe-external-master
build: Processing interface: dockerhost
build: Processing interface: sdn-plugin
build: Processing interface: tls-certificates
build: Processing interface: http
build: Processing interface: kubernetes-cni
build: Processing interface: kube-dns
build: Processing interface: kube-control
proof: OK!

During charm build, all layers and interfaces referenced recursively, starting from the top charm layer, are fetched and merged to form your charm. The layers needed to build a charm are specified in a layer.yaml file at the root of the charm's directory. For example, looking at cluster/juju/layers/kubernetes-worker/layer.yaml, we see that the kubernetes-worker charm uses the following layers and interfaces:

- 'layer:basic'
- 'layer:debug'
- 'layer:snap'
- 'layer:docker'
- 'layer:metrics'
- 'layer:nagios'
- 'layer:tls-client'
- 'layer:nvidia-cuda'
- 'interface:http'
- 'interface:kubernetes-cni'
- 'interface:kube-dns'
- 'interface:kube-control'

Layers are an awesome way to share operational logic among charms. For instance, the maintainers of the nagios layer have a better understanding of the operational needs of nagios, but that does not mean the authors of the kubernetes charms cannot reuse their work.

charm build will recursively look up each layer and interface at http://interfaces.juju.solutions/ to figure out where the source is. Each repository is fetched locally and squashed with all the other layers to form a single package, the charm. Go ahead and do a charm build with "-l debug" to see how and when each layer is fetched. It is important to know that if you already have a local copy of a layer under $JUJU_REPOSITORY/layers or of an interface under $JUJU_REPOSITORY/interfaces, charm build will use those local forks instead of fetching them from the registered repositories. This enables charm authors to work on cross-layer patches. Note that you might need to rename the directory of your local copy to match exactly the name of the layer or interface.
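
For example, a quick sketch of both ideas (the layer repository and directory names are only illustrative):

# verbose build, showing where each layer and interface comes from
charm build -l debug cluster/juju/layers/kubernetes-worker/

# hack on a local fork of the docker layer; charm build picks it up from
# $JUJU_REPOSITORY/layers instead of fetching the registered repository
git clone https://github.com/juju-solutions/layer-docker $JUJU_REPOSITORY/layers/layer-docker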

Building Resources

The charms install Kubernetes, but to do so they need the Kubernetes binaries. We package these binaries in snaps so that they are self-contained and can be deployed on any Linux distribution. Building them is pretty straightforward, as long as you know where to look 🙂

Here is the repository holding the Kubernetes snaps: https://github.com/juju-solutions/release.git. The branch we want is rye/snaps:

> git clone https://github.com/juju-solutions/release.git
Cloning into ‘release’…
remote: Counting objects: 1602, done.
remote: Total 1602 (delta 0), reused 0 (delta 0), pack-reused 1602
Receiving objects: 100% (1602/1602), 384.69 KiB | 236.00 KiB/s, done.
Resolving deltas: 100% (908/908), done.
Checking connectivity… done.
> cd release
> git checkout rye/snaps
Branch rye/snaps set up to track remote branch rye/snaps from origin.
Switched to a new branch ‘rye/snaps’

Have a look at the README.md inside the snap directory to see how to build the snaps:

> cd snap/
> ./docker-build.sh KUBE_VERSION=v1.7.4

A number of .snap files should be available after the build.

In a similar fashion you can build the snap package holding the Kubernetes addons. We refer to this package as cdk-addons, and it can be found at: https://github.com/juju-solutions/cdk-addons.git

> git clone https://github.com/juju-solutions/cdk-addons.git
Cloning into ‘cdk-addons’…
remote: Counting objects: 408, done.
remote: Total 408 (delta 0), reused 0 (delta 0), pack-reused 408
Receiving objects: 100% (408/408), 51.16 KiB | 0 bytes/s, done.
Resolving deltas: 100% (210/210), done.
Checking connectivity… done.
 > cd cdk-addons/
> make

The last resource you will need (which is not packaged as a snap) is the container network interface (CNI). Let's grab the repository and check out a release tag:

> git clone https://github.com/containernetworking/cni.git
Cloning into ‘cni’…
remote: Counting objects: 4048, done.
remote: Compressing objects: 100% (5/5), done.
remote: Total 4048 (delta 0), reused 2 (delta 0), pack-reused 4043
Receiving objects: 100% (4048/4048), 1.76 MiB | 613.00 KiB/s, done.
Resolving deltas: 100% (1978/1978), done.
Checking connectivity… done. 
> cd cni
> git checkout -f v0.5.1

Build and package the cni resource:

> docker run --rm -e "GOOS=linux" -e "GOARCH=amd64" -v `pwd`:/cni golang /bin/bash -c "cd /cni && ./build"
Building API
Building reference CLI
Building plugins
 flannel
 tuning
 bridge
 ipvlan
 loopback
 macvlan
 ptp
 dhcp
 host-local
 noop
 > cd ./bin
 > tar -cvzf ../cni.tgz *
bridge
cnitool
dhcp
flannel
host-local
ipvlan
loopback
macvlan
noop
ptp
tuning

You should now have a cni.tgz in the root folder of the cni repository.

Two things to note here:

  • We do have a CI for building, testing and releasing charms and bundles. In case you want to follow each step of the build process, you can find our CI scripts here: https://github.com/juju-solutions/kubernetes-jenkins
  • You do not need to build all resources yourself. You can grab the resources used in CDK from the Juju store. Starting from the canonical-kubernetes bundle, you can navigate to any charm it ships: select one from the very end of the bundle page and then look for the "resources" sidebar on the right. Download any of them, rename it properly, and you are ready to use it in your release.

Releasing Your Charms

After patching the charms to match your needs, please consider submitting a pull request to tell us what you have been up to. Contrary to many other projects, you do not need to wait for your PR to be accepted before you can make your work public. You can immediately release your work under your own namespace on the store; this is described in detail in the official charm authors documentation. The development team often uses private namespaces to test PoCs and new features. The main namespace from which CDK is released is "containers".
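
A minimal sketch of what releasing under your own namespace could look like with the charm store client (the namespace, channel and revision number are placeholders):

charm push /home/jackal/workspace/charms/builds/kubernetes-worker cs:~myuser/kubernetes-worker
charm release cs:~myuser/kubernetes-worker-0 --channel edge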

Yet, there is one feature you need to be aware of when attaching snaps to your charms. Snaps have their own release cycle and repositories. If you want to use the officially released snaps instead of attaching your own, you can use a dummy zero-sized file with the correct extension (.snap) in place of each snap resource. The snap layer will see that the resource is empty and will grab the snap from the official repositories instead. Using the official snaps is recommended; however, in network-restricted environments you might need to attach your own snaps when you deploy the charms.
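
As a sketch of the idea (the resource names below are assumptions; check the charm's metadata for the real ones):

touch kubelet.snap kube-proxy.snap kubectl.snap
juju deploy ./builds/kubernetes-worker kubernetes-worker \
  --resource kubelet=./kubelet.snap \
  --resource kube-proxy=./kube-proxy.snap \
  --resource kubectl=./kubectl.snap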

Why is it this way?

Building CDK is of average difficulty, as long as you know where to look. It is not perfect by any standard, and it will probably stay that way. The reason is that there are opposing forces shaping the build process. This should come as no surprise. As Kubernetes changes rapidly and constantly expands, the build and release process must be flexible enough to include any new artefacts. Consider, for example, the switch from the flannel CNI to calico: in our case it is a resource and a charm that need to be updated. A monolithic build script would have looked more "elegant" to outsiders (eg, make CDK), but we would have been sweeping a lot under the carpet. CI should be part of the culture of any team and should be owned by the team, or else you get disconnected from the end product, causing delays and friction.

Our build and release process might look a bit "dirty", with a lot of moving parts, but it really is not that bad! I managed to cover the build and release steps in a single blog post. Positive feedback also comes from our field engineers. Most of the time CDK deploys out of the box. When our field engineers are called in, it is either because our customers have a special requirement of the software or because they have an "unconventional" environment in which Kubernetes needs to be deployed. Having such a simple and flexible build and release process enables our people to solve a problem on-site and release it to the Juju store within a couple of hours.

Next steps

This blog post serves as foundation work for what is coming up. The plan is to go over some easy patches so we further demystify how CDK works.

on September 27, 2017 01:03 PM

September 26, 2017

Packet.net has premium baremetal servers that start at $36.50 per month for a quad-core Atom C2550 with 8GB RAM and 80GB SSD, on a 1Gbps Internet connection. On the other end of the scale, there is an option for a 24-core (two Intel CPUs) system with 256GB RAM and a total of 2.8TB SSD disk space at around $1000 per month.

In this post we are trying out the most affordable baremetal server (type 0 from the list) with Ubuntu and LXD.

Starting the server is quite uneventful. Being baremetal, it takes more time to boot than a VPS. Once it started, we SSH into it.

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.2 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

root@lxd:~#

Here is some information about the booted system,

root@lxd:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 16.04.2 LTS
Release: 16.04
Codename: xenial
root@lxd:~#

And the CPU details,

root@lxd:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 77
model name : Intel(R) Atom(TM) CPU C2550 @ 2.40GHz
stepping : 8
microcode : 0x122
cpu MHz : 1200.000
cache size : 1024 KB
physical id : 0
siblings : 4
core id : 0
cpu cores : 4
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 11
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm sse4_1 sse4_2 movbe popcnt tsc_deadline_timer aes rdrand lahf_lm 3dnowprefetch epb tpr_shadow vnmi flexpriority ept vpid tsc_adjust smep erms dtherm ida arat
bugs :
bogomips : 4800.19
clflush size : 64
cache_alignment : 64
address sizes : 36 bits physical, 48 bits virtual
power management:

... omitting the other three cores ...

Let’s update the package list,

root@lxd:~# apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease
Get:2 http://archive.ubuntu.com/ubuntu xenial-updates InRelease [102 kB]
Get:3 http://archive.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
...

Packet.net uses the official Ubuntu repositories instead of caching packages on local mirrors. In retrospect, this is not an issue because the Internet connectivity is 1Gbps, bonded from two identical interfaces.

Let’s upgrade the packages and deal with any issues. Upgraded packages sometimes complain that the local configuration files differ from what they expect.

root@lxd:~# apt upgrade
Reading package lists... Done
Building dependency tree 
Reading state information... Done
Calculating upgrade... Done
The following packages will be upgraded:
 apt apt-utils base-files cloud-init gcc-5-base grub-common grub-pc grub-pc-bin grub2-common
 initramfs-tools initramfs-tools-bin initramfs-tools-core kmod libapparmor1 libapt-inst2.0
 libapt-pkg5.0 libasn1-8-heimdal libcryptsetup4 libcups2 libdns-export162 libexpat1 libgdk-pixbuf2.0-0
 libgdk-pixbuf2.0-common libgnutls-openssl27 libgnutls30 libgraphite2-3 libgssapi3-heimdal libgtk2.0-0
 libgtk2.0-bin libgtk2.0-common libhcrypto4-heimdal libheimbase1-heimdal libheimntlm0-heimdal
 libhx509-5-heimdal libisc-export160 libkmod2 libkrb5-26-heimdal libpython3.5 libpython3.5-minimal
 libpython3.5-stdlib libroken18-heimdal libstdc++6 libsystemd0 libudev1 libwind0-heimdal libxml2
 logrotate mdadm ntp ntpdate open-iscsi python3-jwt python3.5 python3.5-minimal systemd systemd-sysv
 tcpdump udev unattended-upgrades
59 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
Need to get 24.3 MB of archives.
After this operation, 77.8 kB of additional disk space will be used.
Do you want to continue? [Y/n] 
...

First is grub, and the diff shows (not shown here) that it is a minor issue. The new version of grub.cfg changes the system to appear as Debian instead of Ubuntu. We did not investigate this further.

We are then asked where to install grub. We select /dev/sda and hope that the server can still reboot successfully. We note that instead of the 80GB SSD disk written in the description, we got a 160GB SSD. Not bad.

Setting up cloud-init (0.7.9-233-ge586fe35-0ubuntu1~16.04.2) ...

Configuration file '/etc/cloud/cloud.cfg'
 ==> Modified (by you or by a script) since installation.
 ==> Package distributor has shipped an updated version.
 What would you like to do about it ? Your options are:
 Y or I : install the package maintainer's version
 N or O : keep your currently-installed version
 D : show the differences between the versions
 Z : start a shell to examine the situation
 The default action is to keep your current version.
*** cloud.cfg (Y/I/N/O/D/Z) [default=N] ? N
Progress: [ 98%] [##################################################################################.]

Still during apt upgrade, it complains about /etc/cloud/cloud.cfg. Here is the diff between the installed and packaged versions. We keep the existing file and do not install the new, generic packaged version (the server would not boot with it).

At the end, it complains about

W: Possible missing firmware /lib/firmware/ast_dp501_fw.bin for module ast

Time to reboot the server and check if we messed it up.

root@lxd:~# shutdown -r now

$ ssh root@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage
Last login: Tue Sep 26 15:29:58 2017 from 1.2.3.4
root@lxd:~#

We are good! Note that now it says Ubuntu 16.04.3 while before it was Ubuntu 16.04.2.

LXD is not installed by default,

root@lxd:~# apt policy lxd
lxd:
      Installed: (none)
      Candidate: 2.0.10-0ubuntu1~16.04.1
      Version table:
              2.0.10-0ubuntu1~16.04.1 500
                      500 http://archive.ubuntu.com/ubuntu xenial-updates/main amd64 Packages
              2.0.0-0ubuntu4 500
                      500 http://archive.ubuntu.com/ubuntu xenial/main amd64 Packages

There are two versions: 2.0.0, the stock version released initially with Ubuntu 16.04, and 2.0.10, which is currently the latest stable version for Ubuntu 16.04. Let's install it.

root@lxd:~# apt install lxd
...

We are now ready to add the non-root user account.

root@lxd:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] Y

root@lxd:~# ssh myusername@localhost
Permission denied (publickey).
root@lxd:~# cp -R ~/.ssh/ ~myusername/
root@lxd:~# chown -R myusername:myusername ~myusername/

We added the new username, then tested that password authentication is indeed disabled. Finally, we copied the authorized_keys file from ~root/ to the new non-root account, and adjusted the ownership of those files.

Let’s log out from the server and log in again as the new non-root account.

$ ssh myusername@ip.ip.ip.ip
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.10.0-24-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

**************************************************************************
# This system is using the EC2 Metadata Service, but does not appear to #
# be running on Amazon EC2 or one of cloud-init's known platforms that #
# provide a EC2 Metadata service. In the future, cloud-init may stop #
# reading metadata from the EC2 Metadata Service unless the platform can #
# be identified. #
# #
# If you are seeing this message, please file a bug against #
# cloud-init at #
# https://bugs.launchpad.net/cloud-init/+filebug?field.tags=dsid #
# Make sure to include the cloud provider your instance is #
# running on. #
# #
# For more information see #
# https://bugs.launchpad.net/bugs/1660385 #
# #
# After you have filed a bug, you can disable this warning by #
# launching your instance with the cloud-config below, or #
# putting that content into #
# /etc/cloud/cloud.cfg.d/99-ec2-datasource.cfg #
# #
# #cloud-config #
# datasource: #
# Ec2: #
# strict_id: false #
**************************************************************************

Disable the warnings above by:
 touch /home/myusername/.cloud-warnings.skip
or
 touch /var/lib/cloud/instance/warnings/.skip
myusername@lxd:~$

This warning is related to our decision to keep the existing cloud.cfg when we upgraded the cloud-init package. It is something that Packet.net (the provider) should deal with.

We are ready to try out LXD on packet.net.

Configuring LXD

Let’s configure LXD. First, how much free space do we have?

myusername@lxd:~$ df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 136G 1.1G 128G 1% /
myusername@lxd:~$

There is plenty of space; we will use 100GB for LXD.

We are using ZFS as the LXD storage backend, therefore,

myusername@lxd:~$ sudo apt install zfsutils-linux

Now, we set up LXD.

myusername@lxd:~$ sudo lxd init
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs 
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd 
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=27]: 100
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes

LXD has been successfully configured.
myusername@lxd:~$ lxc list
+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@lxd:~$

Trying out LXD

Let’s create a container, install nginx and then make the web server accessible through the Internet.

myusername@lxd:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (47.99MB/s) 
Starting web 
myusername@lxd:~$

Let’s see the details of the container, called web.

myusername@lxd:~$ lxc list --columns ns4tS
+------+---------+---------------------+------------+-----------+
| NAME | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------------+-----------+
| web  | RUNNING | 10.253.67.97 (eth0) | PERSISTENT | 0         |
+------+---------+---------------------+------------+-----------+
myusername@lxd:~$

We can see the container's IP address. The column parameter ns4tS simply omits the IPv6 address ('6') so that the table looks nice in the blog post.

Let’s enter the container and install nginx.

myusername@lxd:~$ lxc exec web -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We execute in the web container the command sudo --login --user ubuntu, which gives us a login shell in the container. All Ubuntu containers have a default non-root account called ubuntu.

ubuntu@web:~$ sudo apt update
Hit:1 http://archive.ubuntu.com/ubuntu xenial InRelease

3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists… Done

Processing triggers for ufw (0.35-0ubuntu2) …
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html
ubuntu@web:~$ logout

Before installing a package, we must run apt update. We updated and then installed nginx. Subsequently, we touched up the default HTML file a bit to mention Packet.net and LXD. Finally, we logged out of the container.

Let’s test that the web server in the container is working.

myusername@lxd:~$ curl 10.253.67.97
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx on Packet.net in an LXD container!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx on Packet.net in an LXD container!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@lxd:~$

The last step is to get Ubuntu to forward any Internet connections from port 80 to the container at port 80. For this, we need the public IP of the server and the private IP of the container (it’s 10.253.67.97).

myusername@lxd:~$ ifconfig 
bond0 Link encap:Ethernet HWaddr 0c:c4:7a:de:51:a8 
      inet addr:147.75.82.251 Bcast:255.255.255.255 Mask:255.255.255.254
      inet6 addr: 2604:1380:2000:600::1/127 Scope:Global
      inet6 addr: fe80::ec4:7aff:fee5:4462/64 Scope:Link
      UP BROADCAST RUNNING MASTER MULTICAST MTU:1500 Metric:1
      RX packets:144216 errors:0 dropped:0 overruns:0 frame:0
      TX packets:14181 errors:0 dropped:0 overruns:0 carrier:0
      collisions:0 txqueuelen:1000 
      RX bytes:211518302 (211.5 MB) TX bytes:1443508 (1.4 MB)

The interface is a bond, bond0. Two 1Gbps connections are bonded together.

myusername@lxd:~$ PORT=80 PUBLIC_IP=147.75.82.251 CONTAINER_IP=10.253.67.97 sudo -E bash -c 'iptables -t nat -I PREROUTING -i bond0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@lxd:~$

Let’s test it out!
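
From a machine outside the server, a request to the public IP should now return the same page we saw with the internal curl above, something like:

$ curl http://147.75.82.251/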

That’s it!

on September 26, 2017 07:45 PM

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
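
The usual recipe is to import the plugins once during the build so that Twisted writes out a fresh dropin.cache; a minimal sketch (the exact hook in Launchpad's build is not shown here):

python -c "from twisted.plugin import IPlugin, getPlugins; list(getPlugins(IPlugin))"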

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

on September 26, 2017 03:20 PM

Alibaba Cloud is like Amazon Web Services in that they offer quite similar cloud services. They are part of the Alibaba Group, a huge Chinese conglomerate; for example, the retail component of the Alibaba Group is now bigger than Walmart. Here, we try out their cloud services.

The main reason to select Alibaba Cloud is to get a server running inside China. They also have several data centers outside China, but inside China it is mostly Alibaba Cloud. To get a server running inside mainland China though, you need to go through a registration process where you submit photos of your passport. We do not have time for that, therefore we select the data center closest to China: Hong Kong.

Creating an account on Alibaba Cloud

Click to create an account on Alibaba Cloud (update: no referral link). You get $300 of credit to use within two months, and up to $50 of that credit can go towards launching virtual private servers. Actually, create that account now, before continuing with the rest of this section.

When creating the account, there is the option to verify either your email or your phone number. Let's do the email verification.

Let’s check our mails. Where is that email from Alibaba Cloud? Nothing arrived!?!

The usability problem is evident. When you get to the verification page, the text says We need to verify your email. Please input the number you receive. But Alibaba Cloud has not already sent that email to us; we first need to click on Send to get it to send that email. The text should instead have said something like To use email verification, click Send below, then input the code you receive.

You can pay Alibaba Cloud using either a bank card or Paypal. Let’s try Paypal! Actually, to make use of the $300 credit, it has to be a bank card instead.

We have added a bank card. This bank card has to go through a verification step: Alibaba Cloud makes a small debit (to be refunded later) and you input either the transaction amount or the transaction code (see screenshot below) in order to verify that you do have access to the card.

After a couple of days, you get worried because there is no transaction with the description INTL*?????.ALIYUN.COM in your online banking. What went wrong? And what is this weird transaction with a different description in my bank statement?

Description: INTL*175 LUXEM LU ,44

Debit amount: 0.37€

What is LUXEM, a municipality in Germany, doing on my bank statement? Let’s hope that the processor for Alibaba in Europe is LUXEM, not ALIYUN.

Let’s try as transaction code the number 175. Did not work. Four more tries remaining.

Let’s try the transaction amount, 0.37€. Of course, it did not work; it expects USD, not euros! Three tries remaining.

Let’s google a bit. The Add a payment method documentation on Alibaba Cloud talks only about dollars. A forum post about non-dollar currencies says:

I did not get an authorization charge, therefore there is no X.

Let’s do something really crazy:

We type 0.44 as the transaction amount. IT WORKED!

In retrospect, there is a reference to ",44" in the description; who would have thought that this undocumented detail refers to the dollar amount.

After a week, the micro-transaction of 0.37€ had not been reimbursed. What's more, I was also charged a 2.5€ commission, which I am not getting back either.

We are now ready to use the $300 Free Credit!

Creating a server on Alibaba Cloud

When trying to create a server, you may encounter this website, with a hostname YUNDUN.console.aliyun.com. If you get that, you are in the wrong place. You cannot add your SSH key here, nor can you create a server.

Instead, it should say ECS, Elastic Compute Service.

Here is the full menu for ECS,

Under Networks & Security, there is Key Pairs. Let's add our SSH public key there (the public key only, not the whole key pair).

First of all, we need to select the appropriate data center. Ok, we change to Hong Kong which is listed in the middle.

But how do we add our own SSH key? There is only an option to Create Key Pair!?! Well, let’s create a pair.

Ah, okay. Although the page is called Create Key Pair, we can actually Import an Existing Key Pair.

Now, click back to Elastic Compute S…/Overview, which shows each data center.

If we were to try to create a server in Mainland China, we get

In that case, we would need to send first a photo of our passport or our driver’s license.

Let’s go back, and select Hong Kong.

We are ready to configure our server.

There is an option for either a Starter Package or an Advanced Purchase. The Starter Package is really cool; you can get a server for only $4.50. But the fine print for the $300 credit says that you cannot use the Starter Package here.

So, Advanced Purchase it will be.

There are two pricing models, Subscription and Pay As You Go. Subscription means that you pay monthly, Pay As You Go means that you pay hourly. We go for Subscription.

We select the 1-core, 1GB instance, and we can see the price at $12.29. We also pay separately for the Internet traffic. The cost is shown on an overlay, we still have more options to select before we create the server.

We change the default Security Group to the one shown above. We want our server to be accessible from outside on ports 80 and 443. Port 22 is added by default, along with port 3389 (Remote Desktop on Windows).

We select Ubuntu 16.04.  The order of the operating systems is a bit weird. Ideally, the order should reflect the popularity.

There is an option for Server Guard. Let's try it since it is free. (It requires installing a closed-source package on our Linux server; in the end I did not try it.)

The Ultra Cloud Disk is a network share and it is included in the earlier price. The other option would be to select an SSD. It is nice that we can add up to 16 disks to our server.

We are ready to place the order. It correctly shows $0 and mentions the $50 credit. We select not to auto renew.

Now we pay the $0.

And that’s how we start a server. We have received an email with the IP address but can also find the public IP address from the ECS settings.

Let’s have a look at the IP block for this IP address.

ffs.

How to set up LXD on an Alibaba server

First, we SSH to the server. The command looks like ssh root@_public_ip_address_

It looks like real Ubuntu, with a real Ubuntu Linux kernel. Let's update.

root@iZj6c66d14k19wi7139z9eZ:~# apt update
Get:1 http://mirrors.cloud.aliyuncs.com/ubuntu xenial InRelease [247 kB]
Hit:2 http://mirrors.aliyun.com/ubuntu xenial InRelease

...
Get:45 http://mirrors.aliyun.com/ubuntu xenial-security/universe i386 Packages [147 kB] 
Get:46 http://mirrors.aliyun.com/ubuntu xenial-security/universe Translation-en [89.8 kB] 
Fetched 40.8 MB in 24s (1682 kB/s) 
Reading package lists... Done
Building dependency tree 
Reading state information... Done
105 packages can be upgraded. Run 'apt list --upgradable' to see them.
root@iZj6c66d14k19wi7139z9eZ:~#

We upgraded (apt upgrade) and there was a kernel update. We restarted (shutdown -r now) and the newly updated Ubuntu has the updated kernel. Nice!

Let’s check /proc/cpuinfo,

root@iZj6c66d14k19wi7139z9eZ:~# cat /proc/cpuinfo 
processor : 0
vendor_id : GenuineIntel
cpu family : 6
model : 63
model name : Intel(R) Xeon(R) CPU E5-2680 v3 @ 2.50GHz
stepping : 2
microcode : 0x1
cpu MHz : 2494.224
cache size : 30720 KB
physical id : 0
siblings : 1
core id : 0
cpu cores : 1
apicid : 0
initial apicid : 0
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
bugs :
bogomips : 4988.44
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:

root@iZj6c66d14k19wi7139z9eZ:/proc#

How much free space from the 40GB disk?

root@iZj6c66d14k19wi7139z9eZ:~# df -h /
Filesystem Size Used Avail Use% Mounted on
/dev/vda1   40G 2,2G 36G 6% /
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s add a non-root user.

root@iZj6c66d14k19wi7139z9eZ:~# adduser myusername
Adding user `myusername' ...
Adding new group `myusername' (1000) ...
Adding new user `myusername' (1000) with group `myusername' ...
Creating home directory `/home/myusername' ...
Copying files from `/etc/skel' ...
Enter new UNIX password: 
Retype new UNIX password: 
passwd: password updated successfully
Changing the user information for myusername
Enter the new value, or press ENTER for the default
 Full Name []: 
 Room Number []: 
 Work Phone []: 
 Home Phone []: 
 Other []: 
Is the information correct? [Y/n] 
root@iZj6c66d14k19wi7139z9eZ:~#

Is LXD already installed?

root@iZj6c66d14k19wi7139z9eZ:~# apt policy lxd
lxd:
 Installed: (none)
 Candidate: 2.0.10-0ubuntu1~16.04.2
 Version table:
     2.0.10-0ubuntu1~16.04.2 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-updates/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-updates/main amd64 Packages
         100 /var/lib/dpkg/status
     2.0.2-0ubuntu1~16.04.1 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial-security/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial-security/main amd64 Packages
     2.0.0-0ubuntu4 500
         500 http://mirrors.cloud.aliyuncs.com/ubuntu xenial/main amd64 Packages
         500 http://mirrors.aliyun.com/ubuntu xenial/main amd64 Packages
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s install LXD.

root@iZj6c66d14k19wi7139z9eZ:~# apt install lxd

Now, we can add our user account myusername to the groups sudo, lxd.

root@iZj6c66d14k19wi7139z9eZ:~# usermod -a -G lxd,sudo myusername
root@iZj6c66d14k19wi7139z9eZ:~#

Let’s copy the SSH public key from root to the new non-root account.

root@iZj6c66d14k19wi7139z9eZ:~# cp -R /root/.ssh ~myusername/
root@iZj6c66d14k19wi7139z9eZ:~# chown -R myusername:myusername ~myusername/.ssh/
root@iZj6c66d14k19wi7139z9eZ:~#

Now, log out and log in as the new non-root account.

$ ssh myusername@IP.IP.IP.IP
Welcome to Ubuntu 16.04.3 LTS (GNU/Linux 4.4.0-96-generic x86_64)

* Documentation: https://help.ubuntu.com
 * Management: https://landscape.canonical.com
 * Support: https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.

Welcome to Alibaba Cloud Elastic Compute Service !

myusername@iZj6c66d14k19wi7139z9eZ:~$

We are going to install the ZFS utilities so that LXD can use ZFS as a storage backend.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo apt install zfsutils-linux
...myusername@iZj6c66d14k19wi7139z9eZ:~$

Now we can configure LXD. As we saw earlier, the server has about 35GB free; we allocate 20GB of that to LXD.

myusername@iZj6c66d14k19wi7139z9eZ:~$ sudo lxd init
sudo: unable to resolve host iZj6c66d14k19wi7139z9eZ
[sudo] password for myusername:  ********
Name of the storage backend to use (dir or zfs) [default=zfs]: zfs
Create a new ZFS pool (yes/no) [default=yes]? yes
Name of the new ZFS pool [default=lxd]: lxd
Would you like to use an existing block device (yes/no) [default=no]? no
Size in GB of the new loop device (1GB minimum) [default=15]: 20
Would you like LXD to be available over the network (yes/no) [default=no]? no
Do you want to configure the LXD bridge (yes/no) [default=yes]? yes
Warning: Stopping lxd.service, but it can still be activated by:
lxd.socket

LXD has been successfully configured.
myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
Generating a client certificate. This may take a minute…
If this is your first time using LXD, you should also run: sudo lxd init
To start your first container, try: lxc launch ubuntu:16.04

+------+-------+------+------+------+-----------+
| NAME | STATE | IPV4 | IPV6 | TYPE | SNAPSHOTS |
+------+-------+------+------+------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Okay, we can create now our first LXD container. We are creating just a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc launch ubuntu:16.04 web
Creating web
Retrieving image: rootfs: 100% (6.70MB/s) 
Starting web 
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s see the container,

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc list
+------+---------+---------------------+------+------------+-----------+
| NAME | STATE   | IPV4                | IPV6 | TYPE       | SNAPSHOTS |
+------+---------+---------------------+------+------------+-----------+
| web  | RUNNING | 10.35.87.141 (eth0) |      | PERSISTENT | 0         |
+------+---------+---------------------+------+------------+-----------+
myusername@iZj6c66d14k19wi7139z9eZ:~$

Nice. We get into the container and install a web server.

myusername@iZj6c66d14k19wi7139z9eZ:~$ lxc exec web -- sudo --login --user ubuntu

To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@web:~$

We executed in the web container the command sudo --login --user ubuntu. The container has a default non-root account, ubuntu.

ubuntu@web:~$ sudo apt update
Get:1 http://security.ubuntu.com/ubuntu xenial-security InRelease [102 kB]
Hit:2 http://archive.ubuntu.com/ubuntu xenial InRelease 
...
Reading state information... Done
3 packages can be upgraded. Run 'apt list --upgradable' to see them.
ubuntu@web:~$ sudo apt install nginx
Reading package lists... Done
...
Processing triggers for ufw (0.35-0ubuntu2) ...
ubuntu@web:~$ sudo vi /var/www/html/index.nginx-debian.html 
ubuntu@web:~$ logout
myusername@iZj6c66d14k19wi7139z9eZ:~$ curl 10.35.87.141
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx running in an LXD container on Alibaba Cloud!</title>
<style>
 body {
 width: 35em;
 margin: 0 auto;
 font-family: Tahoma, Verdana, Arial, sans-serif;
 }
</style>
</head>
<body>
<h1>Welcome to nginx running in an LXD container on Alibaba Cloud!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
myusername@iZj6c66d14k19wi7139z9eZ:~$

Obviously, the web server in the container is not yet accessible from the Internet. We need to add iptables rules to forward the connection appropriately.

Alibaba Cloud gives two IP addresses per server: one is the public IP address and the other is a private IP address (172.[16-31].*.*). The eth0 interface of the server has the private IP address. This information is important for the iptables rule below.

myusername@iZj6c66d14k19wi7139z9eZ:~$ PORT=80 PUBLIC_IP=my172.IPAddress CONTAINER_IP=10.35.87.141 sudo -E bash -c 'iptables -t nat -I PREROUTING -i eth0 -p TCP -d $PUBLIC_IP --dport $PORT -j DNAT --to-destination $CONTAINER_IP:$PORT -m comment --comment "forward to the Nginx container"'
myusername@iZj6c66d14k19wi7139z9eZ:~$

Let’s load up our site using the public IP address from our own computer:

And that’s it!

Conclusion

Alibaba Cloud is yet another provider for cloud services. They are big in China, actually the biggest in China. They are trying to expand to the rest of the world. There are several teething problems, probably arising from the fact that the main website is in Mandarin and there is no infrastructure for immediate translation to English.

On HN there was a sort of relaunch a few months ago. It appears they are interested in getting international users. What they need is people who attend immediately to issues as they are discovered.

If you want to learn more about LXD, see https://stgraber.org/2016/03/11/lxd-2-0-blog-post-series-012/

 

Update #1

After a day of running a VPS on Alibaba Cloud, I received this email.

From: Alibaba Cloud
Subject: 【Immediate Attention Needed】Alibaba Cloud Fraud Prevention

We have detected a security risk with the card you are using to make purchases. In order to protect your account, please provide your account ID and the following information within one working day via your registered Alibaba Cloud email to compliance_support@aliyun.com for further investigation. 

If you are using a credit card as your payment method, please provide the following information directly. Please provide clear copies of: 

1. Any ONE of the following three forms of government-issued photo identification for the credit card holder or payment account holder of this Alibaba Cloud account: (i) National identification card; (ii) Passport; (iii) Driver's License. 
2. A clear copy of the front side of your credit card in connection with this Alibaba Account; (Note: For security reasons, we advise you to conceal the middle digits of your card number. Please make sure that the card holder's name, card issuing bank and the last four digits of the card number are clearly visible). 
3. A clear copy of your card's bank statement. We will process your case within 3 working days of receiving the information listed above. NOTE: Please do not provide information in this ticket. All the information needed should be sent to this email compliance_support@aliyun.com.

If you fail to provide all the above information within one working day , your instances will be shut down. 

Best regards, 

Alibaba Cloud Customer Service Center

What this means is that update #2 has to happen now.

 

Update #2

Newer versions of LXD have a utility called lxd-benchmark. This utility spawns, starts and stops containers, and can be used to get an idea of how efficient a server may be. I suppose it is primarily used to figure out whether there is a regression in the LXD code. Let's see it in action here anyway; the clock is ticking.

The new LXD is in a PPA at https://launchpad.net/~ubuntu-lxc/+archive/ubuntu/lxd-stable. Let's install it on Alibaba Cloud.

sudo apt-get install software-properties-common
sudo add-apt-repository ppa:ubuntu-lxc/lxd-stable
sudo apt update
sudo apt upgrade              # Now LXD will be upgraded.
sudo apt install lxd-tools    # Now lxd-benchmark will be installed.

Let’s see the options for lxd-benchmark.

Usage: lxd-benchmark spawn [--count=COUNT] [--image=IMAGE] [--privileged=BOOL] [--start=BOOL] [--freeze=BOOL] [--parallel=COUNT]
 lxd-benchmark start [--parallel=COUNT]
 lxd-benchmark stop [--parallel=COUNT]
 lxd-benchmark delete [--parallel=COUNT]

--count (= 100)
 Number of containers to create
 --freeze (= false)
 Freeze the container right after start
 --image (= "ubuntu:")
 Image to use for the test
 --parallel (= -1)
 Number of threads to use
 --privileged (= false)
 Use privileged containers
 --report-file (= "")
 A CSV file to write test file to. If the file is present, it will be appended to.
 --report-label (= "")
 A label for the report entry. By default, the action is used.
 --start (= true)
 Start the container after creation

First, we need to spawn new containers that we can later start, stop or delete. Ideally, I would expect the terminology to be launch instead of spawn, to keep in sync with the existing container management commands.

Second, there are defaults for each command, as shown above. There is no indication yet as to how much RAM you need to spawn the default 100 containers; obviously it would be more than the 1GB of RAM we have on this server. Disk space would be fine because of copy-on-write with ZFS: a newly created LXD container does not use additional space, since all containers are clones based on the same image. Perhaps after a day, when unattended-upgrades kicks in, each container would use some space for any security updates that get applied automatically.
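
Once a few containers have been spawned, you can peek at the pool to see how much space they actually consume (a quick check; the pool is named lxd, as configured earlier):

sudo zfs list -r lxd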

Let’s try it out with 3 containers. We have stopped and deleted the original web container that we created earlier in this tutorial (lxc stop web; lxc delete web).

$ lxd-benchmark spawn --count 3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 3
 Batch size: 1
 Remainder: 0

[Sep 27 17:31:41.074] Importing image into local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 17:32:12.825] Batch processing start
[Sep 27 17:32:37.614] Processed 1 containers in 24.790s (0.040/s)
[Sep 27 17:32:42.611] Processed 2 containers in 29.786s (0.067/s)
[Sep 27 17:32:49.110] Batch processing completed in 36.285s
$ lxc list --columns ns4tS
+-------------+---------+---------------------+------------+-----------+
| NAME        | STATE   | IPV4                | TYPE       | SNAPSHOTS |
+-------------+---------+---------------------+------------+-----------+
| benchmark-1 | RUNNING | 10.35.87.252 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-2 | RUNNING | 10.35.87.115 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| benchmark-3 | RUNNING | 10.35.87.72 (eth0)  | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
| web         | RUNNING | 10.35.87.141 (eth0) | PERSISTENT | 0         |
+-------------+---------+---------------------+------------+-----------+
$

We created three extra containers, named benchmark-?, and got them started. They were launched in three batches, which means that they were started one after another, not in parallel.

The total time on this server, when the storage backend is zfs, was 36.2 seconds. The number in parentheses in lines like Processed 2 containers in 29.786s (0.067/s) is the cumulative rate, i.e. containers processed divided by the elapsed time.

The total time on this server, when the storage backend was dir, was 68.6 seconds instead.

Let’s stop them!

$ lxd-benchmark stop
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:06:08.822] Stopping 3 containers
[Sep 27 18:06:08.822] Batch processing start
[Sep 27 18:06:09.680] Processed 1 containers in 0.858s (1.165/s)
[Sep 27 18:06:10.543] Processed 2 containers in 1.722s (1.162/s)
[Sep 27 18:06:11.406] Batch processing completed in 2.584s
$

With dir, it was around 2.4 seconds.

And then delete them!

$ lxd-benchmark delete
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

[Sep 27 18:07:12.020] Deleting 3 containers
[Sep 27 18:07:12.020] Batch processing start
[Sep 27 18:07:12.130] Processed 1 containers in 0.110s (9.116/s)
[Sep 27 18:07:12.224] Processed 2 containers in 0.204s (9.814/s)
[Sep 27 18:07:12.317] Batch processing completed in 0.297s
$

With dir, it was 2.5 seconds.

Let’s create three containers in parallel.

$ lxd-benchmark spawn --count=3 --parallel=3
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 3
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 3
 Remainder: 0

[Sep 27 18:11:01.570] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:11:01.570] Batch processing start
[Sep 27 18:11:11.574] Processed 3 containers in 10.004s (0.300/s)
[Sep 27 18:11:11.574] Batch processing completed in 10.004s
$

With dir, it was 58.7 seconds.

Let’s push it further and try to hit some memory limits! First, we delete them all, and launch 5 in parallel.

$ lxd-benchmark spawn --count=5 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 5
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 1
 Batch size: 5
 Remainder: 0

[Sep 27 18:13:11.171] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:13:11.172] Batch processing start
[Sep 27 18:13:33.461] Processed 5 containers in 22.290s (0.224/s)
[Sep 27 18:13:33.461] Batch processing completed in 22.290s
$

So, 5 containers can start in 1GB of RAM, in just 22 seconds.

We also tried the same with the dir storage backend, and got

[Sep 27 17:24:16.409] Batch processing start
[Sep 27 17:24:54.508] Failed to spawn container 'benchmark-5': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-5/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:11.129] Failed to spawn container 'benchmark-3': Unpack failed, Failed to run: unsquashfs -f -d /var/lib/lxd/storage-pools/default/containers/benchmark-3/rootfs -n -da 99 -fr 99 -p 1 /var/lib/lxd/images/03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee.rootfs: . 
[Sep 27 17:25:35.906] Processed 5 containers in 79.496s (0.063/s)
[Sep 27 17:25:35.906] Batch processing completed in 79.496s

Out of the five containers, it managed to create three (benchmark-1, benchmark-2 and benchmark-4). The reason is that unsquashfs needs to run to expand the image for each new container, and that process uses a lot of memory. When using zfs, the image is unpacked only once and new containers are cloned from it, so the same memory pressure does not occur.

Let’s delete the five containers (storage backend: zfs):

[Sep 27 18:18:37.432] Batch processing completed in 5.006s

Let’s close the post with

$ lxd-benchmark spawn --count=10 --parallel=5
Test environment:
 Server backend: lxd
 Server version: 2.18
 Kernel: Linux
 Kernel architecture: x86_64
 Kernel version: 4.4.0-96-generic
 Storage backend: zfs
 Storage version: 0.6.5.6-0ubuntu16
 Container backend: lxc
 Container version: 2.1.0

Test variables:
 Container count: 10
 Container mode: unprivileged
 Startup mode: normal startup
 Image: ubuntu:
 Batches: 2
 Batch size: 5
 Remainder: 0

[Sep 27 18:19:44.706] Found image in local store: 03c2fa6716b5f41684457ca5e1b7316df520715b7fea0378f9306d16fdc646ee
[Sep 27 18:19:44.706] Batch processing start
[Sep 27 18:20:07.705] Processed 5 containers in 22.998s (0.217/s)
[Sep 27 18:20:57.114] Processed 10 containers in 72.408s (0.138/s)
[Sep 27 18:20:57.114] Batch processing completed in 72.408s

We launched 10 containers in two batches of five containers each. The lxd-benchmark command completed successfully, in just 72 seconds. However, after the command completed, each container would start up, get an IP and get working. We hit the memory limit when the second batch of five containers were starting up. The monitor on the Alibaba Cloud management console showed 100% CPU utilization, and it was not possible to access the server over SSH. Let’s delete the server from the management console and wind down this trial of Alibaba Cloud.

lxd-benchmark is quite useful and can be used to get a practical understanding of how many containers a server can handle, and much more.
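
Since the help output above lists --report-file and --report-label, a natural next step (sketched here, with a made-up file name) is to let lxd-benchmark collect the numbers for you instead of copying them from the terminal:

$ lxd-benchmark spawn --count=10 --parallel=5 --report-file=bench.csv --report-label=zfs-spawn

According to the help text, an existing file is appended to, so the zfs and dir runs could be compared side by side from a single CSV.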

Update #3

I just restarted the server from the management console and connected using SSH.

Here are the ten containers from Update #2,

$ lxc list --columns ns4
+--------------+---------+------+
| NAME         | STATE   | IPV4 |
+--------------+---------+------+
| benchmark-01 | STOPPED |      |
+--------------+---------+------+
| benchmark-02 | STOPPED |      |
+--------------+---------+------+
| benchmark-03 | STOPPED |      |
+--------------+---------+------+
| benchmark-04 | STOPPED |      |
+--------------+---------+------+
| benchmark-05 | STOPPED |      |
+--------------+---------+------+
| benchmark-06 | STOPPED |      |
+--------------+---------+------+
| benchmark-07 | STOPPED |      |
+--------------+---------+------+
| benchmark-08 | STOPPED |      |
+--------------+---------+------+
| benchmark-09 | STOPPED |      |
+--------------+---------+------+
| benchmark-10 | STOPPED |      |
+--------------+---------+------+

The containers are in the stopped state. That is, they do not consume memory. How much free memory is there?

$ free
              total        used        free      shared  buff/cache   available
Mem:        1016020       56192      791752        2928      168076      805428
Swap:             0           0           0

About 792MB free memory.

There is not enough memory to get them all running at the same time. It is good that they come back in the stopped state after a reboot, so that you can fix things up.
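
One way to work within that limit (just a suggestion, not something tried in this post) is to start only a few of the stopped containers and watch the memory as you go; lxc start accepts several names at once:

$ lxc start benchmark-01 benchmark-02 benchmark-03
$ free -m

If free -m still shows comfortable headroom, start a couple more; otherwise stop one before the kernel's OOM killer makes the choice for you.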

on September 26, 2017 01:38 PM

Plasma Mobile and Convergence

Sebastian Kügler

Convergence, or the ability to serve different form factors from the same code base, is an often discussed concept. Convergence is at the heart of Plasma‘s design philosophy, but what does this actually mean for how apps are developed? What’s in it for the user? Let’s have a look!

Plasma — same code, different devices

First, let’s have a look at different angles of “Convergence”. It can actually mean different things, and there is overlap between these. Depending on who you ask, convergence could mean any of the following:

  • Being able to plug a monitor, keyboard and mouse into a smartphone and use it as a full-fledged desktop replacement
  • Develop an application that works on a phone as well as on a desktop
  • Create different device user interfaces from the same code base

Convergence, in the broadest sense, has been one of the design goals of Plasma when we started creating it. When we work on Plasma, we ultimately expect components to run on a wide variety of target devices, we refer to that concept as the device spectrum.

Alex, one of Plasma’s designers, has created a visual concept for a convergent user interface that gives an impression of how a fully convergent Plasma could look to the user:

Input Methods and Screen Characteristics

Technically, there are a few aspects of convergence, the most important being: input methods, for example mouse, keyboard, touchscreens or combinations of those, and screen size (both physical dimensions, portrait vs. landscape layout and pixel density).

Touchscreen support is one aspect when it comes to running KDE software on a mobile device or within Plasma Mobile. Touchscreens are not specific to phones any more, however, so making an app or a Plasma component ready for touchscreen usage also benefits people who run Plasma on their convertible laptops, for example. Another big factor is that the app needs to work well on the screen of a smartphone; this means support for high-dpi screens as well as a layout that presents the necessary controls in a way that is functional, attractive and user-friendly. With the Kirigami toolkit, which builds on top of QtQuick, we develop apps that work well on both target devices. From a more general point of view, KDE has always developed apps in a cross-platform way, so portability to other platforms is very much at the heart of our codebase.

The Kirigami toolkit, which offers a set of high-level application flow controls for QtQuick applications, achieves exactly that: it allows developers to build responsive apps that adapt to screen characteristics and input methods.

(As an aside, there’s the case for Kirigami also supporting Android. Developing an app specifically for usage in Plasma may be easier, but it also limits its reach. Imagine an app running fine on your laptop, but also on your smartphone, be it Android or driven by Plasma Mobile (in the future). That would totally rock, and it would mean a target audience in the billions, not millions. Conversely, providing the technology to create such apps decreases the relative investment compared to the target audience, making technologies such as QtQuick and Kirigami an excellent choice for developers who want to maximize their target audience.)

Plasma Mobile vs. Plasma Desktop

Plasma Mobile is being developed in tandem with the popular Plasma desktop; in fact it shares more than 90% of the code with it. This means that work done on either of the two, mobile and desktop, often benefits the other, and that there’s a large degree of compatibility between the two. The result is a system that feels the same across different devices, but makes use of the special capabilities of a given device, and supports different ways of using the software. On the development side, this means huge gains in terms of productivity and quality: a wider set of usage scenarios and having the code running on more machines mean that it gets more real-world testing and bugs get shaken out quicker.

Who cares, anyway?

Is convergence something that users actually want? I think so. It takes a learning curve for users, and advancements in technology to bring this to the market: you need rather powerful hardware, the right connectors, and the right hardware components, so it’s not an easy end-goal. But the path to convergence already bears huge benefits, as it means more efficient development, more consistency across different form factors and higher quality code.

Whether or not users care is only relevant up to a point. Arguably, the biggest benefit of convergence lies in the efficiency of the development process, especially when multiple devices are involved. It doesn’t actually matter all that much whether users are going to plug their mouse and keyboard into a phone and use it as a desktop device. Already today, users expect touchscreens to just work, even on laptops; users already expect a convertible to be usable when the keyboard is flipped away or unplugged; users already expect to plug a 4K display into their 1024×768 resolution laptop and have the UI become neither unreadable nor comically large.

In short: There really is no way around a large degree of convergence in Plasma (and similar products).

on September 26, 2017 11:12 AM

Artful Aardvark (17.10) Beta 2 images are now available for testing.

The Kubuntu team will be releasing 17.10 in October. The final Beta 2 milestone will be available on September 28.

This is the first spin in preparation for the Beta 2 pre-release. Kubuntu Beta pre-releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Kubuntu Beta pre-releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Kubuntu, KDE, and Qt developers

Getting Kubuntu 17.10 Beta 2:

To upgrade to Kubuntu 17.10 pre-releases from 17.04, run sudo do-release-upgrade -d from a command line.

Download a Bootable image and put it onto a DVD or USB Drive via the download link at http://iso.qa.ubuntu.com/qatracker/milestones/382/builds. This is also the direct link to report your findings and any bug reports you file.

See our release notes: https://wiki.ubuntu.com/ArtfulAardvark/Beta2/Kubuntu

Please report your results on the Release tracker.

on September 26, 2017 03:17 AM

September 25, 2017

The Ubuntu desktop team and a lot of other people from the Ubuntu community are gathering for the week in New York for the Ubuntu Rally. It’s time to put the final touches and bug fixes into Ubuntu artful, which will soon turn into Ubuntu 17.10. As you probably know if you follow this blog series, it will feature GNOME Shell by default, with slight modifications to ease this new user experience and adapt it to our audience. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 14: Badges and progress bar on Ubuntu Dock

One of the latest things we wanted to work on, as highlighted in our previous posts, is the notification experience for new emails and downloads in the Shell. We already ship the KStatusNotifier extension for application indicators, but we need a way to signal the user (even if you are not looking at the screen when it happens) about new emails, IMs or download/copy progress.

Andrea stepped up on this and worked with Dash to Dock upstream to implement the unity API for this. Working with them, as usual, was pleasing and we got the green flag that it’s going to merge to master, with possibly some tweaks, which will make this work available to every Dash to Dock users! It means that after this update, Thunderbird is handily showing the number of unread emails you have in your inbox, thanks to thunderbird-gnome-support that we seeded back with Sébastien.

Ubuntu Dock with number of unread emails

Similarly, we now have progress bar support for Nautilus, Firefox downloads and every application using that API, so the dock gets updated on transactional actions.

Ubuntu Dock with progress bar during a download with Firefox

And we are all done on our changes to adapt GNOME Shell to our targeted audience! Meanwhile Marco is working on HiDPI (and SIM cards…) to deliver a fantastic fractional scaling experience.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Let’s see how many bugs we can squash. We will of course update you on the slight readjustments we are planning to make during this week at the Ubuntu Rally and for the release. Let’s first target the incoming beta, which will enable you to test all of this.

on September 25, 2017 09:35 PM

September 24, 2017

APT 1.5 is out

Julian Andres Klode

APT 1.5 is out, almost 3 months after the release of 1.5 alpha 1, and almost six months after the release of 1.4 on April 1st. This release cycle was unusually short, as 1.4 was both the stretch release series and the zesty release series, and we waited for the latter of these releases before we started 1.5. In related news, 1.4.8 hit stretch-proposed-updates today, and is waiting in the unapproved queue for zesty.

This release series moves https support from apt-transport-https into apt proper, bringing with it support for https:// proxies, and support for proxy auto-detect scripts that return http, https, and socks5h proxies for both http and https.
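
For illustration, configuring such a proxy uses the usual Acquire options, now for the https method as well; the proxy hosts below are made up, and the socks5h line assumes the new SOCKS support can also be set statically rather than only returned from an auto-detect script:

// /etc/apt/apt.conf.d/80proxy (example values only)
Acquire::https::Proxy "https://proxy.example.com:3128/";
Acquire::http::Proxy "socks5h://localhost:9050/";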

Unattended updates and upgrades now work better: The dependency on network-online was removed and we introduced a meta wait-online helper with support for NetworkManager, systemd-networkd, and connman that allows us to wait for network even if we want to run updates directly after a resume (which might or might not have worked before, depending on whether update ran before or after network was back up again). This also improves a boot performance regression for systems with rc.local files:

The rc.local.service unit specified After=network-online.target, and login stuff was After=rc.local.service, and apt-daily.timer was Wants=network-online.target, causing network-online.target to be pulled into the boot and the rc.local.service ordering dependency to take effect, significantly slowing down the boot.

An earlier, less intrusive variant of that fix is in 1.4.8: it just moves the network-online.target Want/After from apt-daily.timer to apt-daily.service, so most boots are uncoupled now. I hope we get the full solution into stretch in a later point release, but we should gather some experience first before discussing this with the release team.

Balint Reczey also provided a patch to increase the timeout before killing the daily upgrade service to 15 minutes, to actually give unattended-upgrades some time to finish an in-progress update. Honestly, I’d have thought the machine hung up and force-rebooted it after 5 seconds already. (This patch is also in 1.4.8.)

We also made sure that unreadable config files no longer cause an error, but only a warning, as that was sort of a regression from previous releases; and we added documentation for /etc/apt/auth.conf, so people actually know the preferred way to place sensitive data like passwords (and can make their sources.list files world-readable again).
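
As a reminder of the netrc-style format (placeholder values only; the new documentation mentioned above is the authoritative reference), an entry in /etc/apt/auth.conf looks roughly like:

machine private.repo.example.org
login apt-user
password s3cret

The file itself should stay readable by root only, which is the whole point of moving credentials out of sources.list.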

We also fixed apt-cdrom to support discs without MD5 hashes for Sources (the Files field), and re-enabled support for udev-based detection of cdrom devices which was accidentally broken for 4 years, as it was trying to load libudev.so.0 at runtime, but that library had an SONAME change to libudev.so.1 – we now link against it normally.

Furthermore, if certain information in Release files change, like the codename, apt will now request confirmation from the user, avoiding a scenario where a user has stable in their sources.list and accidentally upgrades to the next release when it becomes stable.

Paul Wise contributed patches to allow configuring the apt-daily intervals more easily – apt-daily is invoked twice a day by systemd but has more fine-grained internal timestamp files. You can now specify the intervals in seconds, minutes, hours, and day units, or specify “always” to always run (that is, up to twice a day on systemd, once per day on non-systemd platforms).
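
For illustration, with this change the familiar Periodic options in a file like /etc/apt/apt.conf.d/10periodic could be written with explicit units; treat the exact suffix spelling below as an assumption and check the apt.systemd.daily script for the accepted values:

// run apt update at most once a day; "always" runs on every timer activation
APT::Periodic::Update-Package-Lists "1d";
APT::Periodic::Unattended-Upgrade "always";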

Development for the 1.6 series has started, and I intend to upload a first alpha to unstable in about a week, removing the apt-transport-https package and enabling compressed index files by default (saving space, a lot of space, at not much performance cost thanks to lz4). There will also be some small clean-ups in there, but I don’t expect any life-changing changes for now.

I think our new approach of uploading development releases directly to unstable instead of parking them in experimental is working out well. Some people are confused why alpha releases appear in unstable, but let me just say one thing: These labels basically just indicate feature-completeness, and not stability. An alpha is just very likely to get a lot more features, a beta is less likely (all the big stuff is in), and the release candidates just fix bugs.

Also, we now have 3 active stable series: The 1.2 LTS series, 1.4 medium LTS, and 1.5. 1.2 receives updates as part of Ubuntu 16.04 (xenial), 1.4 as part of Debian 9.0 (stretch) and Ubuntu 17.04 (zesty); whereas 1.5 will only be supported for 9 months (as part of Ubuntu 17.10). I think the stable release series are working well, although 1.4 is a bit tricky being shared by stretch and zesty right now (but zesty is history soon, so …).


Filed under: Debian, Ubuntu
on September 24, 2017 07:32 PM

September 23, 2017

Adventurous users and developers running the Artful development release can now also test the beta version of Plasma 5.11. This is experimental and can possibly kill kittens!

Bug reports on this beta go to https://bugs.kde.org, not to Launchpad.

The PPA comes with a WARNING: Artful will ship with Plasma 5.10.5, so please be prepared to use ppa-purge to revert changes. Plasma 5.11 will ship too late for inclusion in Kubuntu 17.10, but should be available via the main backports PPA as soon as is practical after release day, October 19th, 2017.

Read more about the beta release: https://www.kde.org/announcements/plasma-5.10.95.php

If you want to test on Artful: sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt-get update && sudo apt full-upgrade -y

The purpose of this PPA is testing, and bug reports go to bugs.kde.org.

on September 23, 2017 09:36 PM

September 22, 2017

The Evolution of Plasma Mobile

Sebastian Kügler

Plasma MobilePlasma Mobile

Back around 2006, when the Plasma project was started by Aaron Seigo and a group of brave hackers (among which, yours truly) we wanted to create a user interface that is future-proof. We didn’t want to create something that would only run on desktop devices (or laptops), but a code-base that grows with us into whatever the future would bring. Mobile devices were already getting more powerful, but would usually run entirely different software than desktop devices. We wondered why. The Linux kernel served as a wonderful example. Linux runs on a wide range of devices, from super computers to embedded systems, you would set it up for the target system and it would run largely without code changes. Linux architecture is in fact convergent. Could we do something similar at the user interface level?

Plasma Netbook

In 2007, Asus introduced the Eee PC, a small, inexpensive laptop. Netbooks proved to be all the rage at that point, so around 2009, we created Plasma Netbook, proving for the first time that we could actually serve different device user interfaces from the same code-base. There was a decent amount of code-sharing, but Plasma Netbook also helped us identifying areas in which we wanted to do better.

Plasma Mobile (I)

Come 2010, we got our hands on an N900 by Nokia, running Maemo, a mobile version of Linux. Within a week, during a sprint, we worked on a proof-of-concept mobile interface of Plasma:

Well, Nokia-as-we-knew-it is dead now, and Plasma never materialized on Nokia devices.

Plasma Active

Plasma Active was built as a successor to the early prototypes, and was our first attempt at creating something for end-users. Conceived in 2011, the idea was not just to produce a simple Plasma user interface for a tablet device, but also to deliver on a range of novel ideas for interaction with the device, closely related to the semantic desktop: interlinked documents, contacts, sharing built right into the core; not just a “dumb” platform to run apps on, but a holistic system that allows users to manage their digital life on the fly. While Plasma Active had great promise and a lot of innovative potential, it never materialized for end-users, in part due to lack of interest from both the KDE community itself and from people on the outside. This doesn’t mean that the work put into it was lost; thanks to a convergent code-base, many improvements made primarily with Plasma Active in mind have improved Plasma for all its users and continue to do so today. In many ways, Active proved valuable as a playground, as a clean slate for where we want to take the technology and how we can improve our development process. It’s not a surprise that Plasma 5 today is developed in a process very similar to how we approached Plasma Active back then.

Plasma Mobile (II)

Learning from the Plasma Active project, in 2015 we regrouped and started to build a rather simple smartphone user interface, along with a reference software stack that would allow us not only to develop Plasma Mobile further, but also to run on a growing number of devices. Plasma Mobile (II)’s goal wasn’t to get the most innovative of interfaces out, but to create a bread-and-butter platform, a base to develop applications on. From a technology point of view, Plasma Mobile is actually very small: it shares approximately 95% of the code with its desktop companion, and widgets and, increasingly, applications are interchangeable between the two.

Plasma Mobile (in any shape or form) has never been this close to actually making it into the hands and pockets of end users. Through a collaboration project with Purism, a company bringing privacy and software freedom to end-users, we may create the first Plasma phone for end users and have it on the market as soon as January 2019. If you want to support this project, the crowdfunding campaign has just passed the 40% mark, and you can be part of it — either by joining the development crew, or by pre-ordering a device and thereby funding the development.

on September 22, 2017 03:19 PM

September 21, 2017

S10E29 – Adamant Terrible Hammer - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This is Le Crossover Ubuntu Mashup Podcast thingy recorded live at UbuCon Europe in Paris, France.

It’s Season Ten Episode Twenty-Nine of the Ubuntu Podcast! Alan Pope, Martin Wimpress, Marius Quabeck, Max Kristen, Rudy and Tiago Carrondo are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 21, 2017 08:10 PM

Name rejected

Ante Karamatić

After 8-9 days and a follow-up email, today I received a notification about what is happening with my application. I am passing it on in full:

On 12.09.2017 the name reservation was sent to TS Zagreb (e-Tvrtka), and the documentation and the RZ form were sent by post to Hitro.hr Zagreb.
The paper documentation was submitted to the court on 13.09.2017. The name reservation did not go through. The notification was collected from the court on 18.09.2017 (Hitro.hr – Zagreb).
The notification arrived by post at Hitro.hr – Šibenik today (21.09.2017). I called you on your mobile so that you could come and collect the confirmation, but nobody is answering.
I am therefore informing you that you can collect the notification at HITRO.HR Šibenik.

So, e-Tvrtka is one big nothing; a plain lie and a fraud. Documents are still being sent by post. To be clear, this is not the fault of the clerks, who were accommodating. This is a problem with how the state, that is the Government, is organised. The clerks are victims here just as much as we are, we who are trying to create something.

So, the name has been rejected.

In the Republic of Croatia you need to wait 10 days to find out whether you can start a company with a particular name. In other countries these things do not even exist; companies are started within a single day. If we want to be fertile ground for entrepreneurship, hitro.hr should be abolished (it is completely pointless) and modern technology introduced; algorithms can check names, and this should be nothing more than a web page. No protocols, no payments, no standing in line.

on September 21, 2017 03:47 PM

The Ubuntu Community Council election has begun and ballots sent out to all Ubuntu Members. Voting closes September 27th at end of day UTC.

The following candidates are standing for 7 seats on the council:

Please contact the community-council@lists.ubuntu.com list if you are an Ubuntu Member but did not receive a ballot. Voting instructions were sent to the public address defined in Launchpad, or your launchpad_id@ubuntu.com address if not. Please also make sure you check your spam folder first.

We’d like to thank all the candidates for their willingness to serve in this capacity, and members for their considered votes.

Originally posted to the ubuntu-news-team mailing list on Tue Sep 12 14:22:49 UTC 2017 by Mark Shuttleworth

on September 21, 2017 03:30 PM

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE.
I encourage all of you to please consider a donation at https://www.kde.org/fundraisers/randameetings2017/

on September 21, 2017 12:54 PM

September 20, 2017

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") %h | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config`) for details.

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server
on September 20, 2017 09:39 PM

Namespaced file capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When a file with associated capabilities is executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You can be a non-root user with CAP_SYS_ADMIN, for instance. This can happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), and dropping the capabilities you don’t want and changing your uid.  Or, it can happen by a non-root user executing a file with file capabilities.  In order to append such a capability to a file, you require the CAP_SETFCAP capability.
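
A quick illustration of the second mechanism on a single-user-namespace system (ping is the classic example; on many distributions it is shipped setuid or already carries this capability, so this is purely for demonstration):

$ sudo setcap cap_net_raw+ep /bin/ping
$ getcap /bin/ping
/bin/ping = cap_net_raw+ep

Any user who executes that binary now ends up with CAP_NET_RAW in the resulting task, without being root and without a setuid bit.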

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can “have privilege” with respect to files they own. However, if we allow them to write a file capability on one of their files, then they can execute that file as an unprivileged user on the host, thereby gaining that privilege. This violates the third user namespace requirement, and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn’t able to handle the failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on centos.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container’s root id (0). (If uid 0 is not mapped, then file capabilities cannot be assigned.) This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.

 

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


on September 20, 2017 03:37 PM

Now that GNOME 3.26 is released, available in Ubuntu artful, and final GNOME Shell UI is confirmed, it’s time to adapt our default user experience to it. Let’s discuss how we worked with dash to dock upstream on the transparency feature. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 13: Adaptive transparency for Ubuntu Dock

GNOME Shell’s excellent new 3.26 release thus ships some dynamic panel transparency by default. If no window is next to the top panel, the bar itself is translucent. If any window is next to it, the panel becomes opaque. This feature is highlighted in the GNOME 3.26 release notes. As we already discussed in a previous blog post, it means that the Ubuntu Dock default opacity level doesn’t fit very well with the transparent top panel on an empty desktop.

Previous default Ubuntu Dock transparency

Even if there were some discussions within GNOME about keeping or reverting this dynamic transparency feature, we reached out to the Dash to Dock folks during the 3.25.9x period to be prepared. Some excellent discussions then started on the pull request, which was already rolling full speed ahead.

The first idea was to have fully independent dynamic transparency: one status for the top panel, and another one for the Dock itself. However, this gives a somewhat weird user experience after playing with it a little bit:

It feels like there is too much flickering, with both parts of the UI behaving independently. The idea I raised upstream was thus to consider all Shell UI (which, in the Ubuntu session, is the top panel and Ubuntu Dock) as a single entity, so their opacity status is linked, as one UI element. François agreed, having had the same idea in mind, and implemented it. The result is way more natural:

Those options are implemented in the Dash to Dock settings panel, and we just set this last behavior as the default in Ubuntu Dock.
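
If you want to experiment with the behaviour yourself before the packages land, the options should be reachable through the Dash to Dock GSettings schema; the key names below are my guesses, so list the schema first rather than trusting them:

$ gsettings list-keys org.gnome.shell.extensions.dash-to-dock | grep -i -e transparen -e opacity
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode 'DYNAMIC'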

We made sure that this option is working well with the various dock settings we expose in the Settings application:

In particular, you can see that intelli-hide is working as expected: dock opacity changing while the Dock is vanishing and when forcing it to show up again, it’s at the maximum opacity that we set.

The default, with no application next to the panel or dock, now looks quite good:

Default empty Ubuntu artful desktop

The best part is the following: as we are getting closer to release, and there is still a little bit of work upstream to get everything merged in Dash to Dock itself (options and settings UI that don’t impact Ubuntu Dock), Michele has prepared a cleaned-up branch that we can cherry-pick from directly into our ubuntu-dock branch, and that they will keep compatible with master for us! Now that the Feature Freeze and UI Freeze exceptions have been approved, the Ubuntu Dock package is currently building in the artful repository, alongside other fixes and some shortcut improvements.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

It’s really a pleasure to work with Dash to Dock upstream, I’m using this blog opportunity to thank them again for everything they do and the cooperation they ease out for our use case.

on September 20, 2017 11:25 AM

Last week, myself and a number of the OpenStack Charms team had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross-project discussions, with the last three days focused on project-specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators.  What was immediately evident was this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade.  Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to-date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running. This would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it has never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.
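
As a concrete illustration of managing only the delta: with defaults in code, the policy file a charm has to ship can shrink to just the rules an operator actually changes, something like the snippet below (the rule name is only an example borrowed from Keystone's defaults, and the override shown is hypothetical):

{
    "identity:list_projects": "rule:admin_required"
}

Every rule not listed simply falls back to its in-code default.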

Deployment (SIG)

During the first two days, some cross-deployment-tool discussions were held on a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects, so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens, etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time. The Keystone v2 API is officially deprecated, so we had a discussion around the approach for switching the default API deployed by the charms going forwards; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact of services built around v2 based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI was dropped from Keystone); The preferred format is now Fernet so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08.  At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch.  This is reflected in the promulgated charms in the Juju charm store as well.  Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model which is not that elegant, but would work.  Maintaining more branches means more resource effort for cherry-picks and reviews which is not feasible with the currently amount of time the development team has for these activities so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on underlying control plane infrastructure as daemons drop and re-connect to message and database services potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up with focus on how we could implement a feature which batched up restarts of services into chunks based on a user provided configuration option.

You can read the full details in the proposed specification for this work.

We also had some good conversation around how unit level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe its causing problems) without having to restart services across all units to support this.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region-specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DCs.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements made with good baseline metrics on charm hook execution times on reference hardware deployments so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – so expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle; specifically we hacked on:

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


on September 20, 2017 10:51 AM

Crickets

Ante Karamatić

 

Waiting

Hitro.hr says that a name reservation is handled within 3 (three) working days. The request was submitted on Tuesday, 12.9. Today is 20.9. All you can hear is crickets.

on September 20, 2017 08:05 AM

APRX On Ubuntu Repository

Mohamad Faizul Zulkifli

Good news! I just noticed that the aprx package is already listed in the Ubuntu repository.



Aprx is a software package designed to run on any POSIX platform (Linux/BSD/Unix/etc.) and act as an APRS Digipeater and/or Internet Gateway. Aprx is able to support most APRS infrastructure deployments, including single stand-alone digipeaters, receive-only Internet gateways, full RF-gateways for bi-directional routing of traffic, and multi-port digipeaters operating on multiple channels or with multiple directional transceivers.
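
Since it is in the archive, installing it should be the usual one-liner (availability may vary between Ubuntu releases):

$ sudo apt install aprx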

For more info visit:-



If you want to know more about aprs and ham radio visit:-







on September 20, 2017 08:01 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #519 for the weeks of September 5 – 18, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Alan Diggs (Le Schyken, El Chicken)
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution 3.0 License BY SA Creative Commons License

on September 20, 2017 04:11 AM

September 18, 2017


I had the distinct honor to deliver the closing keynote of the UbuCon Europe conference in Paris a few weeks ago.  First off -- what a beautiful conference and venue!  Kudos to the organizers who really put together a truly remarkable event.  And many thanks to the gentleman (Elias?) who brought me a bottle of his family's favorite champagne, as a gift on Day 2 :-)  I should give more talks in France!

In my keynote, I presented the results of the Ubuntu 18.04 LTS Default Desktops Applications Survey, which was discussed at length on HackerNews, Reddit, and Slashdot.  With the help of the Ubuntu Desktop team (led by Will Cooke), we processed over 15,000 survey responses and in this presentation, I discussed some of the insights of the data.

The team is now hard at work evaluating many of the suggested applications, for those of you that aren't into the all-Emacs spin of Ubuntu ;-)

Moreover, we’re also investigating a potential approach to make the Ubuntu Desktop experience perhaps a bit like those Choose-Your-Own-Adventure books we loved when we were kids, where users have the opportunity to select each of their preferred applications (or stick with the distro default) for a handful of categories, during installation.

Marius Quabeck recorded the session and published the audio and video of the presentation here on YouTube:


You can download the slides here, or peruse them below:


Cheers,
Dustin
on September 18, 2017 10:34 PM

MAAS 2.3.0 Alpha 3 release!

Andres Rodriguez

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – Each custom test can be defined with YAML that will provide information about the test. This information includes the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI. (A rough sketch of such a definition is included at the end of this section.)

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves how hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.

Please note that individual results for each of the components is currently only available over the API. Upcoming beta release will include various UI improvements that will allow the user to better surface and interface with these new features.
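
To make the first bullet above more concrete, here is a rough sketch of the kind of embedded YAML definition being described; the field names (name, title, description, script_type, packages, timeout) are my assumptions, so check the MAAS 2.3 documentation before writing a real script:

name: quick-disk-test
title: Quick disk smoke test
description: Runs a short fio job against the selected disk.
script_type: testing
packages:
  apt:
    - fio
timeout: 00:10:00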

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, this requires the machine to have access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies the users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading their MAAS on a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

on September 18, 2017 02:24 PM

Please note that this post, like all of those on my blog, represents only my views, and not those of my employer. Nothing in here implies official hiring policy or requirements.

I’m not going to pretend that this article is unique or has magic bullets to get you into the offensive security space. I also won’t pretend to speak for others in that space or in other areas of information security. It’s a big field, and it turns out that a lot of us have opinions about it. Mubix maintains a list of posts like this so you can see everyone’s opinions. I highly recommend the post “So You Want to Work in Security” by Parisa Tabriz for a view that’s not specific to offensive security. (Though there’s a lot of cross-over.)

My personal area of interest – some would even say expertise – is offensive application security, which includes activities like black box application testing, reverse engineering (but not, generally, malware reversing), penetration testing, and red teaming. I also do whitebox code review and various other things, but mostly I attack things using the same tools and techniques that an illicit attacker would. Of course, I do this in the interest of securing those systems and learning from the experience to help engineer stronger and more robust systems.

I do a lot of work with recruiting and outreach in our company, so I’ve had the chance to talk to many people about what I think makes a good offensive security engineer. After a few dozen times and much reflection, I decided to write out my thoughts on getting started. Don’t believe this is all you need, but it should help you get started.

A Strong Sense of Curiosity and a Desire to Learn

This isn’t a field or a speciality that you get into after a few courses and can stop there. To be successful, you’ll have to constantly keep learning. To keep learning like that, you have to want to keep learning. I spend a lot of my weekends and evenings playing with technology because I want to understand how it works (and consequently, how I can break it). There’s a lot of ways to learn things that are relevant to this field:

  • Reddit
  • Twitter (follow a bunch of industry people)
  • Blogs (perhaps even mine…)
  • Books (my favorites in the resources section)
  • Courses
  • Attend Conferences (Network! Ask people what they’re doing!)
  • Watch Conference Videos
  • Hands on Exercises

Everyone has a different learning style, so you'll have to find what works for you. I learn best by doing (hands-on) and somewhat by reading. Videos are just inspiration for me to look more into something. Twitter and Reddit are the starting grounds to find all the other resources.

I see an innate passion for this field in most of the successful people I know. Many of us would do this even if we weren’t paid (and do some of it in our spare time anyway). You don’t have to spend every waking moment working, but you do have to keep moving forward or get left behind.

Understanding the Underlying System

To identify, understand, and exploit security vulnerabilities, you have to understand the underlying system. I’ve seen “penetration testers” who don’t know that paths on Linux/Unix systems start with and use / as the path separator. Watching someone try to exploit a potential LFI with \etc\passwd is just painful. (Hint: it doesn’t work.)

If you’re attacking web applications, you should at least have some understanding of:

  • The HTTP Protocol
  • The Same Origin Policy
  • The programming language used
  • The operating system underneath

For non-web networked applications:

  • A basic understanding of TCP/IP (or UDP/IP, if applicable)
  • The OSI Model
  • Basic computer architecture (stack, heap, etc.)
  • Language used for implementation

You don't have to know everything about every layer, but each item you don't know is either something you'll potentially miss, or something that will cost you time. You'll learn more as you develop your skills, but there are some fundamentals that will help you get started:

  • Learn at least one interpreted and one compiled programming language.
    • Python and Ruby are good choices for interpreted languages, as most security tools are written in one of them, so you can modify and create your own tools when needed.
    • C is the classic language for demonstrating memory corruption vulnerabilities and doesn't hide much of the underlying system, making it a good choice for a compiled language.
  • Know basic use of both Linux and Windows. Basic use includes:
    • Network configuration
    • Command line basics
    • How services are run
  • Learn a bit about x86/x86-64 architecture.
    • What are pointers?
    • What is the stack and the heap?
    • What are registers?

A full CS degree isn't required (though it certainly wouldn't hurt), but if you don't understand how developers do their work, you'll have a much harder time looking for and exploiting vulnerabilities. Many of the best penetration testers and security researchers have had experience as network administrators, systems administrators, or developers – this experience is incredibly useful in understanding the underlying systems.
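
To make the pointer, stack and heap items in the list above slightly more concrete, here is a tiny, hedged illustration. It's written in Rust purely to keep all code on this page in a single language; the C suggested above works just as well for this:

// A plain local value lives on the stack, Box::new allocates its value on
// the heap, and a raw pointer is just an address that can be printed.
fn main() {
    let on_stack = 42u32;
    let on_heap = Box::new(42u32);
    let ptr: *const u32 = &on_stack;

    println!("stack value: {}", on_stack);
    println!("heap value: {}", *on_heap);
    println!("address of the stack value: {:p}", ptr);
}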

The CIA Triad

To understand security at all, you should understand the CIA triad. This has nothing to do with the American intelligence agency, but everything to do with 3 pillars of information security: Confidentiality, Integrity, and Availability.

Confidentiality refers to allowing only authorized access to data. For example, preventing access to someone else’s email falls into confidentiality. This idea has strong parallels to the notion of privacy. Encryption is often used (and misused) in the pursuit of confidentiality. Heartbleed is an example of a well-known bug affecting confidentiality.

Integrity refers to allowing only authorized changes to state. This can be the state of data (avoiding file tampering), the state of execution (avoiding remote code execution), or some combination. Most of the “exciting” vulnerabilities in information security impact integrity. GHOST is an example of a well-known bug affecting integrity.

Availability is, perhaps, the easiest concept to understand. It refers to the ability of a service to be accessed by legitimate users when they want to access it (and probably also at the speed they'd like).

These 3 concepts are the main areas of concern for security engineers.

Understanding Vulnerabilities

There are many ways to categorize vulnerabilities, so I won’t try to list them all, but find some and understand how they work. The OWASP Top 10 is a good start for web vulnerabilities. The Modern Binary Exploitation course from RPISEC is a good choice for understanding “Binary Exploitation”.

It's really valuable to distinguish a bug from a vulnerability. Most vulnerabilities are bugs, but most bugs are not vulnerabilities. Bugs are accidentally-introduced misbehavior in software. Vulnerabilities are ways to gain access to a higher (or different) privilege level in an unintended fashion. Generally, a bug must violate one of the 3 pillars of the CIA triad to be classified as a vulnerability. (Though this is often subjective, see [systemd bug].)

Doing Security

At some point, it stops being about what you know and starts being about what you can do. Knowing things is useful in being able to do, but merely reciting facts is not very useful in actual offensive security. Getting hands-on experience is critical, and this is one field where you need to be careful how to do it. Please remember that, however you choose to practice, you should stay legal and observe all applicable laws.

There’s a number of different options here that build relevant skills:

  • Formal classes with hands-on components
  • CTFs (please note that most CTF challenges bear little resemblance to actual security work)
  • Wargames (see CTFs, but some are closer)
  • Lab work
  • Bug bounties

Of these, lab work is the most relevant to me, but also the one requiring the most time investment to set up. Typically, a lab will involve setting up one or more practice machines with known-vulnerable software (though feel free to progress to finding unknown issues). I'll have a follow-up post with information on building an offensive security practice lab.

Bug bounties are a good option, but for a beginner they'll be very daunting, because much of the low-hanging fruit will be gone and there are no known vulnerabilities to practice on. Getting into bug bounties without any prior experience at all is likely to teach only frustration and anger.

Resources

Here are some suggested resources for getting started in Offensive Security. I'll try to keep them up to date as I receive suggestions from other members of the community.

Web Resources (Reading/Watching)

Books

Courses

Lab Resources

I’ll have a follow-up about building a lab soon, but there’s some things worth looking at here:

Conclusion

This isn't an especially easy field to get started in, but it's the challenge that keeps most of us in it. I know I need to constantly push the edge of my capabilities and of technology for it to stay satisfying. Good luck, and maybe you'll soon be the author of one of the great resources in our community.

If you have other tips/resources that you think should have been included here, drop me a line or reach me on Twitter.

on September 18, 2017 07:00 AM

September 17, 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file lists 60. The number of packages with open issues decreased slightly compared to last month, but we're not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.


on September 17, 2017 08:24 AM

September 14, 2017

In just 5 minutes you'll package your first snap application. It couldn't be easier!
Do you accept the challenge? ;) Go for it!

video tutorial


Based on the talk by Alan Pope and Martin Wimpress at the 2nd European Ubucon:

snap! snap! snap!
on September 14, 2017 05:17 PM

This week we’ve been adding LED lights to a home studio, we announce the winner of the Entroware Apollo competition, serve up some GUI love and go over your feedback.

It’s Season Ten Episode Twenty-Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Entroware Competition Winner!

Congratulations to Dave Hingley for creating The Ubuntu Guys comic which was scripted in 20 minutes.

All entries

In no particular order here are all the entries

Roger Light

Neil McPhail

Sorry Neil, it’s 2017 and we still can’t edit tweets!

Andy Partington

Joe Ressington

Paul Gault

Robert Rijkhoff

Gentleman Punter

Ivan Pejić

Mattias Wernér

Masoud Abkenar

Johan Seyfferdt

Ovidiu Serban

Ryan Warnock

Dave Hingley

Ian Phillips

Brain Walton

Martin Tallosy

Lucy Walton

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 14, 2017 03:30 PM

September 13, 2017

September 6, Wednesday

At last! A new European Ubucon is just around the corner. After the first one in Germany, it's Paris's turn. I arrived a couple of days early, as flights were cheaper on these dates so packed with tourists.

Marius Quabeck, Miguel Menéndez and a couple of Scandinavian guys (who had built a bowling alley with Arduinos) got together on the spur of the moment for dinner in front of the museum where the event would be held.

so tiny!

After pizza, beer and plenty of conversation, everyone headed off to their own lodgings, already looking forward to the prelude to the event.


September 7, Thursday

First thing in the morning, Miguel and I went over to the City of Science and Industry, the museum where the Ubucon takes place, to lend the organisers a hand and check that everything was going to work as it should for our talks.
Many of the organisers were there (though not all of them, since it was a working day). I helped set up tables and unpack and assemble the computers. Almost without noticing, it was already lunchtime, although, given how early they eat... it was more like time for an aperitif :P

Ubuntu everywhere :))

We all headed to a pizzeria, at which point the great Diogo (from the Portuguese team), Alan Pope and Martin Wimpress (the latter two well known for their podcast and Ubuntu MATE) joined us. Lunch was pleasant and even stretched on quite a while; once the after-lunch chat was over, we went back to the museum to finish the preparations for the following day.

In the afternoon, the French team organised a boat trip along the river to the Eiffel Tower. As I already know the French capital from previous visits, I stayed at the venue until it was time for drinks, which were at a bar far from the event, though a very peculiar one. The bar was on the bank of the Seine, a very old place made up of stone vaults. It was so big that there was even a concert in one of the corners without disturbing the rest of the bar. There were also spontaneous theatre performances. Very original and unique.

photo

For dinner, we went out looking for a fast food place, but ended up in a pizzeria on the Champs-Élysées, very expensive, but that's what you get in that area. I really enjoyed the dinner, as I sat across from Alan and Martin and we talked a lot about the distribution.

As midnight approached and fearing the metro would close, Miguel and I headed back to the hotel.


Ubucon Day 1, September 8, Friday

After sleeping like a log, I got to the event nice and early. The City of Science and Industry museum is impressive in size from the outside. After passing the security check, you go down to level -1 and, opposite the library where the FabLab is located, you enter the Ubucon area.

museum

Right as you come in, on the left-hand side there is a play area for kids, ideal so that parents who want to attend can drop off their children while they follow the talks.
On the right there is a counter to welcome visitors, with plenty of merchandising on sale, and it has to be said that that money goes directly to the organisation to help with future events.
A few metres further in, on the right there are computers running Ubuntu so that any visitor can try it out on the spot, and facing those computers, various organisations. This year: UBports, Open Food Facts, Mozilla and Slimbook.

UBports
Slimbook
Mozilla
Kids' area

Past this area is the first conference room, the smallest one, and facing it a workshop room with about 20 computers. If you cross that workshop room, you reach the install party area, through which dozens of people pass with their laptops to install Ubuntu.
At the far end of the venue is the last of the rooms, the biggest one.

Olive and Rudy opened the Ubucon, explaining why they were organising it, and gave way to the first of the talks, Alan Pope's on snapcraft.

Rudy & Olive
micro tutorial
A huge audience for Alan's talk

The rest of the day runs with simultaneous talks, which is both good and bad. Good, because if one doesn't appeal to you, you go to another; and bad, because if you like both, you miss one ;)
My highlights of the day were Alan Pope's talks, the presentation of the UBports project and Miguel's Ubuntu Touch programming course.

UBports talk
Michal presenting the Arduino bowling alley

I also did my bit with the talk "How to make your free software project succeed".
The French team prepared every last detail conscientiously, and they themselves made the food for organisers and speakers, so you didn't have to go out to eat and you stayed together with everyone.

At nightfall, the social event was in a bar I enjoyed a lot. The rest of the Portuguese delegation joined us, along with Tiago and Lucía, and since there was nothing to eat at the bar, we slipped out for dinner at a nearby Thai place.
Most of the Spaniards arrived that day, and after dinner drowsiness and tiredness set in, so they went off to their respective (and distant) hotels.

I decided to stay and had a blast like a kid, because the bar was a kind of cyber café with loads of consoles, especially for two-player games. It was great fun playing Mario Kart and the PS4 version of Street Fighter against Alan, Rudy, Martin, Lucía and Michal (how this game has evolved, given I had only ever played version 2 in my home town's arcades!).

Martin vs Alan. Round 1, fight! :P
Mario Kart: Lucia 1 - Costales 0
Afraid of missing the last metro and already a bit tired from the long day, I left the bar with Olive when he headed off.


Ubucon Day 2, September 9, Saturday

The day was opened by Slimbook presenting their laptops with Linux preinstalled, and closed by Martin Wimpress presenting the imminent Ubuntu MATE 17.10; and by the way, I have it on good authority that he won over a great many attendees who had been reluctant about MATE.

Slimbook's talk

Discovering Ubuntu MATE 17.10


On a personal level, Paco Molinero and I recorded a new episode of the Ubuntu y otras hierbas podcast. A very special episode, because with such influential and important people from the community within reach, we couldn't do less than interview Alan Pope, Martin Wimpress, Rudy and Miguel. In a few days we'll publish the podcast through the usual channels ;)

Costales | Paco Molinero | Alan Pope

I was also surprised by the big turnout for my talk on "Privacy on the Net". Even Ubuntu itself tweeted about it |o/

My talk

After the talks, the social event was at the same vaulted bar by the river. There, Rudy and Olive ambushed me and brought me a bagpipe, a Scottish one no less. So after years without playing my Asturian bagpipes, I was lucky enough to get a few notes out of it :P

let's play!
It continued with a concert that gave way to dinner and after-dinner conversation, which I really enjoyed, with Paul (from UBports), Santiago (a new friend, who came along thanks to us announcing the event on our podcast :O), the Slimbook guys and Miguel.
The French, great hosts that they are, came over without hesitation to the table full of Spaniards to socialise and give meaning to that word 'community' :)) Bravo to them!


Ubucon Day 3, September 10, Sunday

The last day of the 2nd European Ubucon.
I'd highlight the workshop on how to create a snap application, given by Alan and Martin. Rudy and Vincent gave talks about the Ubuntu community, especially the French one, going over events of theirs that attract thousands of people, such as the webCafe and the Ubuntu Party. Philip Clay also contributed at his second European Ubucon, explaining how to customise GNOME in the next Ubuntu release.

snap workshop
Rudy giving his talk

I loved a podcasters' session moderated by Alan, with Rudy, Martin, Tiago, Max and Marius Quabeck, which I'm really looking forward to listening to again as soon as it's available online.

Joint podcast

Before the talks wrapped up, Dustin Kirkland gave a keynote on what to expect in the future 18.04 LTS release, on the desktop, on the server and in IoT. Very interesting, and it seems Ubuntu has taken listening more to the community very seriously; it covered all the areas where Ubuntu is relevant.

What we'll see in the future LTS

Lots of feedback from the community

Olive and Rudy closed the event by thanking all the participants for coming.
But the event ending doesn't mean we all head back to the hotel ;) We closed out the night with dinner at a nearby pizzeria, and the after-dinner conversation was one of a kind, with Alan stirring things up left and right. Mind you, he got teased back too; for example, Philip asked him: "Is the snapd package available as a flatpak?" xD

dinner

A cube, German engineering :P

The end

The next day, Miguel and I went to visit the Eiffel Tower. After that, Miguel left for the airport and I wandered the Parisian streets, the streets of possibly the most bohemian city in the world. A capital that hosted the most important Ubuntu event in Europe and beyond. Thousands of attendees passed through it, a great many computers were freed, knowledge was shared in dozens of talks... but if I have to pick one thing, I pick, without a doubt, the social events, all the new people I met or saw again, and the good times I shared with them.

Paris


To finish, I want to congratulate the whole French team. They did a perfect, impressive job, with all the passion in the world. Thank you and see you next time, friends!

LOL :))) Rudy, you're the greatest!

on September 13, 2017 05:58 PM

Canonical and Microsoft have teamed up to deliver a truly special experience -- running Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Servers!

We have published a fantastic tutorial at https://ubu.one/UhyperV, with screenshots and easy-to-follow instructions.  You should be up and running in minutes!

Follow that tutorial, and you'll be able to launch Ubuntu containers with Hyper-V isolation by running the following directly from a Windows Powershell:
  • docker run -it ubuntu bash
Cheers!
Dustin
on September 13, 2017 04:00 PM

September 12, 2017

The OpenStack Charms team is pleased to announce that the 17.08 release of the OpenStack Charms is now available from jujucharms.com!

In addition to 204 bug fixes across the charms and support for OpenStack Pike, this release includes a new charm for Gnocchi, support for Neutron internal DNS, Percona Cluster performance tuning and much more.

For full details of all the new goodness in this release please refer to the release notes.

Thanks go to the following people who contributed to this release:

Nobuto Murata
Mario Splivalo
Ante Karamatić
zhangbailin
Shane Peters
Billy Olsen
Tytus Kurek
Frode Nordahl
Felipe Reyes
David Ames
Jorge Niedbalski
Daniel Axtens
Edward Hope-Morley
Chris MacNaughton
Xav Paice
James Page
Jason Hobbs
Alex Kavanagh
Corey Bryant
Ryan Beisner
Graham Burgess
Andrew McLeod
Aymen  Frikha
Hua Zhang
Alvaro Uría
Peter Sabaini

EOM
on September 12, 2017 09:59 PM

KGraphViewer 2.4.0

Jonathan Riddell

KGraphViewer 2.4.0 has been released.

KGraphViewer is a visualiser for Graphviz’s DOT format of graphs.
https://www.kde.org/applications/graphics/kgraphviewer

This ports KGraphViewer to use KDE Frameworks 5 and Qt 5.

It can be used by massif-visualizer to add graphing features.

Download from:
https://download.kde.org/stable/kgraphviewer/2.4.0/

sha256:
88c2fd6514e49404cfd76cdac8ae910511979768477f77095d2f53dca0f231b4 kgraphviewer-2.4.0.tar.xz

Signed with my PGP key
2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
Jonathan Riddell <jr@jriddell.org>
kgraphviewer-2.4.0.tar.xz.sig

on September 12, 2017 03:13 PM

September 11, 2017

Cloud-init is the subject for the most recent episode of Podcast.__init__.
Go and have a listen to Episode 126.

I really enjoyed talking to Tobias about cloud-init and some of the difficulties of the project and goals for the future.

Enjoy!
on September 11, 2017 01:42 PM

September 10, 2017

The Niamh prime

Stuart Langridge

A bit of maths-y fiddling around on a Sunday afternoon.

Fascinating video on the Trinity Hall prime at Numberphile:

Apparently, Professor James McKee found a prime number which, when written out as ASCII art, looks like the crest of Trinity Hall college. Jack Hodkinson at Cambridge then searched for and found a prime which looks like a picture of Corpus Christi college (via Futility Closet). That seems like a cool idea. So, with a bit of help from aalib in JavaScript and the Miller-Rabin primality test, plus a bit of scaling images up and down in Gimp, I found this 2,850-digit prime:

777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,577,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­752,385,356,­867,777,777,­777,777,777,­777,777,777,­777,777,777,­775,352,235,­666,688,668,­667,777,777,­777,777,777,­777,777,777,­777,776,765,­555,556,666,­856,868,667,­777,777,777,­777,777,777,­777,777,777,­222,335,666,­666,866,686,­666,665,777,­777,777,777,­777,777,777,­777,355,336,­358,866,556,­666,655,665,­655,777,777,­777,777,777,­777,777,733,­552,236,666,­666,655,665,­665,666,555,­777,777,777,­777,777,777,­777,276,265,­666,666,656,­655,555,555,­555,533,777,­777,777,777,­777,777,253,­252,566,666,­665,555,556,­555,555,565,­557,777,777,­777,777,777,­725,222,236,­666,565,555,­555,556,555,­535,355,237,­777,777,777,­777,772,266,­725,366,535,­555,355,555,­555,553,553,­533,577,777,­777,777,777,­272,637,356,­655,555,555,­353,535,556,­655,355,332,­277,777,777,­777,772,235,­775,355,665,­553,355,353,­535,533,555,­555,223,777,­777,777,777,­322,222,255,­555,556,353,­555,355,336,­355,653,533,­537,777,777,­777,772,222,­225,555,565,­555,355,335,­355,356,555,­655,353,237,­777,777,777,­772,272,355,­553,553,355,­535,353,365,­355,355,553,­523,577,777,­777,777,332,­333,566,333,­553,555,533,­355,355,555,­555,353,555,­677,777,777,­777,332,355,­555,555,555,­555,555,553,­555,535,555,­666,556,777,­777,777,775,­533,355,535,­355,553,565,­535,353,655,­655,555,565,­557,777,777,­777,756,533,­555,533,335,­353,566,655,­353,566,535,­656,655,577,­777,777,777,­565,353,535,­535,553,335,­566,666,555,­666,568,566,­665,777,777,­777,775,633,­535,555,555,­555,535,565,­556,555,666,­566,666,677,­777,777,777,­758,333,333,­355,555,656,­556,565,866,­666,866,658,­667,777,777,­777,777,582,­233,333,333,­355,565,555,­566,666,666,­868,666,677,­777,777,777,­775,822,333,­333,333,535,­555,556,666,­666,668,688,­586,777,777,­777,777,736,­355,333,333,­355,556,555,­636,666,686,­688,888,557,­777,777,777,­777,565,555,­555,333,555,­566,666,565,­686,666,886,­886,777,777,­777,777,776,­656,888,853,­335,556,686,­556,666,666,­868,886,887,­777,777,777,­777,775,356,­368,532,355,­555,688,666,­866,666,668,­668,877,777,­777,777,777,­732,233,553,­223,323,335,­533,556,666,­686,686,686,­777,777,777,­777,777,323,­333,332,222,­233,332,233,­568,665,666,­656,337,777,­777,777,777,­773,233,333,­322,222,223,­222,223,566,­556,655,533,­777,777,777,­777,777,722,­333,232,222,­222,233,222,­225,665,555,­632,277,777,­777,777,777,­777,323,333,­322,222,223,­222,222,236,­553,553,222,­777,777,777,­777,777,777,­233,232,222,­223,232,222,­223,355,533,­552,277,777,­777,777,777,­777,772,333,­333,222,233,­322,222,223,­353,233,355,­777,777,777,­777,777,777,­723,333,355,­665,533,233,­333,333,322,­335,677,777,­777,777,777,­777,777,735,­533,333,222,­233,333,333,­332,223,356,­777,777,777,­777,777,777,­777,555,333,­232,222,333,­333,333,222,­233,677,777,­777,777,777,­777,777,772,­533,332,222,­222,333,333,­322,222,255,­777,777,777,­777,777,777,­777,775,356,­566,665,553,­333,333,222,­223,557,777,­777,777,777,­777,777,777,­733,335,333,­332,223,333,­332,222,233,­577,777,777,­777,777,777,­777,777,233,­335,332,222,­223,332,322,­222,233,777,­777,777,777,­777,777,777,­777,333,232,­222,222,223,­332,222,222,­237,777,777,­777,777,777,­777,777,773,­222,222,222,­222,223,222,­322,222,377,­777,777,777,­777,777,777,­777,777,222,­222,222,222,­323,332,222,­223,777,777,­777,777,777,­777,777,777,­773,232,222,­222,333,353,­222,222,227,­777,777,777,­777,777,777,­777,77
7,733,­332,222,223,­355,322,222,­222,277,777,­777,777,777,­777,777,777,­777,755,333,­333,555,533,­222,222,222,­277,777,777,­777,777,777,­777,777,777,­777,775,555,­555,322,222,­222,222,777,­777,777,777,­777,777,777,­777,777,777,­755,555,533,­222,222,222,­227,777,777,­777,777,777,­777,777,777,­777,777,755,­553,332,222,­222,222,227,­777,777,777,­777,777,777,­777,777,777,­777,555,533,­322,222,222,­222,227,777,­777,777,777,­777,777,777,­777,777,773,­553,332,222,­222,222,222,­277,777,777,­777,777,777,­777,777,777,­777,735,533,­322,222,222,­222,222,277,­779,769

although it looks rather better when properly formatted.

77777777777777777777777777777777777777777777777777
77777777777777777777777777777777777777777777777777
77777777777777777777777577777777777777777777777777
77777777777777777777775238535686777777777777777777
77777777777777777753522356666886686677777777777777
77777777777777776765555556666856868667777777777777
77777777777777722233566666686668666666577777777777
77777777777773553363588665566666556656557777777777
77777777777733552236666666655665665666555777777777
77777777777727626566666665665555555555553377777777
77777777772532525666666655555565555555655577777777
77777777725222236666565555555556555535355237777777
77777777226672536653555535555555555355353357777777
77777772726373566555555553535355566553553322777777
77777772235775355665553355353535533555555223777777
77777732222225555555635355535533635565353353777777
77777722222255555655553553353553565556553532377777
77777772272355553553355535353365355355553523577777
77777733233356633355355553335535555555535355567777
77777773323555555555555555555535555355556665567777
77777775533355535355553565535353655655555565557777
77777775653355553333535356665535356653565665557777
77777775653535355355533355666665556665685666657777
77777775633535555555555535565556555666566666677777
77777775833333335555565655656586666686665866777777
77777775822333333333555655555666666668686666777777
77777775822333333333535555556666666668688586777777
77777773635533333335555655563666668668888855777777
77777775655555553335555666665656866668868867777777
77777776656888853335556686556666666868886887777777
77777777535636853235555568866686666666866887777777
77777777322335532233233355335566666866866867777777
77777777323333332222233332233568665666656337777777
77777777323333332222222322222356655665553377777777
77777777223332322222222332222256655556322777777777
77777777323333322222223222222236553553222777777777
77777777723323222222323222222335553355227777777777
77777777723333332222333222222233532333557777777777
77777777723333355665533233333333322335677777777777
77777777773553333322223333333333222335677777777777
77777777775553332322223333333332222336777777777777
77777777772533332222222333333322222255777777777777
77777777777535656666555333333322222355777777777777
77777777777333353333322233333322222335777777777777
77777777777233335332222223332322222233777777777777
77777777777733323222222222333222222223777777777777
77777777777732222222222222232223222223777777777777
77777777777777222222222222323332222223777777777777
77777777777777323222222233335322222222777777777777
77777777777777333322222233553222222222777777777777
77777777777777755333333555533222222222277777777777
77777777777777777777555555532222222222277777777777
77777777777777777777555555332222222222277777777777
77777777777777777777755553332222222222227777777777
77777777777777777777755553332222222222222777777777
77777777777777777777735533322222222222222777777777
77777777777777777777735533322222222222222277779769

I think I’ll call it the Niamh Prime.
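
For the curious, here is a minimal sketch of the Miller-Rabin test mentioned above, written in Rust and limited to 64-bit integers; the actual 2,850-digit search of course needs an arbitrary-precision integer library on top of the same idea, and none of this is the code the searches above actually used.

// Miller-Rabin primality test for u64. This particular witness set is known
// to give a deterministic answer for anything that fits in 64 bits.
fn mod_pow(base: u64, mut exp: u64, modulus: u64) -> u64 {
    let m = modulus as u128;
    let mut b = (base % modulus) as u128;
    let mut result: u128 = 1;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * b % m;
        }
        b = b * b % m;
        exp >>= 1;
    }
    result as u64
}

fn is_prime(n: u64) -> bool {
    const WITNESSES: [u64; 12] = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37];
    if n < 2 {
        return false;
    }
    for &p in WITNESSES.iter() {
        if n == p {
            return true;
        }
        if n % p == 0 {
            return false;
        }
    }
    // Write n - 1 as d * 2^s with d odd.
    let mut d = n - 1;
    let mut s = 0;
    while d % 2 == 0 {
        d /= 2;
        s += 1;
    }
    'witness: for &a in WITNESSES.iter() {
        let mut x = mod_pow(a, d, n);
        if x == 1 || x == n - 1 {
            continue;
        }
        for _ in 0..s - 1 {
            x = mod_pow(x, 2, n);
            if x == n - 1 {
                continue 'witness;
            }
        }
        return false;
    }
    true
}

fn main() {
    assert!(is_prime(1_000_000_007));   // a well-known prime
    assert!(!is_prime(1_000_000_001));  // composite: 1001 * 999001
}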

on September 10, 2017 07:32 PM

September 08, 2017

I’ve been thinking about the usability of command-line terminals a lot recently.

Command-line interfaces remain mystifying to many people. Usability hobbyists seem as inclined to ask why the terminal exists, as how to optimise it. I’ve also had it suggested to me that the discipline of User Experience (UX) has little to offer the Command-Line Interface (CLI), because the habits of terminal users are too inherent or instinctive to be defined and optimised by usability experts.

As an experienced terminal user with a keen interest in usability, I disagree that usability has little to offer the CLI experience. I believe that the experience can be improved through the application of usability principles just as much as for more graphical domains.

Steps to learn a new CLI tool

To help demystify the command-line experience, I’m going to lay out some of the patterns of thinking and behaviour that define my use of the CLI.

New CLI tools I’ve learned recently include snap, kubectl and nghttp2, and I’ve also dabbled in writing command-line tools myself.

Below I’ll map out an example of the steps I might go through when discovering a new command-line tool, as a basis for exploring how these tools could be optimised for CLI users.

  1. Install the tool
    • First, I might try apt install {tool} (or brew install {tool} on a mac)
    • If that fails, I’ll probably search the internet for “Install {tool}” and hope to find the official documentation
  2. Check it is installed, and if tab-complete works
    • Type the first few characters of the command name (sna for snap) followed by <tab> <tab>, to see if the command name auto-completes, signifying that the system is aware of its existence
    • Hit space, and then <tab> <tab> again, to see if it shows me a list of available sub-commands, indicating that tab completion is set up correctly for the tool
  3. Try my first command
    • I'm probably following some documentation at this point, which will be telling me the first command to run (e.g. snap install {something}), so I'll try that out and expect prompt, succinct feedback to show me that it's working
    • For basic tools, this may complete my initial interaction with the tool. For more complex tools like kubectl or git I may continue playing with it
  4. Try to do something more complex
    • Now I’m likely no longer following a tutorial, instead I’m experimenting on my own, trying to discover more about the tool
    • If what I want to do seems complex, I’ll straight away search the internet for how to do it
    • If it seems more simple, I’ll start looking for a list of subcommands to achieve my goal
    • I start with {tool} <tab> <tab> to see if it gives me a list of subcommands, in case it will be obvious what to do next from that list
    • If that fails I’ll try, in order, {tool} <enter>, {tool} -h, {tool} --help, {tool} help or {tool} /?
    • If none of those work then I’ll try man {tool}, looking for a Unix manual entry
    • If that fails then I’ll fall back to searching the internet again

UX recommendations

Considering my own experience of CLI tools, I am reasonably confident the following recommendations make good general practice guidelines:

  • Always implement a --help option on the main command and all subcommands, and if appropriate print out some help when no options are provided ({tool} <enter>)
  • Provide both short- (e.g. -h) and long- (e.g. --help) form options, and make them guessable
  • Carefully consider the naming of all subcommands and options, use familiar words where possible (e.g. help, clean, create)
  • Be consistent with your naming – have a clear philosophy behind your use of subcommands vs options, verbs vs nouns etc.
  • Provide helpful, readable output at all times – especially when there’s an error (npm I’m looking at you)
  • Use long-form options in documentation, to make commands more self-explanatory
  • Make the tool easy to install with common software management systems (snap, apt, Homebrew, or sometimes NPM or pip)
  • Provide tab-completion. If it can’t be installed with the tool, make it easy to install and document how to set it up in your installation guide
  • Command outputs should use the appropriate output streams (STDOUT and STDERR) and should be as user-friendly and succinct as possible, and ideally make use of terminal colours
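
As a rough illustration of a few of these recommendations (guessable short and long options, named subcommands, --help for free, and separate output streams), here is a minimal sketch in Rust using the clap argument-parsing crate. The tool name mytool and the create subcommand are purely made up:

extern crate clap;

use clap::{App, Arg, SubCommand};

fn main() {
    // clap generates -h/--help and -V/--version automatically.
    let matches = App::new("mytool")
        .version("0.1.0")
        .about("Example tool with guessable, self-documenting options")
        .arg(
            Arg::with_name("verbose")
                .short("v")
                .long("verbose")
                .help("Print more detailed output"),
        )
        .subcommand(
            SubCommand::with_name("create")
                .about("Create a new thing")
                .arg(Arg::with_name("name").required(true).help("Name of the thing")),
        )
        .get_matches();

    match matches.subcommand() {
        ("create", Some(sub)) => {
            // Normal output goes to STDOUT...
            println!("creating {}", sub.value_of("name").unwrap());
        }
        _ => {
            // ...while errors and hints go to STDERR.
            eprintln!("no subcommand given, try `mytool --help`");
        }
    }
}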

Some of these recommendations are easier to implement than others. Ideally every tool should consider its subcommands and options carefully and implement --help. But writing auto-complete scripts is a significant undertaking.

Similarly, packaging your tool as a snap is significantly easier than, for example, adding software to the official Ubuntu software sources.

Although I believe all of the above to be good general advice, I would very much welcome research to highlight the relative importance of addressing each concern.

Outstanding questions

There are a number of further questions for which the answers don’t seem obvious to me, but I’d love to somehow find out the answers:

  • Once users have learned the short-form options (e.g. -h) do they ever use the long-form (e.g. --help)?
  • Do users prefer subcommands (mytool create {something}) or options (mytool --create {something})?
  • For multi-level commands, do users prefer {tool} {object} {verb} (e.g. git remote add {remote_name}), or {tool} {verb} {object} (e.g. kubectl get pod {pod_name}), or perhaps {tool} {verb}-{object} (e.g. juju remove-application {app_name})?
  • What patterns exist for formatting command output? What’s the optimal length for users to read, and what types of formatting do users find easiest to understand?

If you know of either authoritative recommendations or existing research on these topics, please let me know in the comments below.

I’ll try to write a more in-depth follow-up to this post when I’ve explored a bit further on some of these topics.

on September 08, 2017 10:43 AM

September 07, 2017

17.10 Beta 1 Release

Ubuntu Studio

Ubuntu Studio 17.10 Artful Aardvark Beta 1 is released! It’s that time of the release cycle again. The first beta of the upcoming release of Ubuntu Studio 17.10 is here and ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/artful/beta-1/. More information can be found in the Beta 1 Release Notes. Reporting Bugs If […]
on September 07, 2017 12:32 PM

September 06, 2017

Webteam development summary

Canonical Design Team

Iteration 6

covering the 14th to the 25th of August

This iteration saw a lot of work on tutorials.ubuntu.com and on the migration of design.ubuntu.com from WordPress to a fresh new Jekyll site project. Continued research and planning into the new snapcraft.io site, with some beginnings of the development framework.

For Vanilla Framework, a lot of emphasis went into polishing the existing components and porting the old theme concept patterns into the code base.

Websites issues: 66 closed, 33 opened (551 in total)

Some highlights include:
– Fixing content of card touching card edge in tutorials – https://github.com/canonical-websites/tutorials.ubuntu.com/issues/312
– Migrate canonical.com to Vanilla: Polish and custom patterns – https://github.com/canonical-websites/www.canonical.com/issues/172
– Prepare for deploy of design.ubuntu.com – https://github.com/canonical-websites/design.ubuntu.com/issues/54
– Redirect from https://www.ubuntu.com/usn/ to https://usn.ubuntu.com/usn were broken – https://github.com/canonical-websites/www.ubuntu.com/issues/2128
– design.ubuntu.com/web-style-guide build page and then hide pages – https://github.com/canonical-websites/design.ubuntu.com/issues/66
– Snapcraft prototype: Snap page – https://github.com/canonical-websites/snapcraft.io/issues/346
– Create Flask skeleton application – https://github.com/canonical-websites/snapcraft-flask/issues/2

Vanilla Framework issues: 24 closed, 16 opened (43 in total)

Some highlights include:
– Combine the entire suite of brochure theme patterns to Vanilla’s code base – https://github.com/vanilla-framework/vanilla-framework/issues/1177
– Many improvements to the documentation theme – https://github.com/vanilla-framework/vanilla-docs-theme/issues/45
– External link icon seems stretched – https://github.com/vanilla-framework/vanilla-framework/issues/1058
– .p-heading–icon pattern remove text color – https://github.com/vanilla-framework/vanilla-framework/issues/1272
– Remove margin rules on card content – https://github.com/vanilla-framework/vanilla-framework/issues/1277

All of these projects are open source. So please file issues if you find any bugs or even better propose a pull request. See you in two weeks for the next update from the web team here at Canonical.

on September 06, 2017 08:15 PM

Over the last few days I was experimenting a bit with implementing a GObject C API in Rust. The results can be found in this repository, and this is something like an overview of the work, a code walkthrough and a status report. Note that this is quite long; a little bit further down you can find a table of contents so you can jump to the area you're interested in, or read it chapter by chapter.

GObject is a C library that allows you to write object-oriented, cross-platform APIs in C (which has no built-in support for that), and provides a very expressive runtime type system with many features known from languages like Java, C# or C++. It is also used by various C libraries, most notably the cross-platform GTK UI toolkit and the GStreamer multimedia framework. GObject also comes with strong conventions about how an API is supposed to look and behave, which makes it relatively easy to learn new GObject-based APIs, compared to generic C libraries that could do anything unexpected.

I’m not going to give a full overview about how GObject works internally and how it is used. If you’re not familiar with that it might be useful to first read the documentation and especially the tutorial. Also some C & Rust (especially unsafe Rust and FFI) knowledge would be good to have for what follows.

If you look at the code, you will notice that there is a lot of unsafe code, boilerplate and glue code. And especially code duplication. I don't expect anyone to manually write all this code, and the final goal of all this is to have Rust macros to make life easier. Simple Rust macros that make it as easy as in C are almost trivial to write, but what we really want here is to be able to write it all in safe Rust only, in code that looks a bit like C# or Java. There is a prototype for that already written by Niko Matsakis, and a blog post with further details about it. The goal for this code is to work as a manual example that can be integrated one step at a time into the macro based solution. Code written with that macro should in the end look similar to the following

gobject_gen! {
    class Counter {
        struct CounterPrivate {
            f: Cell<u32>
        }

        fn add(&self, x: u32) -> u32 {
            let private = self.private();
            let v = private.f.get() + x;
            private.f.set(v);
            v
        }

        fn get(&self) -> u32 {
            self.private().f.get()
        }
    }
}

and be usable like

let c = Counter::new();
c.add(2);
c.add(20);

The code in my repository is already integrated well into GTK-rs, but the macro generated code should also be integrated well into GTK-rs and work the same as other GTK-rs code from Rust. In addition the generated code should of course make use of all the type FFI conversion infrastructure that already exists in there and was explained by Federico in his blog post (part 1, part 2).
In the end, I would like to see such a macro solution integrated directly into the GLib bindings.

Table of Contents

  1. Why?
  2. Simple (boxed) types
  3. Object types
    1. Inheritance
    2. Virtual Methods
    3. Properties
    4. Signals
  4. Interfaces
  5. Usage from C
  6. Usage from Rust
  7. Usage from Python, JavaScript and Others
  8. What next?

Why?

Now one might ask: why? GObject is yet another C library, and Rust can export a plain C API without any other dependencies just fine. While that is true, C is not very expressive at all and there are no conventions about how C APIs should look and behave, so everybody does their own thing. With GObject you get all kinds of object-oriented programming features and strong conventions about API design. And you actually get a couple of features (inheritance, properties/signals, a full runtime type system) that Rust does not have. And as bonus points, you get bindings for various other languages (Python, JavaScript, C++, C#, …) for free. More on the last point later.

Another reason why you might want to do this, is to be able to interact with existing C libraries that use GObject. For example if you want to create a subclass of some GTK widget to give it your own custom behaviour or modify its appearance, or even writing a completely new GTK widget that should be placed together with other widgets in your UI, or for implementing a new GStreamer element that implements some fancy filter or codec or … that you want to use.

Simple (boxed) types

Let's start with the simple and boring case, which already introduces various GObject concepts. Let's assume you already have some simple Rust type that you want to expose a C API for, and it should be GObject-style to get all the above advantages. For that, GObject has the concept of boxed types. These have to come with a “copy” and a “free” function, which can do an actual copy of the object or just implement reference counting, and GObject allows you to register these together with a string name for the type; it then gives back a type ID (GType) that allows referencing this type.

Boxed types can then be used automatically, together with any C API they provide, from C and any other language for which GObject support exists (i.e. basically all). It allows instances of these boxed types to be used in signals and properties (see further below), allows them to be stored in a GValue (a container type that can store an instance of any other type together with its type ID), etc.

So how does all this work? In my repository I'm implementing a boxed type around an Option<String>, once as a “copy” type RString, and once reference counted (SharedRString). Outside Rust, both are just passed as pointers and their implementation is private/opaque. As such, it is possible to use any kind of Rust struct or enum, and e.g. marking them as #[repr(C)] is not needed. It is also possible to use #[repr(C)] structs though, in which case the memory layout could be public and any struct fields could be available from C and other languages.

RString

The actual implementation of the type is in the imp.rs file, i.e. in the imp module. I’ll cover the other files in there at a later time, but mod.rs is providing a public Rust API around all this that integrates with GTK-rs.

The following is the whole implementation, in safe Rust:

#[derive(Clone)]
pub struct RString(Option<String>);

impl RString {
    fn new(s: Option<String>) -> RString {
        RString(s)
    }

    fn get(&self) -> Option<String> {
        self.0.clone()
    }

    fn set(&mut self, s: Option<String>) {
        self.0 = s;
    }
}

Type Registration

Once the macro-based solution is complete, this would be more or less all that is required to make this available to C via GObject, and to any other languages. But we're not there yet, and the goal here is to do it all manually. So first of all, we need to register this type with GObject, for which (by convention) a C function called ex_rstring_get_type() is defined; it registers the type on the first call to get the type ID, and on further calls just returns that type ID. If you're wondering what ex is: this is the “namespace” (C has no built-in support for namespaces) of the whole library, short for “example”. The get_type() function looks like this:

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_get_type() -> glib_ffi::GType {
    callback_guard!();

    static mut TYPE: glib_ffi::GType = gobject_ffi::G_TYPE_INVALID;
    static ONCE: Once = ONCE_INIT;

    ONCE.call_once(|| {
        let type_name = CString::new("ExRString").unwrap();

        TYPE = gobject_ffi::g_boxed_type_register_static(
            type_name.as_ptr(),
            Some(mem::transmute(ex_rstring_copy as *const c_void)),
            Some(mem::transmute(ex_rstring_free as *const c_void)),
        );

    });

    TYPE
}

This is all unsafe Rust and calling directly into the GObject C library. We use std::sync::Once for the one-time registration of the type, and store the result in a static mut called TYPE (super unsafe, but OK here as we only ever write to it once). For registration we call g_boxed_type_register_static() from GObject (provided to Rust via the gobject-sys crate) and provide the name (via std::ffi::CString for C interoperability) and the copy and free functions. Unfortunately we have to cast them to a generic pointer, and then transmute them to a different function pointer type as the arguments and return value pointers that GObject wants there are plain void * pointers but in our code we would at least like to use RString *. And that’s all that there is to the registration. We mark the whole function as extern “C” to use the C calling conventions, and use #[no_mangle] so that the function is exported with exactly that symbol name (otherwise Rust is doing symbol name mangling), and last we make sure that no panic unwinding happens from this Rust code back to the C code via the callback_guard!() macro from the glib crate.

Memory Management Functions

Now let’s take a look at the actual copy and free functions, and the actual constructor function called ex_rstring_new():

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_new(s: *const c_char) -> *mut RString {
    callback_guard!();

    let s = Box::new(RString::new(from_glib_none(s)));
    Box::into_raw(s)
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_copy(rstring: *const RString) -> *mut RString {
    callback_guard!();

    let rstring = &*rstring;
    let s = Box::new(rstring.clone());
    Box::into_raw(s)
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_free(rstring: *mut RString) {
    callback_guard!();

    let _ = Box::from_raw(rstring);
}

These are also unsafe Rust functions that work with raw pointers and C types, but fortunately not too much is happening here.

In the constructor function we get a C string (char *) passed as argument and convert this to a Rust string (actually an Option<String>, as this can be NULL) via from_glib_none() from the glib crate, and then pass that to the Rust constructor of our type. from_glib_none() means that we don't take ownership of the C string passed to us; the other variant would be from_glib_full(), in which case we would take ownership. We then pack up the result in a Rust Box to place the new RString in heap-allocated memory (otherwise it would be stack allocated), and use Box's into_raw() function to get a raw pointer to the memory and not have its Drop implementation called anymore. This is then returned to the caller.

Similarly, in the copy and free functions we just do some juggling with Boxes: copy takes a raw pointer to our RString, calls the derived clone() function to copy it all, and then packs it up in a new Box to return to the caller. The free function converts the raw pointer back to a Box, and then lets the Drop implementation of Box take care of freeing all memory related to it.

Actual Functionality

The two remaining functions are C wrappers for the get() and set() Rust functions:

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_get(rstring: *const RString) -> *mut c_char {
    callback_guard!();

    let rstring = &*rstring;
    rstring.get().to_glib_full()
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_set(rstring: *mut RString, s: *const c_char) {
    callback_guard!();

    let rstring = &mut *rstring;
    rstring.set(from_glib_none(s));
}

These only call the corresponding Rust functions. The set() function again uses glib's from_glib_none() to convert from a C string to a Rust string. The get() function uses ToGlibPtrFull::to_glib_full() from GLib to convert from a Rust string (an Option<String> to be accurate) to a C string, while passing ownership of the C string to the caller (which then also has to free it at a later time).

This was all quite verbose, which is why a macro based solution for all this would be very helpful.

Corresponding C Header

Now if this API would be used from C, the header file to do so would look something like this. Probably no surprises here.

#define EX_TYPE_RSTRING            (ex_rstring_get_type())

typedef struct _ExRString          ExRString;

GType       ex_rstring_get_type    (void);

ExRString * ex_rstring_new         (const gchar * s);
ExRString * ex_rstring_copy        (const ExRString * rstring);
void        ex_rstring_free        (ExRString * rstring);
gchar *     ex_rstring_get         (const ExRString * rstring);
void        ex_rstring_set         (ExRString *rstring, const gchar *s);

Ideally this would also be autogenerated from the Rust code in one way or another, maybe via rusty-cheddar or rusty-binder.

SharedRString

The shared, reference counted, RString works basically the same. The only differences are in how the pointers between C and Rust are converted. For this, let’s take a look at the constructor, copy (aka ref) and free (aka unref) functions again:

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_new(s: *const c_char) -> *mut SharedRString {
    callback_guard!();

    let s = SharedRString::new(from_glib_none(s));
    Arc::into_raw(s) as *mut _
}

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_ref(
    shared_rstring: *mut SharedRString,
) -> *mut SharedRString {
    callback_guard!();

    let shared_rstring = Arc::from_raw(shared_rstring);
    let s = shared_rstring.clone();

    // Forget it and keep it alive, we will still need it later
    let _ = Arc::into_raw(shared_rstring);

    Arc::into_raw(s) as *mut _
}

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_unref(shared_rstring: *mut SharedRString) {
    callback_guard!();

    let _ = Arc::from_raw(shared_rstring);
}

The only difference here is that instead of using a Box, std::sync::Arc is used, plus some differences in the copy (aka ref) function. Previously with the Box, we were just creating an immutable reference from the raw pointer and cloning it, but with the Arc we want to clone the Arc itself (i.e. have the same underlying object but increase the reference count). For this we use Arc::from_raw() to get back an Arc, and then clone the Arc. If we didn't do anything else, at the end of the function our original Arc would get its Drop implementation called and the reference count decreased, defeating the whole point of the function. To prevent that, we convert the original Arc to a raw pointer again and “leak” it. That is, we don't destroy the reference owned by the caller, which would cause double free problems later.

Apart from this, everything is really the same. And also the C header looks basically the same.
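
To see that into_raw()/from_raw() juggling in isolation, outside of all the GObject glue, a stand-alone sketch of the same “ref” pattern could look like this (the arc_ref helper is made up purely for illustration):

use std::sync::Arc;

// Recover the Arc from the raw pointer, clone it to increase the reference
// count, then leak the original again so the caller's reference stays valid.
unsafe fn arc_ref<T>(ptr: *const T) -> *const T {
    let original = Arc::from_raw(ptr);
    let cloned = Arc::clone(&original);
    let _ = Arc::into_raw(original);
    Arc::into_raw(cloned)
}

fn main() {
    let p = Arc::into_raw(Arc::new(42));
    let p2 = unsafe { arc_ref(p) };
    // Each raw pointer now owns one strong reference; convert them back so
    // this toy example does not leak.
    unsafe {
        drop(Arc::from_raw(p));
        drop(Arc::from_raw(p2));
    }
}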

Object types

Now let’s start with the more interesting part: actual subclasses of GObject with all the features you know from object-oriented languages. Everything up to here was only warm-up, even if useful by itself already to expose normal Rust types to C with a slightly more expressive API.

In GObject, subclasses of the GObject base class (think of Object in Java or C#, the most basic type from which everything inherits) all get the main following features from the base class: reference counting, inheritance, virtual methods, properties, signals. Similarly to boxed types, some functions and structs are registered at runtime with the GObject library to get back a type ID but it is slightly more involved. And our structs must be #[repr(C)] and be structured in a very specific way.

Struct Definitions

Every GObject subclass has two structs: 1) one instance struct that is used for the memory layout of every instance and could contain public fields, and 2) one class struct which is storing the class specific data and the instance struct contains a pointer to it. The class struct is more or less what in C++ the vtable would be, i.e. the place where virtual methods are stored, but in GObject it can also contain fields for example. We define a new type Foo that inherits from GObject.

#[repr(C)]
pub struct Foo {
    pub parent: gobject_ffi::GObject,
}

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
}

The first element of each struct must be the corresponding struct of the class we inherit from. This later allows casting pointers of our subclass to pointers of the base class, and re-using all API implemented for the base class. In our example here we don't define any public fields or virtual methods; the version in the repository has them, but we get to that later.

Now we will actually need to be able to store some state with our objects, but we want to have that state private. For that we define another struct, a plain Rust struct this time

struct FooPrivate {
    name: RefCell<Option<String>>,
    counter: RefCell<i32>,
}

This uses RefCell for each field, as in GObject modifications of objects are all done conceptually via interior mutability. For a thread-safe object these would have to be Mutex instead.
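
For illustration, a hypothetical thread-safe variant of the same private struct would simply swap the RefCells for Mutexes:

use std::sync::Mutex;

// Same private state as FooPrivate above, but with interior mutability that
// is safe to use from multiple threads.
struct FooPrivateThreadSafe {
    name: Mutex<Option<String>>,
    counter: Mutex<i32>,
}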

Type Registration

In the end we glue all this together and register it to the GObject type system via a get_type() function, similar to the one for boxed types

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_type() -> glib_ffi::GType {
    callback_guard!();

    static mut TYPE: glib_ffi::GType = gobject_ffi::G_TYPE_INVALID;
    static ONCE: Once = ONCE_INIT;

    ONCE.call_once(|| {
        let type_info = gobject_ffi::GTypeInfo {
            class_size: mem::size_of::<FooClass>() as u16,
            base_init: None,
            base_finalize: None,
            class_init: Some(FooClass::init),
            class_finalize: None,
            class_data: ptr::null(),
            instance_size: mem::size_of::<Foo>() as u16,
            n_preallocs: 0,
            instance_init: Some(Foo::init),
            value_table: ptr::null(),
        };

        let type_name = CString::new("ExFoo").unwrap();

        TYPE = gobject_ffi::g_type_register_static(
            gobject_ffi::g_object_get_type(),
            type_name.as_ptr(),
            &type_info,
            gobject_ffi::GTypeFlags::empty(),
        );
    });

    TYPE
}

The main difference here is that we call g_type_register_static(), which takes a struct as parameter that contains all the information about our new subclass. In that struct we provide sizes of the class and instance struct (GObject is allocating them for us), various uninteresting fields for now and two function pointers: 1) class_init for initializing the class struct as allocated by GObject (here we would also override virtual methods, define signals or properties for example) and 2) instance_init to do the same with the instance struct. Both structs are zero-initialized in the parts we defined, and the parent parts of both structs are initialized by the code for the parent class already.

Struct Initialization

These two functions look like the following for us (the versions in the repository already do more things)

impl Foo {
    unsafe extern "C" fn init(obj: *mut gobject_ffi::GTypeInstance, _klass: glib_ffi::gpointer) {
        callback_guard!();

        let private = gobject_ffi::g_type_instance_get_private(
            obj as *mut gobject_ffi::GTypeInstance,
            ex_foo_get_type(),
        ) as *mut Option<FooPrivate>;

        // Here we initialize the private data. By default it is all zero-initialized
        // but we don't really want to have any Drop impls run here so just overwrite the
        // data
        ptr::write(
            private,
            Some(FooPrivate {
                name: RefCell::new(None),
                counter: RefCell::new(0),
            }),
        );
    }
}

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        callback_guard!();

        // This is an Option<_> so that we can replace its value with None on finalize() to
        // release all memory it holds
        gobject_ffi::g_type_class_add_private(klass, mem::size_of::<Option<FooPrivate>>() as usize);
    }
}

During class initialization, we tell GObject about the size of our private struct but we actually wrap it into an Option. This allows us to later replace it simply with None to deallocate all memory related to it. During instance initialization this private struct is already allocated for us by GObject (and zero-initialized), so we simply get a raw pointer to it via g_type_instance_get_private() and write an initialized struct to that pointer. Raw pointers must be used here so that the Drop implementation of Option is not called for the old, zero-initialized memory when replacing the struct.

As you might’ve noticed, we currently never set the private struct to None to release the memory, effectively leaking memory, but we get to that later when talking about virtual methods.

Constructor

With what we have so far, it’s already possible to create new instances of our subclass, and for that we also define a constructor function now

#[no_mangle]
pub unsafe extern "C" fn ex_foo_new() -> *mut Foo {
    callback_guard!();

    let this = gobject_ffi::g_object_newv(
        ex_foo_get_type(),
        0,
        ptr::null_mut(),
    );

    this as *mut Foo
}

There is probably not much that has to be explained here: we only tell GObject to allocate a new instance of our specific type (by providing the type ID), which then causes the memory to be allocated and our initialization functions to be called. class_init is called only once, the very first time an instance of the type is created; instance_init is called for every instance.

Methods

All this would be rather boring at this point because there is no way to actually do something with our object, so various functions are defined to work with the private data. For example to get the value of the counter

impl Foo {
    fn get_counter(_this: &FooWrapper, private: &FooPrivate) -> i32 {
        *private.counter.borrow()
    }
}

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_counter(this: *mut Foo) -> i32 {
    callback_guard!();

    let private = (*this).get_priv();

    Foo::get_counter(&from_glib_borrow(this), private)
}

This gets the private struct from GObject (get_priv() is a helper function that does the same as we did in instance_init), and then calls a safe Rust function implemented on our struct to actually get the value. Notable here is that we don’t pass &self to the function, but something called FooWrapper. This is a GTK-rs style wrapper type that directly allows using any API implemented on parent classes and provides various other functionality. It is defined in mod.rs, but we will talk about that later.
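For reference, a plausible sketch of such a get_priv() helper, mirroring what instance_init does (the exact version in the repository may differ):

impl Foo {
    fn get_priv(&self) -> &FooPrivate {
        unsafe {
            let private = gobject_ffi::g_type_instance_get_private(
                self as *const _ as *mut gobject_ffi::GTypeInstance,
                ex_foo_get_type(),
            ) as *const Option<FooPrivate>;

            // The private data was initialized to Some(...) in instance_init
            (*private).as_ref().unwrap()
        }
    }
}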

Inheritance

GObject allows single-inheritance from a base class, similar to Java and C#. All behaviour of the base class is inherited, and API of the base class can be used on the subclass.

I briefly hinted above at how that works already: 1) the instance and class structs have the parent class’ structs as their first field, so casting to pointers of the parent class works just fine, and 2) GObject is told what the parent class is in the call to g_type_register_static(). We did that above already, as we inherited from GObject.

By inheriting from GObject we can, for example, call g_object_ref() to do reference counting, or use any of the other GObject API. It also allows the Rust wrapper type defined in mod.rs to provide the base class’ API to us without any casts, and to do memory management automatically. How that works is probably going to be explained in one of the following blog posts on Federico’s blog.

In the example repository, there is also another type defined which inherits from our type Foo, called Bar. It’s basically the same code again, except for the name and parent type.

#[repr(C)]
pub struct Bar {
    pub parent: foo::imp::Foo,
}

#[repr(C)]
pub struct BarClass {
    pub parent_class: foo::imp::FooClass,
}

#[no_mangle]
pub unsafe extern "C" fn ex_bar_get_type() -> glib_ffi::GType {
    [...]
        TYPE = gobject_ffi::g_type_register_static(
            foo::imp::ex_foo_get_type(),
            type_name.as_ptr(),
            &type_info,
            gobject_ffi::GTypeFlags::empty(),
        );
    [...]
}

Virtual Methods

Overriding Virtual Methods

Inheritance alone is already useful for reducing code duplication, but to make it really useful virtual methods are needed so that behaviour can be adjusted in subclasses. In GObject this works similarly to how it’s done in e.g. C++, just manually: you place function pointers to the virtual method implementations into the class struct and then call those. As every subclass has its own copy of the class struct (initialized with the values from the parent class), it can override these with whatever function it wants. And as it’s possible to get the actual class struct of the parent class, it is possible to chain up to the parent class’ implementation of a virtual function. Let’s look at the example of the GObject::finalize virtual method, which is called at the very end when the object is to be destroyed and which should free all memory. In there we will free our private data struct with the RefCells.

As a first step, we need to override the function pointer in the class struct in our class_init function and replace it with another function that implements the behaviour we want

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        {
            let gobject_klass = &mut *(klass as *mut gobject_ffi::GObjectClass);
            gobject_klass.finalize = Some(Foo::finalize);
        }
        [...]
    }
}

impl Foo {
    unsafe extern "C" fn finalize(obj: *mut gobject_ffi::GObject) {
        callback_guard!();

        // Free private data by replacing it with None
        let private = gobject_ffi::g_type_instance_get_private(
            obj as *mut gobject_ffi::GTypeInstance,
            ex_foo_get_type(),
        ) as *mut Option<FooPrivate>;
        let _ = (*private).take();

        (*PRIV.parent_class).finalize.map(|f| f(obj));
    }
}

This new function could call into a safe Rust implementation, as is done for other virtual methods (see a bit later), but for finalize we have to do manual memory management and that’s all unsafe Rust. The way we free the memory here is by take()ing the Some value out of the Option that contains our private struct and letting it be dropped. Afterwards we have to chain up to the parent class’ implementation of finalize, which is done by calling map() on the Option that contains the function pointer.

All function pointers in glib-sys and related crates are stored in Options, to be able to distinguish between a NULL function pointer and an actual pointer to a function.

Now for chaining up to the parent class’ finalize implementation, there’s a static, global variable containing a pointer to the parent class’ class struct, called PRIV. This is also initialized in the class_init function

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
};

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        PRIV.parent_class =
            gobject_ffi::g_type_class_peek_parent(klass) as *const gobject_ffi::GObjectClass;
    }
}

While this is a static mut global variable, it is fine here because it is only ever written once, from class_init, and is only ever read after class_init is done.

Defining New Virtual Methods

For defining new virtual methods, we would add a corresponding function pointer to the class struct and optionally initialize it to a default implementation in the class_init function, or otherwise keep it at NULL/None.

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
    pub increment: Option<unsafe extern "C" fn(*mut Foo, inc: i32) -> i32>,
}

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        {
            let foo_klass = &mut *(klass as *mut FooClass);
            foo_klass.increment = Some(Foo::increment_trampoline);
        }
    }
}

The trampoline function provided here is responsible for converting from the C types to the Rust types, and then calling a safe Rust implementation of the virtual method.

impl Foo {
    unsafe extern "C" fn increment_trampoline(this: *mut Foo, inc: i32) -> i32 {
        callback_guard!();

        let private = (*this).get_priv();

        Foo::increment(&from_glib_borrow(this), private, inc)
    }

    fn increment(this: &FooWrapper, private: &FooPrivate, inc: i32) -> i32 {
        let mut val = private.counter.borrow_mut();

        *val += inc;

        *val
    }
}

To make it possible to call these virtual methods from the outside, a C function has to be defined again similar to the ones for non-virtual methods. Instead of calling the Rust implementation directly, this gets the class struct of the type that is passed in and then calls the function pointer for the virtual method implementation of that specific type.

#[no_mangle]
pub unsafe extern "C" fn ex_foo_increment(this: *mut Foo, inc: i32) -> i32 {
    callback_guard!();

    let klass = (*this).get_class();

    (klass.increment.as_ref().unwrap())(this, inc)
}

Subclasses would override this default implementation (or provide an actual implementation) in exactly the same way, and would also chain up to the parent class’ implementation as we saw before for GObject::finalize; the sketch below illustrates the idea.
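As a rough, hypothetical sketch (not the exact code from the repository), Bar could override increment in its own class_init and chain up to Foo’s implementation roughly like this; BarClass, Bar and Bar’s PRIV static are assumed to be set up analogously to Foo’s:

impl BarClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        // [...] store the parent class pointer in Bar's PRIV, as shown for Foo above

        // Override the increment virtual method inherited from Foo
        let foo_klass = &mut *(klass as *mut foo::imp::FooClass);
        foo_klass.increment = Some(Bar::increment_trampoline);
    }
}

impl Bar {
    unsafe extern "C" fn increment_trampoline(this: *mut foo::imp::Foo, inc: i32) -> i32 {
        // Chain up to Foo's implementation, here (arbitrarily) doubling the increment first
        let parent_klass = &*(PRIV.parent_class as *const foo::imp::FooClass);
        (parent_klass.increment.as_ref().unwrap())(this, 2 * inc)
    }
}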

Properties

Similar to Objective-C and C#, GObject has support for properties. These are registered per type, have some metadata attached to them (property type, name, description, writability, valid value range, etc.), and subclasses inherit them and can override them. The main difference between properties and struct fields is that setting/getting a property value executes some code instead of just reading or writing a memory location, and you can connect a callback to a property to be notified whenever its value changes. Properties can also be queried at runtime for a specific type, and set/get via their string names instead of dedicated C API. Allowed types for properties are everything that has a GObject type ID assigned, including all GObject subclasses, many fundamental types (integers, strings, …) and boxed types like our RString and SharedRString above.

Defining Properties

To define a property, we have to register it in the class_init function and also implement the GObject::get_property() and GObject::set_property() virtual methods (or only one of them for read-only / write-only properties). Internally, inside the implementation of our GObject, properties are identified by an integer index, for which we define a simple enum; when a property is registered we get back a GParamSpec pointer that we should also store (for notifying about property changes, for example).

#[repr(u32)]
enum Properties {
    Name = 1,
}

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
    properties: *const Vec<*const gobject_ffi::GParamSpec>,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
    properties: 0 as *const _,
};

In class_init we then override the two virtual methods and register a new property, by providing the name, type, value of our enum corresponding to that property, default value and various other metadata. We then store the GParamSpec related to the property in a Vec, indexed by the enum value. In our example we add a string-typed “name” property that is readable and writable, but can only ever be written to during object construction.

impl FooClass {
    // Class struct initialization, called from GObject
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        {
            let gobject_klass = &mut *(klass as *mut gobject_ffi::GObjectClass);
            gobject_klass.finalize = Some(Foo::finalize);
            gobject_klass.set_property = Some(Foo::set_property);
            gobject_klass.get_property = Some(Foo::get_property);

            let mut properties = Vec::new();

            let name_cstr = CString::new("name").unwrap();
            let nick_cstr = CString::new("Name").unwrap();
            let blurb_cstr = CString::new("Name of the object").unwrap();

            // Index 0 is unused: GObject property IDs start at 1
            properties.push(ptr::null());
            properties.push(gobject_ffi::g_param_spec_string(
                name_cstr.as_ptr(),
                nick_cstr.as_ptr(),
                blurb_cstr.as_ptr(),
                ptr::null_mut(),
                gobject_ffi::G_PARAM_READWRITE | gobject_ffi::G_PARAM_CONSTRUCT_ONLY,
            ));
            gobject_ffi::g_object_class_install_properties(
                gobject_klass,
                properties.len() as u32,
                properties.as_mut_ptr() as *mut *mut _,
            );

            PRIV.properties = Box::into_raw(Box::new(properties));
        }
    }
}

Afterwards we define the trampoline implementations for the set_property and get_property virtual methods.

impl Foo {
    unsafe extern "C" fn set_property(
        obj: *mut gobject_ffi::GObject,
        id: u32,
        value: *mut gobject_ffi::GValue,
        _pspec: *mut gobject_ffi::GParamSpec,
    ) {
        callback_guard!();

        let this = &*(obj as *mut Foo);
        let private = (*this).get_priv();

        // FIXME: How to get rid of the transmute?
        match mem::transmute::<u32, Properties>(id) {
            Properties::Name => {
                // FIXME: Need impl FromGlibPtrBorrow for Value
                let name = gobject_ffi::g_value_get_string(value);
                Foo::set_name(
                    &from_glib_borrow(obj as *mut Foo),
                    private,
                    from_glib_none(name),
                );
            }
            _ => unreachable!(),
        }
    }

    unsafe extern "C" fn get_property(
        obj: *mut gobject_ffi::GObject,
        id: u32,
        value: *mut gobject_ffi::GValue,
        _pspec: *mut gobject_ffi::GParamSpec,
    ) {
        callback_guard!();

        let private = (*(obj as *mut Foo)).get_priv();

        // FIXME: How to get rid of the transmute?
        match mem::transmute::<u32, Properties>(id) {
            Properties::Name => {
                let name = Foo::get_name(&from_glib_borrow(obj as *mut Foo), private);
                // FIXME: Need impl FromGlibPtrBorrow for Value
                gobject_ffi::g_value_set_string(value, name.to_glib_none().0);
            }
            _ => unreachable!(),
        }
    }
}

In there we decide, based on the index, which property is meant, convert from/to the GValue container provided by GObject, and then call into the safe Rust getters/setters.

impl Foo {
    fn get_name(_this: &FooWrapper, private: &FooPrivate) -> Option<String> {
        private.name.borrow().clone()
    }

    fn set_name(_this: &FooWrapper, private: &FooPrivate, name: Option<String>) {
        *private.name.borrow_mut() = name;
    }
}

This property can now be used via the GObject API, e.g. its value can be retrieved via g_object_get(obj, “name”, &pointer_to_a_char_pointer, NULL) in C.

Construct Properties

The property we defined above had one special feature: it can only ever be set during object construction. More generally, every property that is writable can also be set during object construction. This works by providing a value to g_object_new() in the constructor function, which then causes GObject to pass it to our set_property() implementation.

#[no_mangle]
pub unsafe extern "C" fn ex_foo_new(name: *const c_char) -> *mut Foo {
    callback_guard!();

    let prop_name_name = "name".to_glib_none();
    let prop_name_str: Option<String> = from_glib_none(name);
    let prop_name_value = glib::Value::from(prop_name_str.as_ref());

    let mut properties = [
        gobject_ffi::GParameter {
            name: prop_name_name.0,
            value: prop_name_value.into_raw(),
        },
    ];
    let this = gobject_ffi::g_object_newv(
        ex_foo_get_type(),
        properties.len() as u32,
        properties.as_mut_ptr(),
    );

    gobject_ffi::g_value_unset(&mut properties[0].value);

    this as *mut Foo
}

Signals

GObject also supports signals. These are similar to events in e.g. C#, Qt or the C++ Boost signals library, and not to be confused with UNIX signals. GObject signals allow you to connect a callback that is called every time a specific event happens.

Signal Registration

Similarly to properties, these are registered in class_init together with various metadata, can be queried at runtime and are usually referred to by their string name. Notification about property changes is itself implemented with a signal, the GObject::notify signal.

Also similarly to properties, internally in our implementation the signals are referenced by an integer index. We store the signal IDs globally too, indexed by a simple enum.

#[repr(u32)]
enum Signals {
    Incremented = 0,
}

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
    properties: *const Vec<*const gobject_ffi::GParamSpec>,
    signals: *const Vec<u32>,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
    properties: 0 as *const _,
    signals: 0 as *const _,
};

In class_init we then register the signal for our type. For that we provide a name, the parameters of the signal (anything that can be stored in a GValue can be used for this again), the return value (we don’t have one here) and various other metadata. GObject then tells us the ID of the signal, which we store in our vector. In our case we define a signal named “incremented”, that is emitted every time the internal counter of the object is incremented and provides the current value of the counter and by how much it was incremented.

impl FooClass {
    // Class struct initialization, called from GObject
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        let mut signals = Vec::new();

        let name_cstr = CString::new("incremented").unwrap();
        let param_types = [gobject_ffi::G_TYPE_INT, gobject_ffi::G_TYPE_INT];

        // FIXME: Is there a better way?
        let class_offset = {
            let dummy: FooClass = mem::uninitialized();
            ((&dummy.incremented as *const _ as usize) - (&dummy as *const _ as usize)) as u32
        };

        signals.push(gobject_ffi::g_signal_newv(
            name_cstr.as_ptr(),
            ex_foo_get_type(),
            gobject_ffi::G_SIGNAL_RUN_LAST,
            gobject_ffi::g_signal_type_cclosure_new(ex_foo_get_type(), class_offset),
            None,
            ptr::null_mut(),
            None,
            gobject_ffi::G_TYPE_NONE,
            param_types.len() as u32,
            param_types.as_ptr() as *mut _,
        ));

        PRIV.signals = Box::into_raw(Box::new(signals));
    }
}

One special part here is the class_offset. GObject allows (optionally) defining a default class handler for the signal. This is always called when the signal is emitted, and is usually a virtual method that can be overridden by subclasses. During signal registration, the offset in bytes from the start of the class struct to the function pointer of that virtual method is provided.

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
    pub increment: Option<unsafe extern "C" fn(*mut Foo, inc: i32) -> i32>,
    pub incremented: Option<unsafe extern "C" fn(*mut Foo, val: i32, inc: i32)>,
}

impl Foo {
    unsafe extern "C" fn incremented_trampoline(this: *mut Foo, val: i32, inc: i32) {
        callback_guard!();

        let private = (*this).get_priv();

        Foo::incremented(&from_glib_borrow(this), private, val, inc);
    }

    fn incremented(_this: &FooWrapper, _private: &FooPrivate, _val: i32, _inc: i32) {
        // Could do something here. Default/class handler of the "incremented"
        // signal that could be overridden by subclasses
    }
}

This is all exactly the same as for virtual methods, just that it will be automatically called when the signal is emitted.

Signal Emission

For emitting the signal, we have to provide the instance and the arguments as an array of GValues, and then emit the signal by the ID we got back during signal registration.

impl Foo {
    fn increment(this: &FooWrapper, private: &FooPrivate, inc: i32) -> i32 {
        let mut val = private.counter.borrow_mut();

        *val += inc;

        unsafe {
            let params = [this.to_value(), (*val).to_value(), inc.to_value()];
            gobject_ffi::g_signal_emitv(
                params.as_ptr() as *mut _,
                (*PRIV.signals)[Signals::Incremented as usize],
                0,
                ptr::null_mut(),
            );
        }

        *val
    }
}

While all parameters to the signal are provided as GValues here, GObject calls our default class handler, and other C callbacks connected to the signal, with the corresponding C types directly. The conversion is done inside GObject and the corresponding function is then called via libffi. It is also possible to get the array of GValues directly instead, by using the GClosure API, for which there are also Rust bindings.

Connecting to the signal can now be done via e.g. g_signal_connect() from C.

C header

Similarly to the boxed types, we also have to define a C header for the exported GObject C API. This ideally would also be autogenerated from the macro based solution (e.g. with rusty-cheddar), but here we write it manually. This is mostly GObject boilerplate and conventions.

#include <glib-object.h>

G_BEGIN_DECLS

#define EX_TYPE_FOO            (ex_foo_get_type())
#define EX_FOO(obj)            (G_TYPE_CHECK_INSTANCE_CAST((obj),EX_TYPE_FOO,ExFoo))
#define EX_IS_FOO(obj)         (G_TYPE_CHECK_INSTANCE_TYPE((obj),EX_TYPE_FOO))
#define EX_FOO_CLASS(klass)    (G_TYPE_CHECK_CLASS_CAST((klass) ,EX_TYPE_FOO,ExFooClass))
#define EX_IS_FOO_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass) ,EX_TYPE_FOO))
#define EX_FOO_GET_CLASS(obj)  (G_TYPE_INSTANCE_GET_CLASS((obj) ,EX_TYPE_FOO,ExFooClass))

typedef struct _ExFoo      ExFoo;
typedef struct _ExFooClass ExFooClass;

struct _ExFoo {
  GObject parent;
};

struct _ExFooClass {
  GObjectClass parent_class;

  gint (*increment) (ExFoo * foo, gint inc);
  void (*incremented) (ExFoo * foo, gint val, gint inc);
};

GType   ex_foo_get_type    (void);

ExFoo * ex_foo_new         (const gchar * name);

gint    ex_foo_increment   (ExFoo * foo, gint inc);
gint    ex_foo_get_counter (ExFoo * foo);
gchar * ex_foo_get_name    (ExFoo * foo);

G_END_DECLS

Interfaces

While GObject only allows single inheritance, it provides the ability to implement any number of interfaces on a class to provide a common API between independent types. These interfaces are similar to what exists in Java and C#, but similar to Rust traits it is possible to provide default implementations for the interface methods. Also similar to Rust traits, interfaces can declare pre-requisites: interfaces an implementor must also implement, or a base type it must inherit from.

In the repository, a Nameable interface with a get_name() method is implemented. Generally it all works exactly the same as with non-interface types and virtual methods. You register a type with GObject that inherits from G_TYPE_INTERFACE. This type only has a class struct (the interface struct), no instance struct: instead of an instance struct, a typedef’d void * pointer is used, and behind that pointer is the instance struct of the actual type implementing the interface. A default implementation of methods can be provided the same way as with virtual methods in class_init.
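For orientation, the interface structs could be declared roughly like the following sketch (illustrative names; the version in the repository may differ slightly):

use std::os::raw::{c_char, c_void}; // or the equivalent types from the libc crate

// Opaque instance type: behind it is the instance of whatever type implements the interface
#[repr(C)]
pub struct Nameable(c_void);

#[repr(C)]
pub struct NameableInterface {
    pub parent: gobject_ffi::GTypeInterface,
    pub get_name: Option<unsafe extern "C" fn(*mut Nameable) -> *mut c_char>,
}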

There are two main differences though. One is for calling an interface method

impl Nameable {
    // Helper functions
    fn get_iface(&self) -> &NameableInterface {
        unsafe {
            let klass = (*(self as *const _ as *const gobject_ffi::GTypeInstance)).g_class;
            let interface =
                gobject_ffi::g_type_interface_peek(klass as *mut c_void, ex_nameable_get_type());
            &*(interface as *const NameableInterface)
        }
    }
}

#[no_mangle]
pub unsafe extern "C" fn ex_nameable_get_name(this: *mut Nameable) -> *mut c_char {
    callback_guard!();

    let iface = (*this).get_iface();
    iface.get_name.map(|f| f(this)).unwrap_or(ptr::null_mut())
}

Instead of directly getting the class struct from the instance, we have to call GObject API to look up the interface struct (which contains the virtual methods) for a specific interface type ID.

The other difference is in the implementation of the interface. Inside the get_type() function a new set of functions is registered, which are used similarly to class_init for initializing the interface struct

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_type() -> glib_ffi::GType {
        [...]
        // Implement Nameable interface here
        let nameable_info = gobject_ffi::GInterfaceInfo {
            interface_init: Some(FooClass::init_nameable_interface),
            interface_finalize: None,
            interface_data: ptr::null_mut(),
        };
        gobject_ffi::g_type_add_interface_static(
            TYPE,
            ::nameable::imp::ex_nameable_get_type(),
            &nameable_info,
        );
    });
}

impl FooClass {
    unsafe extern "C" fn init_nameable_interface(
        iface: glib_ffi::gpointer,
        _iface_data: glib_ffi::gpointer,
    ) {
        let iface = &mut *(iface as *mut ::nameable::imp::NameableInterface);
        iface.get_name = Some(Foo::nameable_get_name_trampoline);
    }
}

The interface also gets a C header, which looks basically the same as for normal classes.

Usage from C

As mentioned above a few times, we export a normal (GObject) C API. For that various headers have to be written, or ideally generated later. These can all be found here.

Nothing special has to be taken care of when using this API from C: you simply link to the generated shared library, include the headers and then use it like any other GObject-based C API.

Usage from Rust

I briefly mentioned above that mod.rs contains gtk-rs-style Rust bindings. These are also what gets passed (the “Wrapper” arguments) to the safe Rust implementations of the methods.

Ideally these would be autogenerated by a macro, similar to what the gir tool can already do for C-based GObject libraries (this is the tool used to generate most of the GLib, GTK, etc. bindings for Rust).

For usage of those bindings, I’ll just let the code speak for itself

    #[test]
    fn test_counter() {
        let foo = Foo::new(Some("foo's name"));

        let incremented = Rc::new(RefCell::new((0i32, 0i32)));
        let incremented_clone = incremented.clone();
        foo.connect_incremented(move |_, val, inc| {
            *incremented_clone.borrow_mut() = (val, inc);
        });

        assert_eq!(foo.get_counter(), 0);
        assert_eq!(foo.increment(1), 1);
        assert_eq!(*incremented.borrow(), (1, 1));
        assert_eq!(foo.get_counter(), 1);
        assert_eq!(foo.increment(10), 11);
        assert_eq!(*incremented.borrow(), (11, 10));
        assert_eq!(foo.get_counter(), 11);
    }

    #[test]
    fn test_new() {
        let s = RString::new(Some("bla"));
        assert_eq!(s.get(), Some("bla".into()));

        let mut s2 = s.clone();
        s2.set(Some("blabla"));
        assert_eq!(s.get(), Some("bla".into()));
        assert_eq!(s2.get(), Some("blabla".into()));
    }

This does automatic memory management, allows calling base-class methods on instances of a subclass, and provides access to methods, virtual methods, signals, properties, etc.

Usage from Python, JavaScript and Others

Now all this was a lot of boilerplate, but here comes the reason why it is probably all worth it. By exporting a GObject-style C API, we automatically get support for generating bindings for dozens of languages, without having to write any more code. This is possible thanks to the strong API conventions of GObject, and the GObject-Introspection project. Supported languages are for example Rust (of course!), Python, JavaScript (GJS and Node), Go, C++, Haskell, C#, Perl, PHP, Ruby, …

GObject-Introspection achieves this by scanning the C headers, introspecting the GObject types and then generating an XML based API description (which also contains information about ownership transfer!). This XML based API description can then be used by code generators for static, compiled bindings (e.g. Rust, Go, Haskell, …), but it can also be compiled to a so-called “typelib”. The typelib provides a C ABI that allows bindings to be generated at runtime, mostly used by scripting languages (e.g. Python and JavaScript).

To show the power of this, I’ve included a simple Python and JavaScript (GJS) application that uses all the types we defined above, and a Makefile that generates the GObject-Introspection metadata and can directly run the Python and JavaScript applications (“make run-python” and “make run-javascript”).

The Python code looks as follows

#! /usr/bin/python3

import gi
gi.require_version("Ex", "0.1")
from gi.repository import Ex

def on_incremented(obj, val, inc):
    print("incremented to {} by {}".format(val, inc))

foo = Ex.Foo.new("foo's name")
foo.connect("incremented", on_incremented)

print("foo name: " + str(foo.get_name()))
print("foo inc 1: " + str(foo.increment(1)))
print("foo inc 10: " + str(foo.increment(10)))
print("foo counter: " + str(foo.get_counter()))

bar = Ex.Bar.new("bar's name")
bar.connect("incremented", on_incremented)

print("bar name: " + str(bar.get_name()))
print("bar inc 1: " + str(bar.increment(1)))
print("bar inc 10: " + str(bar.increment(10)))
print("bar counter: " + str(bar.get_counter()))

print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))
bar.set_number(10.0)
print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))
bar.set_property("number", 20.0)
print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))

s = Ex.RString.new("something")
print("rstring: " + str(s.get()))
s2 = s.copy()
s2.set("something else")
print("rstring 2: " + str(s2.get()))

s = Ex.SharedRString.new("something")
print("shared rstring: " + str(s.get()))
s2 = s.ref()
print("shared rstring 2: " + str(s2.get()))

and the JavaScript (GJS) code as follows

#!/usr/bin/gjs

const Lang = imports.lang;
const Ex = imports.gi.Ex;

let foo = new Ex.Foo({name: "foo's name"});
foo.connect("incremented", function(obj, val, inc) {
    print("incremented to " + val + " by " + inc);
});

print("foo name: " + foo.get_name());
print("foo inc 1: " + foo.increment(1));
print("foo inc 10: " + foo.increment(10));
print("foo counter: " + foo.get_counter());

let bar = new Ex.Bar({name: "bar's name"});
bar.connect("incremented", function(obj, val, inc) {
    print("incremented to " + val + " by " + inc);
});

print("bar name: " + bar.get_name());
print("bar inc 1: " + bar.increment(1));
print("bar inc 10: " + bar.increment(10));
print("bar counter: " + bar.get_counter());

print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);
bar.set_number(10.0)
print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);
bar["number"] = 20.0;
print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);

let s = new Ex.RString("something");
print("rstring: " + s.get());
let s2 = s.copy();
s2.set("something else");
print("rstring2: " + s2.get());

let s = new Ex.SharedRString("something");
print("shared rstring: " + s.get());
let s2 = s.ref();
print("shared rstring2: " + s2.get());

Both do the same and nothing particularly useful; they simply exercise all of the available API.

What next?

While everything here can be used as-is already (and I use a variation of this in gst-plugin-rs, a crate to write GStreamer plugins in Rust), it’s rather inconvenient. The goal of this blog post is to have a low-level explanation about how all this works in GObject with Rust, and to have a “template” to use for Nikos’ gnome-class macro. Federico is planning to work on this in the near future, and step by step move features from my repository to the macro. Work on this will also be done at the GNOME/Rust hackfest in November in Berlin, which will hopefully yield a lot of progress on the macro but also on the bindings in general.

In the end, this macro would ideally end up in the glib-rs bindings and can then be used directly by anybody to implement GObject subclasses in Rust. At that point, this blog post can hopefully help a bit as documentation to understand how the macro works.

on September 06, 2017 01:47 PM

September 05, 2017

Previously: v4.12.

Here’s a short summary of some of interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to take a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.

One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will only make sure that you can’t copy beyond the structure (but you can therefore still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes are needed before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32-bits until recently.)

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.

Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.

A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like IPC noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.

That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches. ;)

Anyway, the v4.14 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on September 05, 2017 11:01 PM