September 21, 2017

S10E29 – Adamant Terrible Hammer - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This is Le Crossover Ubuntu Mashup Podcast thingy recorded live at UbuCon Europe in Paris, France.

It’s Season Ten Episode Twenty-Nine of the Ubuntu Podcast! Alan Pope, Martin Wimpress, Marius Quabeck, Max Kristen, Rudy and Tiago Carrondo are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 21, 2017 08:10 PM

By Leann Ogasawara, Director of Kernel Engineering

Ubuntu has long been a popular choice for Linux instances on Azure.  Our ongoing partnership with Microsoft has brought forth great results, such as the support of the latest Azure features, Ubuntu underlying SQL Server instances, bash on Windows, Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Servers, and much more.

Canonical, with the team at Microsoft Azure, is now delighted to announce that as of September 21, 2017, Ubuntu Cloud Images for Ubuntu 16.04 LTS on Azure have been enabled with a new Azure tailored Ubuntu kernel by default.  The Azure tailored Ubuntu kernel will receive the same level of support and security maintenance as all supported Ubuntu kernels for the duration of the Ubuntu 16.04 LTS support life.

The kernel itself is provided by the linux-azure kernel package. The most notable highlights for this kernel include:

  • Infiniband and RDMA capability for Azure HPC to deliver optimized performance of compute-intensive workloads on Azure A8, A9, H-series, and NC24r.
  • Full support for Accelerated Networking in Azure.  Direct access to the PCI device provides gains in overall network performance offering the highest throughput and lowest latency for guests in Azure.  Transparent SR-IOV eliminates configuration steps for bonding network devices.  SR-IOV for Linux in Azure is in preview but will become generally available later this year.
  • NAPI and Receive Segment Coalescing for 10% greater throughput on guests not using SR-IOV.
  • 18% reduction in kernel size.
  • Hyper-V socket capability — a socket-based host/guest communication method that does not require a network.
  • The very latest Hyper-V device drivers and feature support available.

The ongoing collaboration between Canonical and Microsoft will also continue to produce upgrades to newer kernel versions providing access to the latest kernel features, bug fixes, and security updates.  Any Ubuntu 16.04 LTS image brought up from the Azure portal after September 21st will be running on this Azure tailored Ubuntu kernel.

How to verify which kernel is used:

$ uname -r

4.11.0-1011-azure
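
To double-check which package provides the running kernel, you can also ask dpkg (output illustrative):

$ dpkg -S "/boot/vmlinuz-$(uname -r)"
linux-image-4.11.0-1011-azure: /boot/vmlinuz-4.11.0-1011-azure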


Instances using the Azure tailored Ubuntu kernel will, of course, be supportable through Canonical’s Ubuntu Advantage service, available for purchase on our online shop or through sales@canonical.com in three tiers:

  • Essential: designed for self-sufficient users, providing access to our self-support portal as well as a variety of Canonical tools and services.
  • Standard: adding business-hours web and email support on top of the contents of Essential, as well as a 2-hour to 2-business-day response time (severity 1-4).
  • Advanced: adding 24×7 web and email support on top of the contents of Essential, as well as a 1-hour to 1-business-day response time (severity 1-4).

The Azure tailored Ubuntu kernel will not support the Canonical Livepatch Service at the time of this announcement, but investigation is underway to evaluate delivery of this service in the future.

If, for now, you prefer livepatching at scale over the above performance improvements, it is possible to revert to the standard kernel, using the following commands:


$ sudo apt install linux-virtual linux-cloud-tools-virtual

$ sudo apt purge linux*azure

$ sudo reboot


As we continue to collaborate closely with various Microsoft teams on public cloud, private cloud, containers and services, you can expect further boosts in performance, simplification of operations at scale, and enablement of new innovations and technologies.

on September 21, 2017 04:00 PM

Name rejected

Ante Karamatić

After 8-9 days and an email from my side, today I finally got word of what is happening with my application. I am passing it on in full:

On 12.09.2017 the name reservation was sent to the Commercial Court in Zagreb (e-tvrtka), and the documentation and the RZ form were sent by post to Hitro.hr Zagreb.
The paper documentation was submitted to the court on 13.09.2017. The name reservation did not go through. The notice was picked up from the court on 18.09.2017 (Hitro.hr – Zagreb).
The notice arrived by post today at Hitro.hr – Šibenik (21.09.2017). I called your mobile phone so that you could pick up the confirmation, but nobody answers.
I am therefore informing you that you can pick up the notice at HITRO.HR Šibenik.

So, eTvrtka is one big nothing; a plain lie and a fraud. Documents are still sent around by post. To be clear, this is not the fault of the clerks, who were accommodating. This is a problem of how the state, or rather the Government, is organized. The clerks are victims here just as much as those of us who are trying to create something.

So, the name was rejected.

In the Republic of Croatia it takes 10 days to find out whether you can start a company under a given name. In other countries such things don't even exist; companies are started within a single day. If we want to be fertile ground for entrepreneurship, hitro.hr should be abolished (it is completely pointless) and modern technology introduced: algorithms can check names, and the whole thing should be just a web page. No protocols, no payments, no standing in line.

on September 21, 2017 03:47 PM

The Ubuntu Community Council election has begun and ballots have been sent out to all Ubuntu Members. Voting closes September 27th at end of day UTC.

The following candidates are standing for 7 seats on the council:

Please contact the community-council@lists.ubuntu.com list if you are an Ubuntu Member but did not receive a ballot. Voting instructions were sent to the public address defined in Launchpad or, if none is defined, to your launchpad_id@ubuntu.com address. Please also make sure you check your spam folder first.

We’d like to thank all the candidates for their willingness to serve in this capacity, and members for their considered votes.

Originally posted to the ubuntu-news-team mailing list on Tue Sep 12 14:22:49 UTC 2017 by Mark Shuttleworth

on September 21, 2017 03:30 PM

This article originally appeared on George Kraft’s blog

When we built the Canonical Distribution of Kubernetes (CDK), one of our goals was to provide snap packages for the various Kubernetes clients and services: kubectl, kube-apiserver, kubelet, etc.

While we mainly built the snaps for use in CDK, they are freely available to use for other purposes as well. Let’s have a quick look at how to install and configure the Kubernetes snaps directly.

The Client Snaps

This covers: kubectl, kubeadm, kubefed

Nothing special to know about these. Just snap install and you can use them right away:

$ sudo snap install kubectl --classic
kubectl 1.7.4 from 'canonical' installed
$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.4", GitCommit:"793658f2d7ca7f064d2bdf606519f9fe1229c381", GitTreeState:"clean", BuildDate:"2017-08-17T08:48:23Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

The Server Snaps

This covers: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet, kube-proxy

Example: kube-apiserver

We will use kube-apiserver as an example. The other services generally work the same way.

Install with snap install
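
For example (a sketch; per the note further down, some of these server snaps can run strictly confined, so no --classic flag is assumed here):

$ sudo snap install kube-apiserver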

This creates a systemd service named snap.kube-apiserver.daemon. Initially, it will be in an error state because it’s missing important configuration:

$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: inactive (dead) (Result: exit-code) since Fri 2017-09-01 15:54:39 UTC; 11s ago
   ...

Configure kube-apiserver using snap set.

sudo snap set kube-apiserver \
  etcd-servers=https://172.31.9.254:2379 \
  etcd-certfile=/root/certs/client.crt \
  etcd-keyfile=/root/certs/client.key \
  etcd-cafile=/root/certs/ca.crt \
  service-cluster-ip-range=10.123.123.0/24 \
  cert-dir=/root/certs

Note: Any files used by the service, such as certificate files, must be placed within the /root/ directory to be visible to the service. This limitation allows us to run a few of the services in a strict confinement mode that offers better isolation and security.

After configuring, restart the service and you should see it running:

$ sudo service snap.kube-apiserver.daemon restart
$ systemctl status snap.kube-apiserver.daemon
● snap.kube-apiserver.daemon.service - Service for snap application kube-apiserver.daemon
   Loaded: loaded (/etc/systemd/system/snap.kube-apiserver.daemon.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2017-09-01 16:02:33 UTC; 6s ago
   ...

Configuration

The keys and values for snap set map directly to arguments that you would normally pass to the service. You can view a list of arguments by invoking the service directly, e.g. kube-apiserver -h.

For configuring the snaps, drop the leading dashes and pass them through snap set. For example, if you want kube-apiserver to be invoked like this:

kube-apiserver --etcd-servers https://172.31.9.254:2379 --allow-privileged

You would configure the snap like this:

snap set kube-apiserver etcd-servers=https://172.31.9.254:2379 allow-privileged=true

Note, also, that we had to specify a value of true for allow-privileged. This applies to all boolean flags.

Going deeper

Want to know more? Here are a couple good things to know:

If you’re confused about what snap set ... is actually doing, you can read the snap configure hooks in

/snap/<snap-name>/current/meta/hooks/configure

to see how they work.

The configure hook creates an args file here:

/var/snap/<snap-name>/current/args

This contains the actual arguments that get passed to the service by the snap:

$ cat /var/snap/kube-apiserver/current/args 
--cert-dir "/root/certs"
--etcd-cafile "/root/certs/ca.crt"
--etcd-certfile "/root/certs/client.crt"
--etcd-keyfile "/root/certs/client.key"
--etcd-servers "https://172.31.9.254:2379"
--service-cluster-ip-range "10.123.123.0/24"

Note: While you can technically bypass snap set and edit the args file directly, it’s best not to do so. The next time the configure hook runs, it will obliterate your changes. This can occur not only from a call to snap set but also during a background refresh of the snap.

The source code for the snaps can be found here: https://github.com/juju-solutions/release/tree/rye/snaps/snap

We’re working on getting these snaps added to the upstream Kubernetes build process. You can follow our progress on that here: https://github.com/kubernetes/release/pull/293

If you have any questions or need help, you can either find us at #juju on freenode, or open an issue against https://github.com/juju-solutions/bundle-canonical-kubernetes and we’ll help you out as soon as we can.

on September 21, 2017 01:46 PM

Another successful Randa meeting! I spent most of my days working on snappy packaging for KDE core applications, and I have most of them done!

Snappy Builds on KDE Neon

We need testers! Please see Using snappy to get started.

In the evenings I worked on getting all my appimage work moved into the KDE infrastructure so that the community can take over.

I learned a great deal about accessibility and have been formulating ways to improve KDE neon in this area.

Randa meetings are crucial to the KDE community for developer interaction, brainstorming, and bringing great new things to KDE. I encourage all of you to please consider a donation at https://www.kde.org/fundraisers/randameetings2017/

on September 21, 2017 12:54 PM

September 20, 2017

Finding your VMs and containers via DNS resolution so you can ssh into them can be tricky. I was talking with Stéphane Graber today about this and he reminded me of his excellent article: Easily ssh to your containers and VMs on Ubuntu 12.04.

These days, libvirt has the `virsh domifaddr` command and LXD has a slightly different way of finding the IP address.

Here is an updated `~/.ssh/config` that I’m now using (thank you Stéphane for the update for LXD):

Host *.lxd
    #User ubuntu
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(lxc list -c s4 $(echo %h | sed "s/\.lxd//g") %h | grep RUNNING | cut -d' ' -f4) %p
 
Host *.vm
    #StrictHostKeyChecking no
    #UserKnownHostsFile /dev/null
    ProxyCommand nc $(virsh domifaddr $(echo %h | sed "s/\.vm//g") | awk -F'[ /]+' '{if (NR>2 && $5) print $5}') %p

You may want to uncomment `StrictHostKeyChecking` and `UserKnownHostsFile` depending on your environment (see `man ssh_config` for details).

With the above, I can ssh in with:

$ ssh foo.vm uptime
16:37:26 up 50 min, 0 users, load average: 0.00, 0.00, 0.00
$ ssh bar.lxd uptime
21:37:35 up 12:39, 2 users, load average: 0.55, 0.73, 0.66
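
If a name ever fails to resolve, you can run the lookups that the ProxyCommands wrap by hand (output illustrative):

$ lxc list -c s4 bar
+---------+---------------------+
| STATE   | IPV4                |
+---------+---------------------+
| RUNNING | 10.158.86.21 (eth0) |
+---------+---------------------+
$ virsh domifaddr foo
 Name       MAC address          Protocol     Address
-------------------------------------------------------------
 vnet0      52:54:00:12:34:56    ipv4         192.168.122.45/24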

Enjoy!


Filed under: canonical, ubuntu, ubuntu-server
on September 20, 2017 09:39 PM

Namespaced file capabilities

As of this past week, namespaced file capabilities are available in the upstream kernel. (Thanks to Eric Biederman for many review cycles and for the final pull request)

TL;DR

Some packages install binaries with file capabilities, and fail to install if you cannot set the file capabilities. Such packages could not be installed from inside a user namespace. With this feature, that problem is fixed.

Yay!

What are they?

POSIX capabilities are pieces of root’s privilege which can be individually used.

File capabilities are POSIX capability sets attached to files. When files with associated capabilities are executed, the resulting task may end up with privilege even if the calling user was unprivileged.

What’s the problem

In single-user-namespace days, POSIX capabilities were completely orthogonal to userids. You could be a non-root user with CAP_SYS_ADMIN, for instance. This could happen by starting as root, setting PR_SET_KEEPCAPS through prctl(2), then dropping the capabilities you didn’t want and changing your uid.  Or, it could happen by a non-root user executing a file with file capabilities.  In order to append such a capability to a file, you require the CAP_SETFCAP capability.

User namespaces had several requirements, including:

  1. an unprivileged user should be able to create a user namespace
  2. root in a user namespace should be privileged against its resources
  3. root in a user namespace should be unprivileged against any resources which it does not own.

So in a post-user-namespace age, an unprivileged user can “have privilege” with respect to files they own. However, if we allowed them to write a file capability onto one of their files, then they could execute that file as an unprivileged user on the host, thereby gaining that privilege. This would violate the third user namespace requirement, and is therefore not allowed.

Unfortunately – and fortunately – some software wants to be installed with file capabilities. On the one hand that is great, but on the other hand, if the package installer isn’t able to handle the failure to set file capabilities, then package installs are broken. This was the case for some common packages – for instance httpd on CentOS.

With namespaced file capabilities, file capabilities continue to be orthogonal with respect to userids mapped into the namespace. However, the capabilities are tagged as belonging to the host uid mapped to the container’s root id (0).  (If uid 0 is not mapped, then file capabilities cannot be assigned.)  This prevents the namespace owner from gaining privilege in a namespace against which they should not be privileged.
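
As a rough illustration (assuming a kernel with this feature plus reasonably recent util-linux and libcap; the exact getcap output format varies by libcap version):

$ id -u                                  # an ordinary unprivileged host user
1000
$ unshare --user --map-root-user bash    # ns uid 0 now maps to host uid 1000
# cp /bin/true ./myprog                  # a file owned by ns root
# setcap cap_net_raw+ep ./myprog         # allowed: ns root holds CAP_SETFCAP
# getcap ./myprog
./myprog = cap_net_raw+ep

On the host side, the security.capability xattr is tagged with rootid 1000, so executing the file grants the capability only inside namespaces whose root maps to that uid.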


Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


on September 20, 2017 03:37 PM

Now that GNOME 3.26 is released, available in Ubuntu artful, and final GNOME Shell UI is confirmed, it’s time to adapt our default user experience to it. Let’s discuss how we worked with dash to dock upstream on the transparency feature. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 13: Adaptive transparency for Ubuntu Dock

The excellent new GNOME Shell 3.26 release thus ships with some dynamic panel transparency by default. If no window is next to the top panel, the bar itself is translucent. If any window is next to it, the panel becomes opaque. This feature is highlighted in the GNOME 3.26 release notes. As we already discussed in a previous blog post, this means that the Ubuntu Dock default opacity level doesn’t fit very well with the transparent top panel on an empty desktop.

Previous default Ubuntu Dock transparency

Even though there were some discussions within GNOME about keeping or reverting this dynamic transparency feature, we reached out to the Dash to Dock folks during the 3.25.9x period to be prepared. Some excellent discussions then started on the pull request, which was already rolling full speed ahead.

The first idea was to have fully dynamic transparency: one status for the top panel, and another for the Dock itself. However, this made for a somewhat weird user experience after playing with it a little bit:

You can feel there is too much flickering, with both parts of the UI behaving independently. The idea I raised upstream was thus to consider all Shell UI (which, in the Ubuntu session, is the top panel and Ubuntu Dock) as a single entity, their opacity status linked as one UI element. François agreed, having had the same idea in mind, and implemented it. The result is way more natural:

Those behaviors are implemented as options in the Dash to Dock settings panel, and we simply set this last one as the default in Ubuntu Dock.
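
If you want to play with the mode yourself, the extension’s settings are exposed through gsettings; a sketch using the schema installed by Dash to Dock (key and value names are best verified against your installed version):

$ gsettings list-recursively org.gnome.shell.extensions.dash-to-dock | grep -i transparency
$ gsettings set org.gnome.shell.extensions.dash-to-dock transparency-mode 'DYNAMIC'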

We made sure that this option is working well with the various dock settings we expose in the Settings application:

In particular, you can see that intelli-hide works as expected: the dock opacity changes while the Dock vanishes, and when you force it to show up again, it is at the maximum opacity that we set.

The default, with no application next to the panel or dock, now looks quite good:

Default empty Ubuntu artful desktop

The best part is this: as we get closer to release, there is still a little work upstream to get everything (options and settings UI that don’t impact Ubuntu Dock) merged into Dash to Dock itself, so Michele has prepared a cleaned-up branch, kept compatible with master for us, that we can cherry-pick from directly into our ubuntu-dock branch! Now that the Feature Freeze and UI Freeze exceptions have been approved, the Ubuntu Dock package is currently building in the artful repository alongside other fixes and some shortcut improvements.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

It’s really a pleasure to work with the Dash to Dock upstream; I’m using this blog opportunity to thank them again for everything they do and for how easy they make cooperating on our use case.

on September 20, 2017 11:25 AM

Last week, I and a number of the OpenStack Charms team had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross-project discussions, with the last three days focused on project-specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators.  What was immediately evident was that this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade.  Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running; this would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it’s never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release.  Notably Nova and Keystone have already moved to this approach during previous development cycles.

Deployment (SIG)

During the first two days, some cross-deployment-tool discussions were held on a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects, so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens, etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time; the Keystone v2 API is officially deprecated, so we discussed the approach for switching the default API deployed by the charms going forward; in summary:

  • New deployments should default to the v3 API and associated policy definitions
  • Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact of services built around v2 based deployments already in production.
  • The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so (see the sketch below).

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.
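
As a sketch of that explicit switch (the option name as carried by the keystone charm; check juju config keystone for your charm revision):

$ juju config keystone preferred-api-version=3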

Keystone: Fernet Token support

The charms currently only support UUID-based tokens (since PKI was dropped from Keystone); the preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08.  At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch.  This is reflected in the promulgated charms in the Juju charm store as well.  Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions, which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant but would work.  Maintaining more branches means more resource effort for cherry-picks and reviews, which is not feasible with the current amount of time the development team has for these activities, so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on underlying control plane infrastructure, as daemons drop and re-connect to message and database services, potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up focusing on how we could implement a feature which batches up restarts of services into chunks based on a user-provided configuration option.

You can read the full details in the proposed specification for this work.

We also had some good conversation around how unit-level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe it’s causing problems) without having to restart services across all units to support this.  This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

  • Multiple Region Cloud Deployments
    • Keystone + MySQL and Dashboard in one model (supporting all regions)
    • Each region (including region specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DC’s.
  • Keystone Federation Support
    • Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements made with good baseline metrics on charm hook execution times on reference hardware deployments so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – so expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP.  This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle; specifically we hacked on:

Good progress was made in most areas with some reviews already up.

We had a good turnout with 10 charm developers in the devroom – thanks to everyone who attended and a special call-out to Billy Olsen who showed up with team T-Shirts for everyone!

We have some new specs already up for review, and I expect to see a few more over the next two weeks!

EOM


on September 20, 2017 10:51 AM

Crickets

Ante Karamatić


Waiting

Hitro.hr says that a name reservation is handled in 3 (three) working days. The request was submitted on Tuesday, September 12. Today is September 20. All you can hear is crickets.

on September 20, 2017 08:05 AM

APRX On Ubuntu Repository

Mohamad Faizul Zulkifli

Good news! I just noticed that aprx packages are already listed in the Ubuntu repository.



Aprx is a software package designed to run on any POSIX platform (Linux/BSD/Unix/etc.) and act as an APRS Digipeater and/or Internet Gateway. Aprx is able to support most APRS infrastructure deployments, including single stand-alone digipeaters, receive-only Internet gateways, full RF-gateways for bi-directional routing of traffic, and multi-port digipeaters operating on multiple channels or with multiple directional transceivers.
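
Since it is in the archive, installation is a single command:

$ sudo apt install aprx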

For more info visit:



If you want to know more about aprs and ham radio visit:







on September 20, 2017 08:01 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #519 for the weeks of September 5 – 18, 2017, and the full version is available here.

In this issue we cover:

This issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • Chris Guiver
  • Athul Muralidhar
  • Alan Diggs (Le Schyken, El Chicken)
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on September 20, 2017 04:11 AM

Kubuntu 17.10 — code-named Artful Aardvark — will be released on October 19th, 2017. We need a new banner for the website, and invite artists and designers to submit designs to us based on the Plasma wallpaper and perhaps the mascot design.

The banner is a 1500×385 SVG.

Please submit designs to the Kubuntu-devel mail list.

on September 20, 2017 02:25 AM

September 18, 2017

Adventurous users and developers running the Artful development release can now also test the beta version of Plasma 5.11. This is experimental and can possibly kill kittens!

Bug reports on this beta go to https://bugs.kde.org, not to Launchpad.

The PPA comes with a WARNING: Artful will ship with Plasma 5.10.5, so please be prepared to use ppa-purge to revert changes. Plasma 5.11 will ship too late for inclusion in Kubuntu 17.10, but should be available via backports soon after release day, October 19th, 2017.
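
Reverting is a one-liner once ppa-purge is installed:

$ sudo apt install ppa-purge
$ sudo ppa-purge ppa:kubuntu-ppa/beta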

Read more about the beta release: https://www.kde.org/announcements/plasma-5.10.95.php

If you want to test on Artful: sudo add-apt-repository ppa:kubuntu-ppa/beta && sudo apt-get update && sudo apt full-upgrade -y

The purpose of this PPA is testing, and bug reports go to bugs.kde.org.

on September 18, 2017 11:12 PM

I had the distinct honor to deliver the closing keynote of the UbuCon Europe conference in Paris a few weeks ago.  First off -- what a beautiful conference and venue!  Kudos to the organizers who really put together a truly remarkable event.  And many thanks to the gentleman (Elias?) who brought me a bottle of his family's favorite champagne, as a gift on Day 2 :-)  I should give more talks in France!

In my keynote, I presented the results of the Ubuntu 18.04 LTS Default Desktops Applications Survey, which was discussed at length on HackerNews, Reddit, and Slashdot.  With the help of the Ubuntu Desktop team (led by Will Cooke), we processed over 15,000 survey responses and in this presentation, I discussed some of the insights from the data.

The team is now hard at work evaluating many of the suggested applications, for those of you that aren't into the all-Emacs spin of Ubuntu ;-)

Moreover, we're also investigating a potential approach to make the Ubuntu Desktop experience perhaps a bit like those Choose-Your-Own-Adventure books we loved when we were kids, where users have the opportunity to select each of their prefer applications (or stick with the distro default) for a handful of categories, during installation.

Marius Quabeck recorded the session and published the audio and video of the presentation here on YouTube:


You can download the slides here, or peruse them below:


Cheers,
Dustin
on September 18, 2017 10:34 PM

MAAS 2.3.0 Alpha 3 release!

Andres Rodriguez

MAAS 2.3.0 (alpha3)

New Features & Improvements

Hardware Testing (backend only)

MAAS has now introduced an improved hardware testing framework. This new framework allows MAAS to test individual components of a single machine, as well as providing better feedback to the user for each of those tests. This feature has introduced:

  • Ability to define a custom testing script with a YAML definition – each custom test can be defined with YAML that provides information about the test, including the script name, description, required packages, and other metadata about what information the script will gather. This information can then be displayed in the UI (see the sketch after this list).

  • Ability to pass parameters – Adds the ability to pass specific parameters to the scripts. For example, in upcoming beta releases, users would be able to select which disks they want to test if they don’t want to test all disks.

  • Running tests individually – Improves the way hardware tests are run per component. This allows MAAS to run tests against any individual component (such as a single disk).

  • Adding additional performance tests

    • Added a CPU performance test with 7z.

    • Added a storage performance test with fio.
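
As a purely illustrative sketch (the field names below are hypothetical, not the framework’s actual schema), a test script carries its YAML definition as metadata alongside the code itself:

#!/bin/bash
# Hypothetical MAAS test script; the YAML block illustrates the kind of
# metadata the framework reads (name, description, required packages):
#
#   name: storage-fio-quick
#   description: Quick fio read benchmark against a single disk
#   hardware_type: storage
#   packages: [fio]
#
# Illustrative invocation; a real script would discover or receive the disk.
fio --name=read-test --filename="$DISK" --rw=read --runtime=30 --time_based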

Please note that individual results for each of the components are currently only available over the API. Upcoming beta releases will include various UI improvements that will allow the user to better surface and interact with these new features.

Rack Controller Deployment in Whitebox Switches

MAAS now has the ability to install and configure a MAAS rack controller once a machine has been deployed. As of today, this feature is only available when MAAS detects that the machine is a whitebox switch. As such, all MAAS certified whitebox switches will be deployed with a MAAS rack controller. Currently certified switches include the Wedge 100 and the Wedge 40.

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will need access to the internet to be able to install the MAAS snap.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

UI – Controller Versions & Notifications

MAAS now surfaces the version of each running controller, and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Issues fixed in this release

  • #1702703    Cannot run maas-regiond without /bin/maas-rack
  • #1711414    [2.3, snap] Cannot delete a rack controller running from the snap
  • #1712450    [2.3] 500 error when uploading a new commissioning script
  • #1714273    [2.3, snap] Rack Controller from the snap fails to power manage on IPMI
  • #1715634    ‘tags machines’ takes 30+ seconds to respond with list of 9 nodes
  • #1676992    [2.2] Zesty ISO install fails on region controller due to postgresql not running
  • #1703035    MAAS should warn on version skew between controllers
  • #1708512    [2.3, UI] DNS and Description Labels misaligned on subnet details page
  • #1711700    [2.x] MAAS should avoid updating DNS if nothing changed
  • #1712422    [2.3] MAAS does not report form errors on script upload
  • #1712423    [2.3] 500 error when clicking the ‘Upload’ button with no script selected.
  • #1684094    [2.2.0rc2, UI, Subnets] Make the contextual menu language consistent across MAAS
  • #1688066    [2.2] VNC/SPICE graphical console for debugging purpose on libvirt pod created VMs
  • #1707850    [2.2] MAAS doesn’t report cloud-init failures post-deployment
  • #1711714    [2.3] cloud-init reporting not configured for deployed ubuntu core systems
  • #1681801    [2.2, UI] Device discovery – Tooltip misspelled
  • #1686246    [CLI help] set-storage-layout says Allocated when it should say Ready
  • #1621175    BMC acc setup during auto-enlistment fails on Huawei model RH1288 V3

For full details please visit:

https://launchpad.net/maas/+milestone/2.3.0alpha3

on September 18, 2017 02:24 PM

Please note that this post, like all of those on my blog, represents only my views, and not those of my employer. Nothing in here implies official hiring policy or requirements.

I’m not going to pretend that this article is unique or has magic bullets to get you into the offensive security space. I also won’t pretend to speak for others in that space or in other areas of information security. It’s a big field, and it turns out that a lot of us have opinions about it. Mubix maintains a list of posts like this so you can see everyone’s opinions. I highly recommend the post “So You Want to Work in Security” by Parisa Tabriz for a view that’s not specific to offensive security. (Though there’s a lot of cross-over.)

My personal area of interest – some would even say expertise – is offensive application security, which includes activities like black box application testing, reverse engineering (but not, generally, malware reversing), penetration testing, and red teaming. I also do whitebox code review and various other things, but mostly I attack things using the same tools and techniques that an illicit attacker would. Of course, I do this in the interest of securing those systems and learning from the experience to help engineer stronger and more robust systems.

I do a lot of work with recruiting and outreach in our company, so I’ve had the chance to talk to many people about what I think makes a good offensive security engineer. After a few dozen times and much reflection, I decided to write out my thoughts on getting started. Don’t believe this is all you need, but it should help you get started.

A Strong Sense of Curiosity and a Desire to Learn

This isn’t a field or a speciality that you get into after a few courses and can stop there. To be successful, you’ll have to constantly keep learning. To keep learning like that, you have to want to keep learning. I spend a lot of my weekends and evenings playing with technology because I want to understand how it works (and consequently, how I can break it). There’s a lot of ways to learn things that are relevant to this field:

  • Reddit
  • Twitter (follow a bunch of industry people)
  • Blogs (perhaps even mine…)
  • Books (my favorites in the resources section)
  • Courses
  • Attend Conferences (Network! Ask people what they’re doing!)
  • Watch Conference Videos
  • Hands on Exercises

Everyone has a different learning style; you’ll have to learn what works for you. I learn best by doing (hands-on) and somewhat by reading. Videos are just inspiration for me to look more into something. Twitter and Reddit are the starting grounds to find all the other resources.

I see an innate passion for this field in most of the successful people I know. Many of us would do this even if we weren’t paid (and do some of it in our spare time anyway). You don’t have to spend every waking moment working, but you do have to keep moving forward or get left behind.

Understanding the Underlying System

To identify, understand, and exploit security vulnerabilities, you have to understand the underlying system. I’ve seen “penetration testers” who don’t know that paths on Linux/Unix systems start with / and use / as the path separator. Watching someone try to exploit a potential LFI with \etc\passwd is just painful. (Hint: it doesn’t work.)
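
For contrast, a minimal sketch of a Unix-style LFI probe (hypothetical vulnerable parameter and host; output illustrative):

$ curl 'http://target.example/page?file=../../../../etc/passwd'
root:x:0:0:root:/root:/bin/bash
...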

If you’re attacking web applications, you should at least have some understanding of:

  • The HTTP Protocol
  • The Same Origin Policy
  • The programming language used
  • The operating system underneath

For non-web networked applications:

  • A basic understanding of TCP/IP (or UDP/IP, if applicable)
  • The OSI Model
  • Basic computer architecture (stack, heap, etc.)
  • Language used for implementation

You don’t have to know everything about every layer, but each item you don’t know is either something you’ll potentially miss, or something that will cost you time. You’ll learn more as you develop your skills, but there’s some fundamentals that will help you get started:

  • Learn at least one interpreted and one compiled programming language.
    • Python and Ruby are good choices for interpreted languages, as most security tools are written in one of the two, so you can modify and create your own tools when needed.
    • C is the classic language for demonstrating memory corruption vulnerabilities, and it doesn’t hide a lot of the underlying system, so it’s a good choice for a compiled language.
  • Know basic use of both Linux and Windows. Basic use includes:
    • Network configuration
    • Command line basics
    • How services are run
  • Learn a bit about x86/x86-64 architecture.
    • What are pointers?
    • What is the stack and the heap?
    • What are registers?

You don’t have to have a full CS degree (but it certainly wouldn’t hurt), but if you don’t understand how developers do their work, you’ll have a much harder time looking for and exploiting vulnerabilities. Many of the best penetration testers and security researchers have had experience as network administrators, systems administrators, or developers – this experience is incredibly useful in understanding the underlying systems.

The CIA Triad

To understand security at all, you should understand the CIA triad. This has nothing to do with the American intelligence agency, but everything to do with the 3 pillars of information security: Confidentiality, Integrity, and Availability.

Confidentiality refers to allowing only authorized access to data. For example, preventing access to someone else’s email falls into confidentiality. This idea has strong parallels to the notion of privacy. Encryption is often used (and misused) in the pursuit of confidentiality. Heartbleed is an example of a well-known bug affecting confidentiality.

Integrity refers to allowing only authorized changes to state. This can be the state of data (avoiding file tampering), the state of execution (avoiding remote code execution), or some combination. Most of the “exciting” vulnerabilities in information security impact integrity. GHOST is an example of a well-known bug affecting integrity.

Availability is, perhaps, the easiest concept to understand. This refers to the ability of a service to be accessed by legitimate users when they want to access it (and probably also at the speed they’d like).

These 3 concepts are the main areas of concern for security engineers.

Understanding Vulnerabilities

There are many ways to categorize vulnerabilities, so I won’t try to list them all, but find some and understand how they work. The OWASP Top 10 is a good start for web vulnerabilities. The Modern Binary Exploitation course from RPISEC is a good choice for understanding “Binary Exploitation”.

It’s really valuable to distinguish a bug from a vulnerability. Most vulnerabilities are bugs, but most bugs are not vulnerabilities. Bugs are accidentally introduced misbehavior in software. Vulnerabilities are ways to gain access to a higher (or different) privilege level in an unintended fashion. Generally, a bug must violate one of the 3 pillars of the CIA triad to be classified as a vulnerability. (Though this is often subjective, see [systemd bug].)

Doing Security

At some point, it stops being about what you know and starts being about what you can do. Knowing things is useful in being able to do, but merely reciting facts is not very useful in actual offensive security. Getting hands-on experience is critical, and this is one field where you need to be careful how you do it. Please remember that, however you choose to practice, you should stay legal and observe all applicable laws.

There’s a number of different options here that build relevant skills:

  • Formal classes with hands-on components
  • CTFs (please note that most CTF challenges bear little resemblance to actual security work)
  • Wargames (see CTFs, but some are closer)
  • Lab work
  • Bug bounties

Of these, lab work is the most relevant to me, but also the one requiring the most time investment to set up. Typically, a lab will involve setting up one or more practice machines with known-vulnerable software (though feel free to progress to finding unknown issues). I’ll have a follow-up post with information on building an offensive security practice lab.

Bug bounties are a good option, but for a beginner they’ll be very daunting, because much of the low-hanging fruit will be gone and there should be no known vulnerabilities to practice on. Getting into bug bounties without any prior experience at all is likely to teach only frustration and anger.

Resources

These are some suggested resources for getting started in offensive security. I’ll try to maintain them as I receive suggestions from other members of the community.

Web Resources (Reading/Watching)

Books

Courses

Lab Resources

I’ll have a follow-up about building a lab soon, but there’s some things worth looking at here:

Conclusion

This isn’t an especially easy field to get started in, but it’s the challenge that keeps most of us in it. I know I need to constantly be pushing the edge of my capabilities and of technology for it to stay satisfying. Good luck, and maybe you’ll soon be the author of one of the great resources in our community.

If you have other tips/resources that you think should have been included here, drop me a line or reach me on Twitter.

on September 18, 2017 07:00 AM

September 17, 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 189 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month.

The security tracker currently lists 59 packages with a known CVE and the dla-needed.txt file 60. The number of packages with open issues decreased slightly compared to last month, but we’re not yet back to the usual situation. The number of CVEs to fix per package tends to increase due to the increased usage of fuzzers.

Thanks to our sponsors

New sponsors are in bold.


on September 17, 2017 08:24 AM

September 14, 2017

In just 5 minutes you will package your first snap application. It couldn't be easier!
Do you accept the challenge? ;) Let's go!

video tutorial


Based on the talk by Alan Pope and Martin Wimpress at the 2nd European Ubucon:
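
If you prefer text to video, here is a rough sketch of the flow the talk walks through (package name hypothetical; at the time snapcraft was installable as a deb or as a snap):

$ sudo snap install snapcraft --classic   # or: sudo apt install snapcraft
$ mkdir hello && cd hello
$ snapcraft init                          # writes a template snapcraft.yaml to edit
$ snapcraft                               # builds hello_<version>_<arch>.snap
$ sudo snap install hello_*.snap --dangerous   # install the local, unsigned build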

snap! snap! snap!
on September 14, 2017 05:17 PM

This week we’ve been adding LED lights to a home studio, we announce the winner of the Entroware Apollo competition, serve up some GUI love and go over your feedback.

It’s Season Ten Episode Twenty-Eight of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Entroware Competition Winner!

Congratulations to Dave Hingley for creating The Ubuntu Guys comic which was scripted in 20 minutes.

All entries

In no particular order, here are all the entries:

Roger Light

Neil McPhail

Sorry Neil, it’s 2017 and we still can’t edit tweets!

Andy Partington

Joe Ressington

Paul Gault

Robert Rijkhoff

Gentleman Punter

Ivan Pejić

Mattias Wernér

Masoud Abkenar

Johan Seyfferdt

Ovidiu Serban

Ryan Warnock

Dave Hingley

Ian Phillips

Brain Walton

Martin Tallosy

Lucy Walton

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on September 14, 2017 03:30 PM

The news is out! KDE and Purism are working together on a Free software smartphone featuring Plasma Mobile. Purism is running a crowdfunding campaign right now, and if that succeeds, with the help of KDE, the plan is to deliver a smartphone based on Plasma Mobile in January 2019.

Why do I care?

Data collection and eavesdropping have become very common problems. Not only are governments (friendly and less friendly) spying on us and collecting information about our private lives; companies are doing so as well. There is a lot of data about the average user stored in databases around the world, data that not only allows others to impersonate you, but also to steal from you, to hold your data to ransom, and to make your life a living hell. There is hardly any effective control over how this data is secured, and the more data is out there, the more interesting a target it is for criminals. Do you trust random individuals with your most private information? You probably don’t, and this is why you should care.

Protect your data

Mockup of a Plasma Mobile based phone

The only way to regain control before bad things happen is to make sure as little data as possible gets collected. Yet most electronic products out there do the exact opposite. Worse, the smartphone market is a duopoly of two companies, neither of which has the protection of its users as a goal. It’s just different flavors of bad.

There’s a hidden price to the cheap services of the Googles and Facebooks of this world, and that is collection of data, which is then sold to third parties. Hardly any user is aware of the problems surrounding that.

KDE has set out to provide users an alternative. Plasma Mobile was created to give users a choice to regain control. We’re building an operating system, transparently, based on the values of Free software and we build it for users to take back control.

Purism and KDE

In the past week, we’ve worked with Purism, a Social Purpose Corporation devoted to bringing security, privacy, software freedom, and digital independence to everyone’s personal computing experience, to create a mobile phone that allows users to regain control.
Purism has started a crowdfunding campaign to collect the funds needed to make the dream of a security- and privacy-focused phone a reality.

Invest in your future

By supporting this campaign, you not only invest in your own future and become an early adopter of the first wave of privacy-protecting personal communication devices, but also help prove that there is a market for products that act in the best interest of their users.

Support the crowdfunding campaign, and help us protect you.

on September 14, 2017 02:02 PM

We’ll focus today on our advanced user base. We try, of course, to keep our default user experience as comprehensible as possible for the wider public, but we also want to think about our more technical users by fine-tuning the experience… and all of this, obviously, while changing our default session to use GNOME Shell. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 12: Alt-tab behavior for advanced users.

Some early feedback we got (probably from people used to Unity or other desktop environments) is that application-based switching via Alt-Tab isn’t completely natural to them. However, we still think that model fits better in general, and this was an experience shared by both GNOME Shell and Unity. People who disagree can still install one of the many GNOME Shell extensions for this.

Digging a little deeper, we see that the typical class of users complaining about that model is power users, who have multiple windows of the same application (typically terminals) and want to switch quickly between the last 2 windows of either:

  • the current application
  • the focused window of the last 2 applications

The first case is covered by [Alt] + [key above tab] (and that won’t change even for ex-Unity users)1. However, the second case isn’t.

That can lead to a frustrating experience if you have a window (typically a browser) sitting in the background for reading documentation, with a terminal on top. If you want to quickly switch back and forth to your terminal (which has multiple windows), you end up with:

Note that I started from one terminal window and a partially covered browser window, and ended up, after 2 quick alt-tabs, with two terminal windows covering the browser application.

We want to experiment again with quick alt-tab. Quick alt-tab is alt-tabbing before the switcher appears (the switcher doesn’t appear right away, to avoid flickering). In that case, we can switch between the last focused windows of the last 2 applications. In the previous example, it would put us back in the initial state. However, if you wait long enough for the switcher to be displayed, you are in “application mode”, where you switch between all applications, and the default (if you don’t go into the thumbnail preview) is to raise all windows of that application. This forced us to slightly increase the delay before the switcher appears.

That was the default in our previous user experience, and we didn’t receive bug reports about that behavior, which suggests it fits both power users and more traditional users (who mostly have one window per application, and so either aren’t impacted or don’t use quick alt-tab but the application switcher itself, whose behavior doesn’t change). We proposed a patch and opened a discussion upstream to see how we can converge on this idea, which might evolve and be refined in a later release to apply only to terminals, which is where the discussion seems to be heading. For now, as usual, this change is limited to the ubuntu session and doesn’t impact the vanilla GNOME one.

Here is the result:

Another slight inconsistency was [Alt] + [key above tab] in the switcher itself. Some GNOME Shell releases ago, going into the window preview mode in the switcher let you select a particular window instance, but still raised all other windows of the selected application; the selected window was simply the topmost one. Later, this behavior was changed to only raise the selected window.

While having [Alt] + [Tab] followed by [Alt] + [key above tab] directly select the second window of the current app made sense in the first approach (as, after all, selecting the first window had the same effect as selecting the whole app), now that only the selected window is raised, it makes sense to select the first window of the current application on the initial key press. Furthermore, that’s already the behavior of pressing the [down] arrow key in the alt-tab switcher. For Ubuntu Artful, this behavior is likewise only available in the ubuntu session; however, upstream is considering the patch and you might see it in the next GNOME release.

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

We hope to find a way to satisfy both advanced and casual users, striking the right balance between the two use cases. Focusing on a wider audience doesn’t necessarily mean we can’t make the logical flow compatible with other types of users.


  1. Note that I didn’t write [Alt] + [~] because any sane keyboard layout would have a ² key above tab. :)
on September 14, 2017 10:25 AM

September 13, 2017

September 6, Wednesday

At last! A new European Ubucon is just around the corner. After the first one in Germany, it’s Paris’s turn. I arrived a couple of days early, since flights were cheaper then, with these dates being so saturated with tourists.

Marius Quabeck, Miguel Menéndez and a couple of Scandinavian guys (who had built a bowling alley with Arduinos) got together spontaneously for dinner in front of the museum where the event would be held.

so tiny!

After pizza, beer and plenty of conversation, everyone headed off to their own roost, already eager for the event to begin.


September 7, Thursday

First thing in the morning, Miguel and I went over to the Cité des Sciences et de l’Industrie, the museum where the Ubucon takes place, to lend the organisers a hand and check that everything was going to work as it should for our talks.
Many of the organisers were there (though not all of them, as it was a working day). I helped set up tables and unpack and assemble the computers. Almost without noticing, it was already lunchtime, although, with how early they eat... it was almost vermouth time :P

ubuntu everywhere :))

We all went to a pizzeria, where the great Diogo (from the Portuguese team), Alan Pope and Martin Wimpress joined us (the latter two well known for their podcast and for Ubuntu MATE). The meal was pleasant and even ran quite long; once the after-lunch conversation wound down, we went back to the museum to finish the preparations for the next day.

In the afternoon, the French team organised a boat trip along the river to the Eiffel Tower. Since I already know the French capital from previous visits, I stayed at the event until it was time for drinks, which were at a bar far from the venue, but a very peculiar one. The bar sat on the banks of the Seine, a very old place made up of stone vaults. It was so big that there was even a concert in one corner without disturbing the rest of the bar, as well as spontaneous theatre performances. Very original and unique.

photo

For dinner, we went looking for a fast food restaurant, but ended up at a pizzeria on the Champs-Élysées, very expensive, but that’s what you get in that area. I loved the dinner, as I sat across from Alan and Martin and we talked a lot about the distribution.

As midnight approached, and fearing the metro would close, Miguel and I headed back to the hotel.


Ubucon Day 1, September 8, Friday

After sleeping like a log, I got to the event nice and early. The Cité des Sciences et de l’Industrie museum is impressive in size from the outside. After passing the security check, you go down to level -1 and, opposite the library where the FabLab is located, you enter the Ubucon area.

museum

Right as you enter, on the left there is a children’s play area, ideal for parents who want to attend, so they can leave their kids there while they follow the talks.
On the right there is a counter to welcome visitors, with lots of merchandise for sale; it has to be said that this money goes directly to the organisation to help with future events.
A few metres further in, on the right, there are computers running Ubuntu so that any visitor can try it on the spot and, facing those computers, various organisations. This year: UBports, Open Food Facts, Mozilla and Slimbook.

UBports
Slimbook
Mozilla
Children’s area

Past this area is the first conference room, the smallest one, and facing it a workshop room with some 20 computers. Crossing that workshop room, you reach the install party area, through which dozens of people pass with their laptops to install Ubuntu.
At the far end of the venue is the last and largest of the rooms.

Olive and Rudy opened the Ubucon, explaining why they organise it, and gave way to the first of the talks: Alan Pope’s on snapcraft.

Rudy & Olive
micro tutorial
A huge audience for Alan’s talk

The rest of the day ran with simultaneous talks, which is both good and bad. Good, because if one doesn’t appeal to you, you go to another; and bad, because if you like both, you miss one ;)
From this day I would highlight Alan Pope’s talks, the presentation of the UBports project and Miguel’s Ubuntu Touch programming course.

UBports talk
Michal presenting the Arduino bowling alley

I also did my bit with the talk "How to make your free software project succeed".
The French team prepared every last detail conscientiously, and they cooked the food for organisers and speakers themselves, so you didn’t have to go out to eat and could stay with everyone.

In the evening, the social event was at a bar I really enjoyed. The rest of the Portuguese delegation joined Tiago and Lucía, and since there was nothing to eat at the bar, we slipped out for dinner at a nearby Thai restaurant.
Most of the Spaniards arrived that day, and after dinner drowsiness and tiredness caught up with them, so they left for their respective (and distant) hotels.

I decided to stay, and I enjoyed myself like a kid: the bar was a kind of cyber café with loads of consoles, especially for two-player games. It was great fun playing Mario Kart and the PS4 version of Street Fighter against Alan, Rudy, Martin, Lucía and Michal (how far this game has come, since I had only ever played version 2 in my home town’s arcades!).

Martin vs Alan. Round 1, fight! :P
At Mario Kart: Lucia 1 - Costales 0
Afraid of missing the last metro, and already a bit tired from the long day, I left the bar together with Olive.


Ubucon Day 2, September 9, Saturday

The day was opened by Slimbook, presenting their laptops with Linux preinstalled, and closed by Martin Wimpress, presenting the imminent Ubuntu MATE 17.10; by the way, I know from a good source that he won over many attendees who had been reluctant about MATE.

Slimbook’s talk

Discovering Ubuntu MATE 17.10


On a personal note, Paco Molinero and I recorded a new episode of the Ubuntu y otras hierbas podcast. A very special episode: with such influential and important people of the community at hand, we could hardly do less than interview Alan Pope, Martin Wimpress, Rudy and Miguel. We’ll publish it through the usual channels in a few days ;)

Costales | Paco Molinero | Alan Pope

I was also surprised by the large attendance at my talk "Privacy on the Net". Even Ubuntu itself tweeted about it \o/

My talk

After the talks, the social event was at the same vaulted bar by the river. There, Rudy and Olive set me up and brought me a bagpipe, a Scottish one to top it off. So, after years without playing my Asturian bagpipes, I was lucky enough to get a few notes out of it :P

let's play!
It continued with a concert that gave way to dinner and after-dinner conversation, which I enjoyed a lot, with Paul (from UBports), Santiago (a new friend who came along thanks to hearing the event announced on our podcast :O), the Slimbook guys and Miguel.
The French, great hosts that they are, fearlessly came over to the table full of Spaniards to socialise and give meaning to that word, ‘community’ :)) Bravo to them!


Ubucon Day 3, September 10, Sunday

Last day of the 2nd European Ubucon.
I’d highlight the workshop on how to create a snap application, given by Alan and Martin. Rudy and Vincent gave talks about the Ubuntu community, especially the French one, reviewing events of theirs that draw thousands of people, such as the webCafe and the Ubuntu Party. Philip Clay also contributed at his second European Ubucon, explaining how to customise GNOME in the upcoming Ubuntu release.

Snap workshop
Rudy giving his talk

I loved a podcasters’ session moderated by Alan, with Rudy, Martin, Tiago, Max and Marius Quabeck, which I can’t wait to listen to again as soon as it’s available online.

Joint podcast

Before the talks wrapped up, Dustin Kirkland gave a keynote on what to expect in the future 18.04 LTS release, on the desktop, on the server and in IoT alike. Very interesting; it seems Ubuntu has taken listening more to the community very seriously, and the keynote covered all the areas where Ubuntu is relevant.

What we’ll see in the future LTS

Lots of feedback from the community

Olive and Rudy closed the event, thanking all the participants for coming.
But the event ending doesn’t mean we all headed back to the hotel ;) We closed the night with dinner at a nearby pizzeria, and the after-dinner conversation was one of a kind, with Alan teasing left and right. Mind you, he got teased too; for example, Philip asked him: "Is the snapd package available as a flatpak?" xD

dinner

A cube, German engineering :P

The end

The next day, Miguel and I went to visit the Eiffel Tower. Afterwards, Miguel left for the airport and I wandered the Parisian streets, the streets of possibly the most bohemian city in the world. A capital that hosted the most important Ubuntu event in Europe and much of the world. Thousands of attendees passed through it, a great many computers were freed, knowledge was shared across dozens of talks... but if I have to pick one thing, I pick, without a doubt, the social events, all the new people I met or met again, and the good moments I shared with them.

Paris


To finish, I want to congratulate the whole French team. They did a perfect, impressive job, with all the passion in the world. Thank you and see you next time, friends!

LOL :))) Go Rudy, you legend!

on September 13, 2017 05:58 PM

Canonical and Microsoft have teamed up to deliver a truly special experience -- running Ubuntu containers with Hyper-V Isolation on Windows 10 and Windows Servers!

We have published a fantastic tutorial at https://ubu.one/UhyperV, with screenshots and easy-to-follow instructions.  You should be up and running in minutes!

Follow that tutorial, and you'll be able to launch Ubuntu containers with Hyper-V isolation by running the following directly from a Windows PowerShell:
  • docker run -it ubuntu bash
Cheers!
Dustin
on September 13, 2017 04:00 PM

September 12, 2017

The OpenStack Charms team is pleased to announce that the 17.08 release of the OpenStack Charms is now available from jujucharms.com!

In addition to 204 bug fixes across the charms and support for OpenStack Pike, this release includes a new charm for Gnocchi, support for Neutron internal DNS, Percona Cluster performance tuning and much more.

For full details of all the new goodness in this release please refer to the release notes.

Thanks go to the following people who contributed to this release:

Nobuto Murata
Mario Splivalo
Ante Karamatić
zhangbailin
Shane Peters
Billy Olsen
Tytus Kurek
Frode Nordahl
Felipe Reyes
David Ames
Jorge Niedbalski
Daniel Axtens
Edward Hope-Morley
Chris MacNaughton
Xav Paice
James Page
Jason Hobbs
Alex Kavanagh
Corey Bryant
Ryan Beisner
Graham Burgess
Andrew McLeod
Aymen Frikha
Hua Zhang
Alvaro Uría
Peter Sabaini

EOM

on September 12, 2017 09:59 PM

KGraphViewer 2.4.0

Jonathan Riddell

KGraphViewer 2.4.0 has been released.

KGraphViewer is a visualiser for Graphviz’s DOT format of graphs.
https://www.kde.org/applications/graphics/kgraphviewer

This ports KGraphViewer to use KDE Frameworks 5 and Qt 5.

It can be used by massif-visualizer to add graphing features.

Download from:
https://download.kde.org/stable/kgraphviewer/2.4.0/

sha256:
88c2fd6514e49404cfd76cdac8ae910511979768477f77095d2f53dca0f231b4 kgraphviewer-2.4.0.tar.xz

Signed with my PGP key
2D1D 5B05 8835 7787 DE9E E225 EC94 D18F 7F05 997E
Jonathan Riddell <jr@jriddell.org>
kgraphviewer-2.4.0.tar.xz.sig

on September 12, 2017 03:13 PM

September 11, 2017

Cloud-init is the subject of the most recent episode of Podcast.__init__.
Go and have a listen to Episode 126.

I really enjoyed talking to Tobias about cloud-init and some of the difficulties of the project and goals for the future.

Enjoy!
on September 11, 2017 01:42 PM

September 10, 2017

The Niamh prime

Stuart Langridge

A bit of maths-y fiddling around on a Sunday afternoon.

Fascinating video on the Trinity Hall prime at Numberphile:

Apparently, Professor James McKee found a prime number which, when written out as ASCII art, looks like the crest of Trinity Hall college. Jack Hodkinson at Cambridge then searched for and found a prime which looks like a picture of Corpus Christi college (via Futility Closet). That seems like a cool idea. So, with a bit of help from aalib in JavaScript and the Miller-Rabin primality test, plus a bit of scaling images up and down in Gimp, I found this 2,850-digit prime:

777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,577,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­777,777,777,­752,385,356,­867,777,777,­777,777,777,­777,777,777,­777,777,777,­775,352,235,­666,688,668,­667,777,777,­777,777,777,­777,777,777,­777,776,765,­555,556,666,­856,868,667,­777,777,777,­777,777,777,­777,777,777,­222,335,666,­666,866,686,­666,665,777,­777,777,777,­777,777,777,­777,355,336,­358,866,556,­666,655,665,­655,777,777,­777,777,777,­777,777,733,­552,236,666,­666,655,665,­665,666,555,­777,777,777,­777,777,777,­777,276,265,­666,666,656,­655,555,555,­555,533,777,­777,777,777,­777,777,253,­252,566,666,­665,555,556,­555,555,565,­557,777,777,­777,777,777,­725,222,236,­666,565,555,­555,556,555,­535,355,237,­777,777,777,­777,772,266,­725,366,535,­555,355,555,­555,553,553,­533,577,777,­777,777,777,­272,637,356,­655,555,555,­353,535,556,­655,355,332,­277,777,777,­777,772,235,­775,355,665,­553,355,353,­535,533,555,­555,223,777,­777,777,777,­322,222,255,­555,556,353,­555,355,336,­355,653,533,­537,777,777,­777,772,222,­225,555,565,­555,355,335,­355,356,555,­655,353,237,­777,777,777,­772,272,355,­553,553,355,­535,353,365,­355,355,553,­523,577,777,­777,777,332,­333,566,333,­553,555,533,­355,355,555,­555,353,555,­677,777,777,­777,332,355,­555,555,555,­555,555,553,­555,535,555,­666,556,777,­777,777,775,­533,355,535,­355,553,565,­535,353,655,­655,555,565,­557,777,777,­777,756,533,­555,533,335,­353,566,655,­353,566,535,­656,655,577,­777,777,777,­565,353,535,­535,553,335,­566,666,555,­666,568,566,­665,777,777,­777,775,633,­535,555,555,­555,535,565,­556,555,666,­566,666,677,­777,777,777,­758,333,333,­355,555,656,­556,565,866,­666,866,658,­667,777,777,­777,777,582,­233,333,333,­355,565,555,­566,666,666,­868,666,677,­777,777,777,­775,822,333,­333,333,535,­555,556,666,­666,668,688,­586,777,777,­777,777,736,­355,333,333,­355,556,555,­636,666,686,­688,888,557,­777,777,777,­777,565,555,­555,333,555,­566,666,565,­686,666,886,­886,777,777,­777,777,776,­656,888,853,­335,556,686,­556,666,666,­868,886,887,­777,777,777,­777,775,356,­368,532,355,­555,688,666,­866,666,668,­668,877,777,­777,777,777,­732,233,553,­223,323,335,­533,556,666,­686,686,686,­777,777,777,­777,777,323,­333,332,222,­233,332,233,­568,665,666,­656,337,777,­777,777,777,­773,233,333,­322,222,223,­222,223,566,­556,655,533,­777,777,777,­777,777,722,­333,232,222,­222,233,222,­225,665,555,­632,277,777,­777,777,777,­777,323,333,­322,222,223,­222,222,236,­553,553,222,­777,777,777,­777,777,777,­233,232,222,­223,232,222,­223,355,533,­552,277,777,­777,777,777,­777,772,333,­333,222,233,­322,222,223,­353,233,355,­777,777,777,­777,777,777,­723,333,355,­665,533,233,­333,333,322,­335,677,777,­777,777,777,­777,777,735,­533,333,222,­233,333,333,­332,223,356,­777,777,777,­777,777,777,­777,555,333,­232,222,333,­333,333,222,­233,677,777,­777,777,777,­777,777,772,­533,332,222,­222,333,333,­322,222,255,­777,777,777,­777,777,777,­777,775,356,­566,665,553,­333,333,222,­223,557,777,­777,777,777,­777,777,777,­733,335,333,­332,223,333,­332,222,233,­577,777,777,­777,777,777,­777,777,233,­335,332,222,­223,332,322,­222,233,777,­777,777,777,­777,777,777,­777,333,232,­222,222,223,­332,222,222,­237,777,777,­777,777,777,­777,777,773,­222,222,222,­222,223,222,­322,222,377,­777,777,777,­777,777,777,­777,777,222,­222,222,222,­323,332,222,­223,777,777,­777,777,777,­777,777,777,­773,232,222,­222,333,353,­222,222,227,­777,777,777,­777,777,777,­777,77
7,733,­332,222,223,­355,322,222,­222,277,777,­777,777,777,­777,777,777,­777,755,333,­333,555,533,­222,222,222,­277,777,777,­777,777,777,­777,777,777,­777,775,555,­555,322,222,­222,222,777,­777,777,777,­777,777,777,­777,777,777,­755,555,533,­222,222,222,­227,777,777,­777,777,777,­777,777,777,­777,777,755,­553,332,222,­222,222,227,­777,777,777,­777,777,777,­777,777,777,­777,555,533,­322,222,222,­222,227,777,­777,777,777,­777,777,777,­777,777,773,­553,332,222,­222,222,222,­277,777,777,­777,777,777,­777,777,777,­777,735,533,­322,222,222,­222,222,277,­779,769

although it looks rather better when properly formatted.

77777777777777777777777777777777777777777777777777
77777777777777777777777777777777777777777777777777
77777777777777777777777577777777777777777777777777
77777777777777777777775238535686777777777777777777
77777777777777777753522356666886686677777777777777
77777777777777776765555556666856868667777777777777
77777777777777722233566666686668666666577777777777
77777777777773553363588665566666556656557777777777
77777777777733552236666666655665665666555777777777
77777777777727626566666665665555555555553377777777
77777777772532525666666655555565555555655577777777
77777777725222236666565555555556555535355237777777
77777777226672536653555535555555555355353357777777
77777772726373566555555553535355566553553322777777
77777772235775355665553355353535533555555223777777
77777732222225555555635355535533635565353353777777
77777722222255555655553553353553565556553532377777
77777772272355553553355535353365355355553523577777
77777733233356633355355553335535555555535355567777
77777773323555555555555555555535555355556665567777
77777775533355535355553565535353655655555565557777
77777775653355553333535356665535356653565665557777
77777775653535355355533355666665556665685666657777
77777775633535555555555535565556555666566666677777
77777775833333335555565655656586666686665866777777
77777775822333333333555655555666666668686666777777
77777775822333333333535555556666666668688586777777
77777773635533333335555655563666668668888855777777
77777775655555553335555666665656866668868867777777
77777776656888853335556686556666666868886887777777
77777777535636853235555568866686666666866887777777
77777777322335532233233355335566666866866867777777
77777777323333332222233332233568665666656337777777
77777777323333332222222322222356655665553377777777
77777777223332322222222332222256655556322777777777
77777777323333322222223222222236553553222777777777
77777777723323222222323222222335553355227777777777
77777777723333332222333222222233532333557777777777
77777777723333355665533233333333322335677777777777
77777777773553333322223333333333222335677777777777
77777777775553332322223333333332222336777777777777
77777777772533332222222333333322222255777777777777
77777777777535656666555333333322222355777777777777
77777777777333353333322233333322222335777777777777
77777777777233335332222223332322222233777777777777
77777777777733323222222222333222222223777777777777
77777777777732222222222222232223222223777777777777
77777777777777222222222222323332222223777777777777
77777777777777323222222233335322222222777777777777
77777777777777333322222233553222222222777777777777
77777777777777755333333555533222222222277777777777
77777777777777777777555555532222222222277777777777
77777777777777777777555555332222222222277777777777
77777777777777777777755553332222222222227777777777
77777777777777777777755553332222222222222777777777
77777777777777777777735533322222222222222777777777
77777777777777777777735533322222222222222277779769

I think I’ll call it the Niamh Prime.
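
In case you want to hunt for picture-primes yourself: here is an illustrative (not Stuart’s original) Rust sketch of the Miller-Rabin test, deterministic for u64 values. Checking a 2,850-digit candidate needs arbitrary-precision arithmetic (e.g. the num-bigint crate), but the algorithm is the same.

fn mul_mod(a: u64, b: u64, m: u64) -> u64 {
    (((a as u128) * (b as u128)) % (m as u128)) as u64
}

fn pow_mod(mut base: u64, mut exp: u64, m: u64) -> u64 {
    let mut result = 1;
    base %= m;
    while exp > 0 {
        if exp & 1 == 1 {
            result = mul_mod(result, base, m);
        }
        base = mul_mod(base, base, m);
        exp >>= 1;
    }
    result
}

// These witnesses are known to make Miller-Rabin deterministic for all u64 inputs
fn is_prime(n: u64) -> bool {
    const WITNESSES: [u64; 12] = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37];
    if n < 2 {
        return false;
    }
    for &p in WITNESSES.iter() {
        if n % p == 0 {
            return n == p;
        }
    }
    // Write n - 1 as d * 2^s with d odd
    let (mut d, mut s) = (n - 1, 0);
    while d & 1 == 0 {
        d >>= 1;
        s += 1;
    }
    'witness: for &a in WITNESSES.iter() {
        let mut x = pow_mod(a, d, n);
        if x == 1 || x == n - 1 {
            continue;
        }
        for _ in 0..s - 1 {
            x = mul_mod(x, x, n);
            if x == n - 1 {
                continue 'witness;
            }
        }
        return false;
    }
    true
}

fn main() {
    println!("{}", is_prime(1_000_000_007)); // true: a well-known prime
}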

on September 10, 2017 07:32 PM

September 08, 2017

I’ve been thinking about the usability of command-line terminals a lot recently.

Command-line interfaces remain mystifying to many people. Usability hobbyists seem as inclined to ask why the terminal exists, as how to optimise it. I’ve also had it suggested to me that the discipline of User Experience (UX) has little to offer the Command-Line Interface (CLI), because the habits of terminal users are too inherent or instinctive to be defined and optimised by usability experts.

As an experienced terminal user with a keen interest in usability, I disagree that usability has little to offer the CLI experience. I believe that the experience can be improved through the application of usability principles just as much as for more graphical domains.

Steps to learn a new CLI tool

To help demystify the command-line experience, I’m going to lay out some of the patterns of thinking and behaviour that define my use of the CLI.

New CLI tools I’ve learned recently include snap, kubectl and nghttp2, and I’ve also dabbled in writing command-line tools myself.

Below I’ll map out an example of the steps I might go through when discovering a new command-line tool, as a basis for exploring how these tools could be optimised for CLI users.

  1. Install the tool
    • First, I might try apt install {tool} (or brew install {tool} on a Mac)
    • If that fails, I’ll probably search the internet for “Install {tool}” and hope to find the official documentation
  2. Check it is installed, and if tab-complete works
    • Type the first few characters of the command name (sna for snap) followed by <tab> <tab>, to see if the command name auto-completes, signifying that the system is aware of its existence
    • Hit space, and then <tab> <tab> again, to see if it shows me a list of available sub-commands, indicating that tab completion is set up correctly for the tool
  3. Try my first command
    • I’m probably following some documentation at this point, which will be telling me the first command to run (e.g. snap install {something}), so I’ll try that out and expect prompt, succinct feedback to show me that it’s working
    • For basic tools, this may complete my initial interaction with the tool. For more complex tools like kubectl or git I may continue playing with it
  4. Try to do something more complex
    • Now I’m likely no longer following a tutorial, instead I’m experimenting on my own, trying to discover more about the tool
    • If what I want to do seems complex, I’ll straight away search the internet for how to do it
    • If it seems more simple, I’ll start looking for a list of subcommands to achieve my goal
    • I start with {tool} <tab> <tab> to see if it gives me a list of subcommands, in case it will be obvious what to do next from that list
    • If that fails I’ll try, in order, {tool} <enter>, {tool} -h, {tool} --help, {tool} help or {tool} /?
    • If none of those work then I’ll try man {tool}, looking for a Unix manual entry
    • If that fails then I’ll fall back to searching the internet again

UX recommendations

Considering my own experience of CLI tools, I am reasonably confident the following recommendations make good general practice guidelines:

  • Always implement a --help option on the main command and all subcommands, and if appropriate print out some help when no options are provided ({tool} <enter>)
  • Provide both short- (e.g. -h) and long- (e.g. --help) form options, and make them guessable
  • Carefully consider the naming of all subcommands and options, use familiar words where possible (e.g. help, clean, create)
  • Be consistent with your naming – have a clear philosophy behind your use of subcommands vs options, verbs vs nouns etc.
  • Provide helpful, readable output at all times – especially when there’s an error (npm I’m looking at you)
  • Use long-form options in documentation, to make commands more self-explanatory
  • Make the tool easy to install with common software management systems (snap, apt, Homebrew, or sometimes NPM or pip)
  • Provide tab-completion. If it can’t be installed with the tool, make it easy to install and document how to set it up in your installation guide
  • Command outputs should use the appropriate output streams (STDOUT and STDERR) and should be as user-friendly and succinct as possible, and ideally make use of terminal colours

Some of these recommendations are easier to implement than others. Ideally every tool should have its subcommands and options considered carefully, and implement --help. Writing auto-complete scripts, though, is a significant undertaking.
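
As an aside (and not from the original post): argument-parsing libraries take care of several of these recommendations for you. Here is a minimal sketch of a hypothetical tool using Rust’s clap crate, which generates -h/--help and usage text for the tool and each subcommand automatically; "mytool" and its "create" subcommand are made-up names for illustration.

extern crate clap;

use clap::{App, Arg, SubCommand};

fn main() {
    // clap provides -h/--help for the tool and every subcommand out of the box
    let matches = App::new("mytool")
        .version("0.1.0")
        .about("An example command-line tool")
        .subcommand(
            SubCommand::with_name("create")
                .about("Create something")
                .arg(Arg::with_name("name").help("Name to create").required(true)),
        )
        .get_matches();

    if let Some(sub) = matches.subcommand_matches("create") {
        // unwrap() is safe here because the argument is declared required
        println!("Creating {}", sub.value_of("name").unwrap());
    }
}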

Similarly, packaging your tool as a snap is significantly easier than, for example, adding software to the official Ubuntu software sources.

Although I believe all of the above to be good general advice, I would very much welcome research to highlight the relative importance of addressing each concern.

Outstanding questions

There are a number of further questions for which the answers don’t seem obvious to me, but I’d love to somehow find out the answers:

  • Once users have learned the short-form options (e.g. -h) do they ever use the long-form (e.g. --help)?
  • Do users prefer subcommands (mytool create {something}) or options (mytool --create {something})?
  • For multi-level commands, do users prefer {tool} {object} {verb} (e.g. git remote add {remote_name}), or {tool} {verb} {object} (e.g. kubectl get pod {pod_name}), or perhaps {tool} {verb}-{object} (e.g. juju remove-application {app_name})?
  • What patterns exist for formatting command output? What’s the optimal length for users to read, and what types of formatting do users find easiest to understand?

If you know of either authoritative recommendations or existing research on these topics, please let me know in the comments below.

I’ll try to write a more in-depth follow-up to this post when I’ve explored a bit further on some of these topics.

on September 08, 2017 10:43 AM

September 07, 2017

Massif Visualizer is a visualiser for output generated by Valgrind’s massif tool.  It shows you graphs which measure how much heap memory your program uses.

Download link: https://download.kde.org/stable/massif-visualizer/0.7.0/src/

sha256:
f8a4cc23c80a259a9edac989e957c48ed308cf9da9caeef19eec3ffb52361f6d  massif-visualizer-0.7.0.tar.xz

PGP signature is mine:
Jonathan Riddell with 0xEC94D18F7F05997E.

It has an optional dependency on KGraphViewer, which is due for a release shortly.

on September 07, 2017 02:31 PM

17.10 Beta 1 Release

Ubuntu Studio

Ubuntu Studio 17.10 Artful Aardvark Beta 1 is released! It’s that time of the release cycle again. The first beta of the upcoming release of Ubuntu Studio 17.10 is here and ready for testing. You may find the images at cdimage.ubuntu.com/ubuntustudio/releases/artful/beta-1/. More information can be found in the Beta 1 Release Notes. Reporting Bugs If […]
on September 07, 2017 12:32 PM

September 06, 2017

Webteam development summary

Canonical Design Team

Iteration 6

covering the 14th to the 25th of August

This iteration saw a lot of work on tutorials.ubuntu.com and on the migration of design.ubuntu.com from WordPress to a fresh new Jekyll site project. Research and planning into the new snapcraft.io site continued, along with the beginnings of its development framework.

Vanilla Framework put a lot of emphasis on polishing the existing components and porting the old theme concept patterns into the code base.

Websites issues: 66 closed, 33 opened (551 in total)

Some highlights include:
– Fixing content of card touching card edge in tutorials – https://github.com/canonical-websites/tutorials.ubuntu.com/issues/312
– Migrate canonical.com to Vanilla: Polish and custom patterns – https://github.com/canonical-websites/www.canonical.com/issues/172
– Prepare for deploy of design.ubuntu.com – https://github.com/canonical-websites/design.ubuntu.com/issues/54
– Redirects from https://www.ubuntu.com/usn/ to https://usn.ubuntu.com/usn were broken – https://github.com/canonical-websites/www.ubuntu.com/issues/2128
– design.ubuntu.com/web-style-guide: build page and then hide pages – https://github.com/canonical-websites/design.ubuntu.com/issues/66
– Snapcraft prototype: Snap page – https://github.com/canonical-websites/snapcraft.io/issues/346
– Create Flask skeleton application – https://github.com/canonical-websites/snapcraft-flask/issues/2

Vanilla Framework issues: 24 closed, 16 opened (43 in total)

Some highlights include:
– Combine the entire suite of brochure theme patterns to Vanilla’s code base – https://github.com/vanilla-framework/vanilla-framework/issues/1177
– Many improvements to the documentation theme – https://github.com/vanilla-framework/vanilla-docs-theme/issues/45
– External link icon seems stretched – https://github.com/vanilla-framework/vanilla-framework/issues/1058
– .p-heading--icon pattern: remove text color – https://github.com/vanilla-framework/vanilla-framework/issues/1272
– Remove margin rules on card content – https://github.com/vanilla-framework/vanilla-framework/issues/1277

All of these projects are open source. So please file issues if you find any bugs or, even better, propose a pull request. See you in two weeks for the next update from the web team here at Canonical.

on September 06, 2017 08:15 PM

Over the last few days I was experimenting a bit with implementing a GObject C API in Rust. The results can be found in this repository, and this post is something like an overview of the work, a code walkthrough and a status report. Note that it is quite long; a little bit further down you can find a table of contents, so you can jump to the area you’re interested in, or read it chapter by chapter.

GObject is a C library that allows writing object-oriented, cross-platform APIs in C (which has no built-in support for that), and provides a very expressive runtime type system with many features known from languages like Java, C# or C++. It is also used by various C libraries, most notably the cross-platform GTK UI toolkit and the GStreamer multimedia framework. GObject also comes with strong conventions about how an API is supposed to look and behave, which makes it relatively easy to learn new GObject-based APIs, compared to generic C libraries that could do anything unexpected.

I’m not going to give a full overview about how GObject works internally and how it is used. If you’re not familiar with that it might be useful to first read the documentation and especially the tutorial. Also some C & Rust (especially unsafe Rust and FFI) knowledge would be good to have for what follows.

If you look at the code, you will notice that there is a lot of unsafe code, boilerplate and glue code, and especially code duplication. I don’t expect anyone to write all this code manually; the final goal of all this is to have Rust macros that make life easier. Simple Rust macros that make it as easy as in C are almost trivial to write, but what we really want here is to be able to write it all in safe Rust only, in code that looks a bit like C# or Java. There is a prototype for that already, written by Niko Matsakis, and a blog post with further details about it. The goal of the code here is to serve as a manual example that can be integrated one step at a time into the macro-based solution. Code written with that macro should in the end look similar to the following

gobject_gen! {
    class Counter {
        struct CounterPrivate {
            f: Cell<u32>
        }

        fn add(&self, x: u32) -> u32 {
            let private = self.private();
            let v = private.f.get() + x;
            private.f.set(v);
            v
        }

        fn get(&self) -> u32 {
            self.private().f.get()
        }
    }
}

and be usable like

let c = Counter::new();
c.add(2);
c.add(20);

The code in my repository is already integrated well into GTK-rs, and the macro-generated code should likewise integrate well into GTK-rs and work the same as other GTK-rs code from Rust. In addition, the generated code should of course make use of all the FFI type conversion infrastructure that already exists there and that Federico explained in his blog posts (part 1, part 2).
In the end, I would like to see such a macro solution integrated directly into the GLib bindings.

Table of Contents

  1. Why?
  2. Simple (boxed) types
  3. Object types
    1. Inheritance
    2. Virtual Methods
    3. Properties
    4. Signals
  4. Interfaces
  5. Usage from C
  6. Usage from Rust
  7. Usage from Python, JavaScript and Others
  8. What next?

Why?

Now one might ask: why? GObject is yet another C library, and Rust can export a plain C API without any other dependencies just fine. While that is true, C is not very expressive at all and there are no conventions about how C APIs should look and behave, so everybody does their own thing. With GObject you get all kinds of object-oriented programming features and strong conventions about API design. You actually get a couple of features (inheritance, properties/signals, a full runtime type system) that Rust does not have. And as bonus points, you get bindings for various other languages (Python, JavaScript, C++, C#, …) for free. More on the last point later.

Another reason why you might want to do this, is to be able to interact with existing C libraries that use GObject. For example if you want to create a subclass of some GTK widget to give it your own custom behaviour or modify its appearance, or even writing a completely new GTK widget that should be placed together with other widgets in your UI, or for implementing a new GStreamer element that implements some fancy filter or codec or … that you want to use.

Simple (boxed) types

Let’s start with the simple and boring case, which already introduces various GObject concepts. Let’s assume you already have some simple Rust type that you want to expose a C API for, and it should be GObject-style to get all the above advantages. For that, GObject has the concept of boxed types. These have to come with a “copy” and a “free” function, which can do an actual copy of the object or just implement reference counting, and GObject allows registering these together with a string name for the type, giving back a type ID (GType) that allows referencing this type.

Boxed types can then be used automatically, together with any C API they provide, from C and any other languages for which GObject support exists (i.e. basically all of them). This allows instances of these boxed types to be used in signals and properties (see further below), to be stored in a GValue (a container type that stores an instance of any other type together with its type ID), etc.

So how does all this work? In my repository I’m implementing a boxed type around an Option<String>, one time as a “copy” type RString, another time reference counted (SharedRString). Outside Rust, both are just passed as pointers and their implementation is private/opaque. As such, it is possible to use any kind of Rust struct or enum, and e.g. marking them as #[repr(C)] is not needed. It is also possible to use #[repr(C)] structs though, in which case the memory layout could be public and any struct fields could be available from C and other languages.

RString

The actual implementation of the type is in the imp.rs file, i.e. in the imp module. I’ll cover the other files in there at a later time, but mod.rs is providing a public Rust API around all this that integrates with GTK-rs.

The following is the whole implementation, in safe Rust:

#[derive(Clone)]
pub struct RString(Option<String>);

impl RString {
    fn new(s: Option<String>) -> RString {
        RString(s)
    }

    fn get(&self) -> Option<String> {
        self.0.clone()
    }

    fn set(&mut self, s: Option<String>) {
        self.0 = s;
    }
}

Type Registration

Once the macro-based solution is complete, this would be more or less all that is required to also make this available to C via GObject, and to any other languages. But we’re not there yet, and the goal here is to do it all manually. So first of all, we need to register this type with GObject, for which (by convention) a C function called ex_rstring_get_type() should be defined that registers the type on the first call to get the type ID, and on further calls just returns that type ID. If you’re wondering what ex is: this is the “namespace” (C has no built-in support for namespaces) of the whole library, short for “example”. The get_type() function looks like this:

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_get_type() -> glib_ffi::GType {
    callback_guard!();

    static mut TYPE: glib_ffi::GType = gobject_ffi::G_TYPE_INVALID;
    static ONCE: Once = ONCE_INIT;

    ONCE.call_once(|| {
        let type_name = CString::new("ExRString").unwrap();

        TYPE = gobject_ffi::g_boxed_type_register_static(
            type_name.as_ptr(),
            Some(mem::transmute(ex_rstring_copy as *const c_void)),
            Some(mem::transmute(ex_rstring_free as *const c_void)),
        );

    });

    TYPE
}

This is all unsafe Rust that calls directly into the GObject C library. We use std::sync::Once for the one-time registration of the type, and store the result in a static mut called TYPE (super unsafe, but OK here as we only ever write to it once). For registration we call g_boxed_type_register_static() from GObject (provided to Rust via the gobject-sys crate) and pass the name (via std::ffi::CString for C interoperability) and the copy and free functions. Unfortunately we have to cast them to a generic pointer, and then transmute them to a different function pointer type, as the argument and return value pointers that GObject wants there are plain void * pointers, while in our code we would at least like to use RString *. And that’s all there is to the registration. We mark the whole function as extern “C” to use the C calling convention, use #[no_mangle] so that the function is exported with exactly that symbol name (otherwise Rust would mangle the symbol name), and lastly we make sure that no panic unwinds from this Rust code back into the C code, via the callback_guard!() macro from the glib crate.

Memory Management Functions

Now let’s take a look at the actual copy and free functions, and the actual constructor function called ex_rstring_new():

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_new(s: *const c_char) -> *mut RString {
    callback_guard!();

    let s = Box::new(RString::new(from_glib_none(s)));
    Box::into_raw(s)
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_copy(rstring: *const RString) -> *mut RString {
    callback_guard!();

    let rstring = &*rstring;
    let s = Box::new(rstring.clone());
    Box::into_raw(s)
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_free(rstring: *mut RString) {
    callback_guard!();

    let _ = Box::from_raw(rstring);
}

These are also unsafe Rust functions that work with raw pointers and C types, but fortunately not too much is happening here.

In the constructor function we get a C string (char *) passed as argument, convert it to a Rust string (actually an Option<String>, as this can be NULL) via from_glib_none() from the glib crate, and then pass that to the Rust constructor of our type. from_glib_none() means that we don’t take ownership of the C string passed to us; the other variant would be from_glib_full(), in which case we would take ownership. We then pack up the result in a Rust Box to place the new RString in heap-allocated memory (otherwise it would be stack-allocated), and use Box’s into_raw() function to get a raw pointer to the memory without having its Drop implementation called anymore. This is then returned to the caller.

Similarly, in the copy and free functions we just do some juggling with Boxes: copy takes a raw pointer to our RString, calls the compiler-generated clone() function to copy it all, and then packs it up in a new Box to return to the caller. The free function converts the raw pointer back into a Box, and then lets the Drop implementation of Box take care of freeing all memory related to it.

Actual Functionality

The two remaining functions are C wrappers for the get() and set() Rust functions:

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_get(rstring: *const RString) -> *mut c_char {
    callback_guard!();

    let rstring = &*rstring;
    rstring.get().to_glib_full()
}

#[no_mangle]
pub unsafe extern "C" fn ex_rstring_set(rstring: *mut RString, s: *const c_char) {
    callback_guard!();

    let rstring = &mut *rstring;
    rstring.set(from_glib_none(s));
}

These only call the corresponding Rust functions. The set() function again uses glib’s from_glib_none() to convert from a C string to a Rust string. The get() function uses ToGlibPtrFull::to_glib_full() from GLib to convert from a Rust string (an Option<String> to be accurate) to a C string, passing ownership of the C string to the caller (which then also has to free it at a later time).

This was all quite verbose, which is why a macro-based solution for all of this would be very helpful.
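
As a quick illustration of the earlier point about GValue, here is a hedged sketch (not from the repository) of how the registered boxed type could be stored in a GValue from unsafe Rust; the same works from C via g_value_set_boxed():

unsafe {
    // A GValue must start out zero-initialized, then be initialized with a type ID
    let mut value: gobject_ffi::GValue = mem::zeroed();
    gobject_ffi::g_value_init(&mut value, ex_rstring_get_type());

    // g_value_set_boxed() stores a copy, made via our ex_rstring_copy()
    let s = CString::new("hello").unwrap();
    let rstring = ex_rstring_new(s.as_ptr());
    gobject_ffi::g_value_set_boxed(&mut value, rstring as *const c_void);

    // The GValue owns its own copy now, so we free ours; unsetting the value
    // frees the stored copy via ex_rstring_free()
    ex_rstring_free(rstring);
    gobject_ffi::g_value_unset(&mut value);
}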

Corresponding C Header

Now if this API would be used from C, the header file to do so would look something like this. Probably no surprises here.

#define EX_TYPE_RSTRING            (ex_rstring_get_type())

typedef struct _ExRString          ExRString;

GType       ex_rstring_get_type    (void);

ExRString * ex_rstring_new         (const gchar * s);
ExRString * ex_rstring_copy        (const ExRString * rstring);
void        ex_rstring_free        (ExRString * rstring);
gchar *     ex_rstring_get         (const ExRString * rstring);
void        ex_rstring_set         (ExRString *rstring, const gchar *s);

Ideally this would also be autogenerated from the Rust code in one way or another, maybe via rusty-cheddar or rusty-binder.

SharedRString

The shared, reference-counted RString works basically the same. The only differences are in how the pointers between C and Rust are converted. For this, let’s take a look at the constructor, copy (aka ref) and free (aka unref) functions again:

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_new(s: *const c_char) -> *mut SharedRString {
    callback_guard!();

    let s = SharedRString::new(from_glib_none(s));
    Arc::into_raw(s) as *mut _
}

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_ref(
    shared_rstring: *mut SharedRString,
) -> *mut SharedRString {
    callback_guard!();

    let shared_rstring = Arc::from_raw(shared_rstring);
    let s = shared_rstring.clone();

    // Forget it and keep it alive, we will still need it later
    let _ = Arc::into_raw(shared_rstring);

    Arc::into_raw(s) as *mut _
}

#[no_mangle]
pub unsafe extern "C" fn ex_shared_rstring_unref(shared_rstring: *mut SharedRString) {
    callback_guard!();

    let _ = Arc::from_raw(shared_rstring);
}

The only difference here is that instead of using a Box, std::sync::Arc is used, plus some differences in the copy (aka ref) function. Previously with the Box, we just created an immutable reference from the raw pointer and cloned it, but with the Arc we want to clone the Arc itself (i.e. have the same underlying object but increase the reference count). For this we use Arc::from_raw() to get back an Arc, and then clone the Arc. If we didn’t do anything else, at the end of the function our original Arc would get its Drop implementation called and the reference count decreased, defeating the whole point of the function. To prevent that, we convert the original Arc to a raw pointer again and “leak” it. That is, we don’t destroy the reference owned by the caller, which would cause double free problems later.

Apart from this, everything is really the same. And also the C header looks basically the same.

Object types

Now let’s start with the more interesting part: actual subclasses of GObject with all the features you know from object-oriented languages. Everything up to here was only warm-up, even if useful by itself already to expose normal Rust types to C with a slightly more expressive API.

In GObject, subclasses of the GObject base class (think of Object in Java or C#, the most basic type from which everything inherits) all get the following main features from the base class: reference counting, inheritance, virtual methods, properties and signals. Similarly to boxed types, some functions and structs are registered at runtime with the GObject library to get back a type ID, but it is slightly more involved. And our structs must be #[repr(C)] and be structured in a very specific way.

Struct Definitions

Every GObject subclass has two structs: 1) an instance struct that defines the memory layout of every instance and could contain public fields, and 2) a class struct that stores the class-specific data, with the instance struct containing a pointer to it. The class struct is more or less what the vtable would be in C++, i.e. the place where virtual methods are stored, but in GObject it can also contain fields, for example. We define a new type Foo that inherits from GObject.

#[repr(C)]
pub struct Foo {
    pub parent: gobject_ffi::GObject,
}

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
}

The first element of each struct must be the corresponding struct of the class we inherit from. This later allows casting pointers of our subclass to pointers of the base class, and reusing all API implemented for the base class. In our example here we don’t define any public fields or virtual methods; the version in the repository has them, but we get to that later.

Now we will actually need to be able to store some state with our objects, but we want to have that state private. For that we define another struct, a plain Rust struct this time

struct FooPrivate {
    name: RefCell<Option<String>>,
    counter: RefCell<i32>,
}

This uses a RefCell for each field, as in GObject modifications of objects are conceptually all done via interior mutability. For a thread-safe object these would have to be Mutexes instead.

Type Registration

In the end we glue all this together and register it to the GObject type system via a get_type() function, similar to the one for boxed types

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_type() -> glib_ffi::GType {
    callback_guard!();

    static mut TYPE: glib_ffi::GType = gobject_ffi::G_TYPE_INVALID;
    static ONCE: Once = ONCE_INIT;

    ONCE.call_once(|| {
        let type_info = gobject_ffi::GTypeInfo {
            class_size: mem::size_of::<FooClass>() as u16,
            base_init: None,
            base_finalize: None,
            class_init: Some(FooClass::init),
            class_finalize: None,
            class_data: ptr::null(),
            instance_size: mem::size_of::<Foo>() as u16,
            n_preallocs: 0,
            instance_init: Some(Foo::init),
            value_table: ptr::null(),
        };

        let type_name = CString::new("ExFoo").unwrap();

        TYPE = gobject_ffi::g_type_register_static(
            gobject_ffi::g_object_get_type(),
            type_name.as_ptr(),
            &type_info,
            gobject_ffi::GTypeFlags::empty(),
        );
    });

    TYPE
}

The main difference here is that we call g_type_register_static(), which takes a struct as parameter that contains all the information about our new subclass. In that struct we provide sizes of the class and instance struct (GObject is allocating them for us), various uninteresting fields for now and two function pointers: 1) class_init for initializing the class struct as allocated by GObject (here we would also override virtual methods, define signals or properties for example) and 2) instance_init to do the same with the instance struct. Both structs are zero-initialized in the parts we defined, and the parent parts of both structs are initialized by the code for the parent class already.

Struct Initialization

These two functions look like the following for us (the versions in the repository already do more things)

impl Foo {
    unsafe extern "C" fn init(obj: *mut gobject_ffi::GTypeInstance, _klass: glib_ffi::gpointer) {
        callback_guard!();

        let private = gobject_ffi::g_type_instance_get_private(
            obj as *mut gobject_ffi::GTypeInstance,
            ex_foo_get_type(),
        ) as *mut Option<FooPrivate>;

        // Here we initialize the private data. By default it is all zero-initialized
        // but we don't really want to have any Drop impls run here so just overwrite the
        // data
        ptr::write(
            private,
            Some(FooPrivate {
                name: RefCell::new(None),
                counter: RefCell::new(0),
            }),
        );
    }
}

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        callback_guard!();

        // This is an Option<_> so that we can replace its value with None on finalize() to
        // release all memory it holds
        gobject_ffi::g_type_class_add_private(klass, mem::size_of::<Option<FooPrivate>>() as usize);
    }
}

During class initialization, we tell GObject about the size of our private struct, but we actually wrap it in an Option. This allows us to later replace it simply with None to deallocate all memory related to it. During instance initialization this private struct is already allocated for us by GObject (and zero-initialized), so we simply get a raw pointer to it via g_type_instance_get_private() and write an initialized struct to that pointer. Raw pointers must be used here so that the Drop implementation of Option is not run for the old, zero-initialized memory when replacing the struct.

As you might’ve noticed, we currently never set the private struct to None to release the memory, effectively leaking it, but we get to that later when talking about virtual methods.

Constructor

With what we have so far, it’s already possible to create new instances of our subclass, and for that we also define a constructor function now

#[no_mangle]
pub unsafe extern "C" fn ex_foo_new() -> *mut Foo {
    callback_guard!();

    let this = gobject_ffi::g_object_newv(
        ex_foo_get_type(),
        0,
        ptr::null_mut(),
    );

    this as *mut Foo
}

There is probably not much to explain here: we only tell GObject to allocate a new instance of our specific type (by providing the type ID), which causes the memory to be allocated and our initialization functions to be called: class_init the very first time, instance_init every time.

Methods

All this would be rather boring at this point because there is no way to actually do something with our object, so various functions are defined to work with the private data. For example to get the value of the counter

impl Foo {
    fn get_counter(_this: &FooWrapper, private: &FooPrivate) -> i32 {
        *private.counter.borrow()
    }
}

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_counter(this: *mut Foo) -> i32 {
    callback_guard!();

    let private = (*this).get_priv();

    Foo::get_counter(&from_glib_borrow(this), private)
}

This gets the private struct from GObject (get_priv() is a helper function that does the same as we did in instance_init), and then calls a safe Rust function implemented on our struct to actually get the value. Notably, we don’t pass &self to that function, but something called FooWrapper. This is a GTK-rs style wrapper type that directly allows using any API implemented on parent classes and provides various other functionality. It is defined in mod.rs, but we will talk about that later.
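
For reference, a minimal sketch of what such a get_priv() helper could look like (the actual version in the repository might differ slightly):

impl Foo {
    unsafe fn get_priv(&self) -> &FooPrivate {
        // Same lookup as in instance_init(); the Option is always Some between
        // instance_init() and finalize(), so unwrapping is fine here
        let private = gobject_ffi::g_type_instance_get_private(
            self as *const Foo as *mut gobject_ffi::GTypeInstance,
            ex_foo_get_type(),
        ) as *const Option<FooPrivate>;

        (*private).as_ref().unwrap()
    }
}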

Inheritance

GObject allows single inheritance from a base class, similar to Java and C#. All behaviour of the base class is inherited, and API of the base class can be used on the subclass.

I shortly hinted at how that works above already: 1) instance and class struct have the parent class’ structs as first field, so casting to pointers of the parent class work just fine, 2) GObject is told what the parent class is in the call to g_type_register_static(). We did that above already, as we inherited from GObject.

By inheriting from GObject, we can e.g. call g_object_ref() to do reference counting, or any of the other GObject API. It also allows the Rust wrapper type defined in mod.rs to provide the appropriate API of the base class to us without any casts, and to do memory management automatically. How that works will probably be explained in one of the following blog posts on Federico’s blog.
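
For example, plain GObject reference counting applies directly to our instances (a minimal unsafe sketch; the Rust wrapper type would normally handle this for us automatically):

unsafe {
    let foo = ex_foo_new();

    // Take one additional reference, then drop both: the second unref
    // destroys the object and runs finalize()
    gobject_ffi::g_object_ref(foo as *mut gobject_ffi::GObject);
    gobject_ffi::g_object_unref(foo as *mut gobject_ffi::GObject);
    gobject_ffi::g_object_unref(foo as *mut gobject_ffi::GObject);
}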

In the example repository, there is also another type defined which inherits from our type Foo, called Bar. It’s basically the same code again, except for the name and parent type.

#[repr(C)]
pub struct Bar {
    pub parent: foo::imp::Foo,
}

#[repr(C)]
pub struct BarClass {
    pub parent_class: foo::imp::FooClass,
}

#[no_mangle]
pub unsafe extern "C" fn ex_bar_get_type() -> glib_ffi::GType {
    [...]
        TYPE = gobject_ffi::g_type_register_static(
            foo::imp::ex_foo_get_type(),
            type_name.as_ptr(),
            &type_info,
            gobject_ffi::GTypeFlags::empty(),
        );
    [...]
}

Virtual Methods

Overriding Virtual Methods

Inheritance alone is already useful for reducing code duplication, but to make it really useful virtual methods are needed so that behaviour can be adjusted. In GObject this works similar to how it’s done in e.g. C++, just manually: you place function pointers to the virtual method implementations into the class struct and then call those. As every subclass has its own copy of the class struct (initialized with the values from the parent class), it can override these with whatever function it wants. And as it’s possible to get the actual class struct of the parent class, it is possible to chain up to the implementation of the virtual function of the parent class. Let’s look at the example of the GObject::finalize virtual method, which is called at the very end when the object is to be destroyed and which should free all memory. In there we will free our private data struct with the RefCells.

As a first step, we need to override the function pointer in the class struct in our class_init function and replace it with another function that implements the behaviour we want

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        {
            let gobject_klass = &mut *(klass as *mut gobject_ffi::GObjectClass);
            gobject_klass.finalize = Some(Foo::finalize);
        }
        [...]
    }
}

impl Foo {
    unsafe extern "C" fn finalize(obj: *mut gobject_ffi::GObject) {
        callback_guard!();

        // Free private data by replacing it with None
        let private = gobject_ffi::g_type_instance_get_private(
            obj as *mut gobject_ffi::GTypeInstance,
            ex_foo_get_type(),
        ) as *mut Option<FooPrivate>;
        let _ = (*private).take();

        (*PRIV.parent_class).finalize.map(|f| f(obj));
    }
}

This new function could call into a safe Rust implementation, as is done for other virtual methods (see a bit later), but for finalize we have to do manual memory management and that’s all unsafe Rust. The way we free the memory here is by take()ing the Some value out of the Option that contains our private struct, and then letting it be dropped. Afterwards we have to chain up to the parent class’ implementation of finalize, which is done by calling map() on the Option that contains the function pointer.

All the function pointers in glib-sys and related crates are stored in Options, to be able to distinguish between a NULL function pointer and an actual pointer to a function.

Now for chaining up to the parent class’ finalize implementation, there’s a static, global variable containing a pointer to the parent class’ class struct, called PRIV. This is also initialized in the class_init function

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
};

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        PRIV.parent_class =
            gobject_ffi::g_type_class_peek_parent(klass) as *const gobject_ffi::GObjectClass;
    }
}

While this is a static mut global variable, it is fine here as it’s only ever written to once from class_init, and is only ever read after class_init is done.

Defining New Virtual Methods

For defining new virtual methods, we would add a corresponding function pointer to the class struct and optionally initialize it to a default implementation in the class_init function, or otherwise keep it at NULL/None.

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
    pub increment: Option<unsafe extern "C" fn(*mut Foo, inc: i32) -> i32>,
}

impl FooClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        {
            let foo_klass = &mut *(klass as *mut FooClass);
            foo_klass.increment = Some(Foo::increment_trampoline);
        }
    }
}

The trampoline function provided here is responsible for converting from the C types to the Rust types, and then calling a safe Rust implementation of the virtual method.

impl Foo {
    unsafe extern "C" fn increment_trampoline(this: *mut Foo, inc: i32) -> i32 {
        callback_guard!();

        let private = (*this).get_priv();

        Foo::increment(&from_glib_borrow(this), private, inc)
    }

    fn increment(this: &FooWrapper, private: &FooPrivate, inc: i32) -> i32 {
        let mut val = private.counter.borrow_mut();

        *val += inc;

        *val
    }
}

To make it possible to call these virtual methods from the outside, a C function has to be defined again similar to the ones for non-virtual methods. Instead of calling the Rust implementation directly, this gets the class struct of the type that is passed in and then calls the function pointer for the virtual method implementation of that specific type.

#[no_mangle]
pub unsafe extern "C" fn ex_foo_increment(this: *mut Foo, inc: i32) -> i32 {
    callback_guard!();

    let klass = (*this).get_class();

    (klass.increment.as_ref().unwrap())(this, inc)
}
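
The get_class() helper used in this snippet is not shown here; a minimal sketch of how it could look, following the same pattern as the interface helper we’ll see later for Nameable (the helper’s name and placement are assumptions for illustration):

impl Foo {
    // Get our class struct from the instance pointer; every GTypeInstance
    // starts with a pointer to the class struct of its type
    fn get_class(&self) -> &FooClass {
        unsafe {
            let klass = (*(self as *const _ as *const gobject_ffi::GTypeInstance)).g_class;
            &*(klass as *const FooClass)
        }
    }
}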

Subclasses would override this default implementation (or provide an actual implementation) exactly the same way, and also chain up to the parent class’ implementation like we saw before for GObject::finalize.
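
As a sketch of what such an override could look like for the Bar subclass (the trampoline name, the doubling behaviour and the Bar-side PRIV struct are assumptions for illustration; the actual repository may differ):

struct BarClassPrivate {
    parent_class: *const foo::imp::FooClass,
}
static mut PRIV: BarClassPrivate = BarClassPrivate {
    parent_class: 0 as *const _,
};

impl BarClass {
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        // Remember the parent class struct for chaining up
        PRIV.parent_class =
            gobject_ffi::g_type_class_peek_parent(klass) as *const foo::imp::FooClass;

        // Replace Foo's implementation of increment with our own
        let foo_klass = &mut *(klass as *mut foo::imp::FooClass);
        foo_klass.increment = Some(Bar::increment_trampoline);
    }
}

impl Bar {
    unsafe extern "C" fn increment_trampoline(this: *mut foo::imp::Foo, inc: i32) -> i32 {
        callback_guard!();

        // Adjust the behaviour (here: double the increment), then chain up
        // to the parent class' implementation
        (*PRIV.parent_class)
            .increment
            .map(|f| f(this, 2 * inc))
            .unwrap_or(0)
    }
}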

Properties

Similar to Objective-C and C#, GObject has support for properties. These are registered per type, have some metadata attached to them (property type, name, description, writability, valid value range, etc), and subclasses inherit them and can override them. The main difference between properties and struct fields is that setting/getting a property value executes some code instead of just reading or writing a memory location, and that you can connect a callback to a property to be notified whenever its value changes. Properties can also be queried at runtime from a specific type, and set/get via their string names instead of actual C API. Allowed types for properties are everything that has a GObject type ID assigned, including all GObject subclasses, many fundamental types (integers, strings, …) and boxed types like our RString and SharedRString above.

Defining Properties

To define a property, we have to register it in the class_init function and also implement the GObject::get_property() and GObject::set_property() virtual methods (or only one of them for read-only / write-only properties). Internally, inside the implementation of our GObject, properties are identified by an integer index for which we define a simple enum, and when registering we get back a GParamSpec pointer that we should also store (e.g. for notifying about property changes).

#[repr(u32)]
enum Properties {
    Name = 1,
}

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
    properties: *const Vec<*const gobject_ffi::GParamSpec>,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
    properties: 0 as *const _,
};

In class_init we then override the two virtual methods and register a new property, providing the name, the type, the value of our enum corresponding to that property, a default value and various other metadata. We then store the GParamSpec related to the property in a Vec, indexed by the enum value. In our example we add a string-typed “name” property that is readable and writable, but can only ever be written to during object construction.

impl FooClass {
    // Class struct initialization, called from GObject
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        {
            let gobject_klass = &mut *(klass as *mut gobject_ffi::GObjectClass);
            gobject_klass.finalize = Some(Foo::finalize);
            gobject_klass.set_property = Some(Foo::set_property);
            gobject_klass.get_property = Some(Foo::get_property);

            let mut properties = Vec::new();

            let name_cstr = CString::new("name").unwrap();
            let nick_cstr = CString::new("Name").unwrap();
            let blurb_cstr = CString::new("Name of the object").unwrap();

            // Property IDs start at 1, so index 0 stays unused
            properties.push(ptr::null());
            properties.push(gobject_ffi::g_param_spec_string(
                name_cstr.as_ptr(),
                nick_cstr.as_ptr(),
                blurb_cstr.as_ptr(),
                ptr::null_mut(),
                gobject_ffi::G_PARAM_READWRITE | gobject_ffi::G_PARAM_CONSTRUCT_ONLY,
            ));
            gobject_ffi::g_object_class_install_properties(
                gobject_klass,
                properties.len() as u32,
                properties.as_mut_ptr() as *mut *mut _,
            );

            PRIV.properties = Box::into_raw(Box::new(properties));
        }
    }
}

Afterwards we define the trampoline implementations for the set_property and get_property virtual methods.

impl Foo {
    unsafe extern "C" fn set_property(
        obj: *mut gobject_ffi::GObject,
        id: u32,
        value: *mut gobject_ffi::GValue,
        _pspec: *mut gobject_ffi::GParamSpec,
    ) {
        callback_guard!();

        let this = &*(obj as *mut Foo);
        let private = (*this).get_priv();

        // FIXME: How to get rid of the transmute?
        match mem::transmute::<u32, Properties>(id) {
            Properties::Name => {
                // FIXME: Need impl FromGlibPtrBorrow for Value
                let name = gobject_ffi::g_value_get_string(value);
                Foo::set_name(
                    &from_glib_borrow(obj as *mut Foo),
                    private,
                    from_glib_none(name),
                );
            }
            _ => unreachable!(),
        }
    }

    unsafe extern "C" fn get_property(
        obj: *mut gobject_ffi::GObject,
        id: u32,
        value: *mut gobject_ffi::GValue,
        _pspec: *mut gobject_ffi::GParamSpec,
    ) {
        callback_guard!();

        let private = (*(obj as *mut Foo)).get_priv();

        // FIXME: How to get rid of the transmute?
        match mem::transmute::<u32, Properties>(id) {
            Properties::Name => {
                let name = Foo::get_name(&from_glib_borrow(obj as *mut Foo), private);
                // FIXME: Need impl FromGlibPtrBorrow for Value
                gobject_ffi::g_value_set_string(value, name.to_glib_none().0);
            }
            _ => unreachable!(),
        }
    }
}

In there we decide, based on the index, which property is meant, convert from/to the GValue container provided by GObject, and then call into the safe Rust getters/setters.

impl Foo {
    fn get_name(_this: &FooWrapper, private: &FooPrivate) -> Option<String> {
        private.name.borrow().clone()
    }

    fn set_name(_this: &FooWrapper, private: &FooPrivate, name: Option<String>) {
        *private.name.borrow_mut() = name;
    }
}
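
Our “name” property is construct-only, so it never changes after construction, but for a property that stays writable the setter would typically also emit the notify signal. A minimal sketch, using the GParamSpec pointers we stored during registration (the helper itself is hypothetical):

unsafe fn notify_name_changed(obj: *mut Foo) {
    // Emits "notify::name" using the GParamSpec stored in class_init
    gobject_ffi::g_object_notify_by_pspec(
        obj as *mut gobject_ffi::GObject,
        (*PRIV.properties)[Properties::Name as usize] as *mut gobject_ffi::GParamSpec,
    );
}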

This property can now be used via the GObject API, e.g. its value can be retrieved via g_object_get(obj, “name”, &pointer_to_a_char_pointer) in C.
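
The same can be done from unsafe Rust through the FFI, e.g. with g_object_get_property() and an explicitly initialized GValue. A hedged sketch in the style of the rest of the implementation (the helper is hypothetical):

unsafe fn get_name_via_gobject(obj: *mut Foo) -> Option<String> {
    let prop_name = CString::new("name").unwrap();

    // Initialize a GValue of the right type to receive the property value
    let mut value: gobject_ffi::GValue = mem::zeroed();
    gobject_ffi::g_value_init(&mut value, gobject_ffi::G_TYPE_STRING);

    gobject_ffi::g_object_get_property(
        obj as *mut gobject_ffi::GObject,
        prop_name.as_ptr(),
        &mut value,
    );

    // Copy the string out of the GValue, then release the GValue's contents
    let name: Option<String> = from_glib_none(gobject_ffi::g_value_get_string(&value));
    gobject_ffi::g_value_unset(&mut value);

    name
}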

Construct Properties

The property we defined above had one special feature: it can only ever be set during object construction. Similarly, every property that is writable can also be set during object construction. This works by providing a value to g_object_new() in the constructor function, which then causes GObject to pass this to our set_property() implementation.

#[no_mangle]
pub unsafe extern "C" fn ex_foo_new(name: *const c_char) -> *mut Foo {
    callback_guard!();

    let prop_name_name = "name".to_glib_none();
    let prop_name_str: Option<String> = from_glib_none(name);
    let prop_name_value = glib::Value::from(prop_name_str.as_ref());

    let mut properties = [
        gobject_ffi::GParameter {
            name: prop_name_name.0,
            value: prop_name_value.into_raw(),
        },
    ];
    let this = gobject_ffi::g_object_newv(
        ex_foo_get_type(),
        properties.len() as u32,
        properties.as_mut_ptr(),
    );

    gobject_ffi::g_value_unset(&mut properties[0].value);

    this as *mut Foo
}

Signals

GObject also supports signals. These are similar to events in e.g. C#, Qt or the C++ Boost signals library, and not to be confused with UNIX signals. GObject signals allow you to connect a callback that is called every time a specific event happens.

Signal Registration

Similarly to properties, these are registered in class_init together with various metadata, can be queried at runtime and are usually referred to by their string name. Notification about property changes is itself implemented with a signal, the GObject::notify signal.

Also similarly to properties, internally in our implementation the signals are identified by an integer index. We store the signal IDs globally too, indexed by a simple enum.

#[repr(u32)]
enum Signals {
    Incremented = 0,
}

struct FooClassPrivate {
    parent_class: *const gobject_ffi::GObjectClass,
    properties: *const Vec<*const gobject_ffi::GParamSpec>,
    signals: *const Vec<u32>,
}
static mut PRIV: FooClassPrivate = FooClassPrivate {
    parent_class: 0 as *const _,
    properties: 0 as *const _,
    signals: 0 as *const _,
};

In class_init we then register the signal for our type. For that we provide a name, the parameters of the signal (anything that can be stored in a GValue can be used for this again), the return value (we don’t have one here) and various other metadata. GObject then tells us the ID of the signal, which we store in our vector. In our case we define a signal named “incremented”, that is emitted every time the internal counter of the object is incremented and provides the current value of the counter and by how much it was incremented.

impl FooClass {
    // Class struct initialization, called from GObject
    unsafe extern "C" fn init(klass: glib_ffi::gpointer, _klass_data: glib_ffi::gpointer) {
        [...]
        let mut signals = Vec::new();

        let name_cstr = CString::new("incremented").unwrap();
        let param_types = [gobject_ffi::G_TYPE_INT, gobject_ffi::G_TYPE_INT];

        // FIXME: Is there a better way?
        let class_offset = {
            let dummy: FooClass = mem::uninitialized();
            ((&dummy.incremented as *const _ as usize) - (&dummy as *const _ as usize)) as u32
        };

        signals.push(gobject_ffi::g_signal_newv(
            name_cstr.as_ptr(),
            ex_foo_get_type(),
            gobject_ffi::G_SIGNAL_RUN_LAST,
            gobject_ffi::g_signal_type_cclosure_new(ex_foo_get_type(), class_offset),
            None,
            ptr::null_mut(),
            None,
            gobject_ffi::G_TYPE_NONE,
            param_types.len() as u32,
            param_types.as_ptr() as *mut _,
        ));

        PRIV.signals = Box::into_raw(Box::new(signals));
    }
}

One special part here is the class_offset. GObject allows you to (optionally) define a default class handler for the signal. This is always called when the signal is emitted, and is usually a virtual method that can be overridden by subclasses. During signal registration, the offset in bytes from the start of the class struct to the function pointer of that virtual method is provided.

#[repr(C)]
pub struct FooClass {
    pub parent_class: gobject_ffi::GObjectClass,
    pub increment: Option<unsafe extern "C" fn(*mut Foo, inc: i32) -> i32>,
    pub incremented: Option<unsafe extern "C" fn(*mut Foo, val: i32, inc: i32)>,
}

impl Foo {
    unsafe extern "C" fn incremented_trampoline(this: *mut Foo, val: i32, inc: i32) {
        callback_guard!();

        let private = (*this).get_priv();

        Foo::incremented(&from_glib_borrow(this), private, val, inc);
    }

    fn incremented(_this: &FooWrapper, _private: &FooPrivate, _val: i32, _inc: i32) {
        // Could do something here. Default/class handler of the "incremented"
        // signal that could be overridden by subclasses
    }
}

This is all exactly the same as for virtual methods, just that it will be automatically called when the signal is emitted.

Signal Emission

For emitting the signal, we have to provide the instance and the arguments in an array as GValues, and then emit the signal by the ID we got back during signal registration.

impl Foo {
    fn increment(this: &FooWrapper, private: &FooPrivate, inc: i32) -> i32 {
        let mut val = private.counter.borrow_mut();

        *val += inc;

        unsafe {
            let params = [this.to_value(), (*val).to_value(), inc.to_value()];
            gobject_ffi::g_signal_emitv(
                params.as_ptr() as *mut _,
                (*PRIV.signals)[Signals::Incremented as usize],
                0,
                ptr::null_mut(),
            );
        }

        *val
    }
}

While all parameters to the signal are provided as a GValue here, GObject calls our default class handler and other C callbacks connected to the signal with the corresponding C types directly. The conversion is done inside GObject and then the corresponding function is called via libffi. It is also possible to directly get the array of GValues instead though, by using the GClosure API, for which there are also Rust bindings.

Connecting to the signal can now be done via e.g. g_signal_connect() from C.
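
From unsafe Rust the same connection can be made through the FFI with g_signal_connect_data(), which is what the g_signal_connect() C macro expands to. A hedged sketch (the callback and helper names are made up for illustration):

unsafe extern "C" fn on_incremented(
    _this: *mut Foo,
    val: i32,
    inc: i32,
    _user_data: glib_ffi::gpointer,
) {
    println!("incremented to {} by {}", val, inc);
}

unsafe fn connect_incremented(obj: *mut Foo) {
    let signal_name = CString::new("incremented").unwrap();

    // GCallback is a generic function pointer type, so the handler has to
    // be transmuted to it
    gobject_ffi::g_signal_connect_data(
        obj as *mut gobject_ffi::GObject,
        signal_name.as_ptr(),
        Some(mem::transmute(on_incremented as usize)),
        ptr::null_mut(),
        None,
        gobject_ffi::GConnectFlags::empty(),
    );
}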

C header

Similarly to the boxed types, we also have to define a C header for the exported GObject C API. This ideally would also be autogenerated from the macro based solution (e.g. with rusty-cheddar), but here we write it manually. This is mostly GObject boilerplate and conventions.

#include <glib-object.h>

G_BEGIN_DECLS

#define EX_TYPE_FOO            (ex_foo_get_type())
#define EX_FOO(obj)            (G_TYPE_CHECK_INSTANCE_CAST((obj),EX_TYPE_FOO,ExFoo))
#define EX_IS_FOO(obj)         (G_TYPE_CHECK_INSTANCE_TYPE((obj),EX_TYPE_FOO))
#define EX_FOO_CLASS(klass)    (G_TYPE_CHECK_CLASS_CAST((klass) ,EX_TYPE_FOO,ExFooClass))
#define EX_IS_FOO_CLASS(klass) (G_TYPE_CHECK_CLASS_TYPE((klass) ,EX_TYPE_FOO))
#define EX_FOO_GET_CLASS(obj)  (G_TYPE_INSTANCE_GET_CLASS((obj) ,EX_TYPE_FOO,ExFooClass))

typedef struct _ExFoo      ExFoo;
typedef struct _ExFooClass ExFooClass;

struct _ExFoo {
  GObject parent;
};

struct _ExFooClass {
  GObjectClass parent_class;

  gint (*increment) (ExFoo * foo, gint inc);
  void (*incremented) (ExFoo * foo, gint val, gint inc);
};

GType   ex_foo_get_type    (void);

ExFoo * ex_foo_new         (const gchar * name);

gint    ex_foo_increment   (ExFoo * foo, gint inc);
gint    ex_foo_get_counter (ExFoo * foo);
gchar * ex_foo_get_name    (ExFoo * foo);

G_END_DECLS

Interfaces

While GObject only allows single inheritance, it provides the ability to implement any number of interfaces on a class to provide a common API between independent types. These interfaces are similar to what exists in Java and C#, but similar to Rust traits it is possible to provide default implementations for the interface methods. Also similar to Rust traits, interfaces can declare pre-requisites: interfaces an implementor must also implement, or a base type it must inherit from.

In the repository, a Nameable interface with a get_name() method is implemented. Generally it all works exactly the same as with non-interface types and virtual methods. You register a type with GObject that inherits from G_TYPE_INTERFACE. This type only has a class struct, no instance struct; instead of an instance struct, a typedef’d void * pointer is used, behind which lives the instance struct of the actual type implementing the interface. A default implementation of the methods can be provided the same way as with virtual methods in class_init.
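
For reference, the interface struct and its registration could look roughly like this (a sketch following the conventions from earlier; the type_info setup is elided just like in the snippets above):

#[repr(C)]
pub struct NameableInterface {
    pub parent: gobject_ffi::GTypeInterface,
    pub get_name: Option<unsafe extern "C" fn(*mut Nameable) -> *mut c_char>,
}

#[no_mangle]
pub unsafe extern "C" fn ex_nameable_get_type() -> glib_ffi::GType {
    [...]
        TYPE = gobject_ffi::g_type_register_static(
            gobject_ffi::G_TYPE_INTERFACE,
            type_name.as_ptr(),
            &type_info,
            gobject_ffi::GTypeFlags::empty(),
        );

        // Declare a pre-requisite: implementors must be GObject subclasses
        gobject_ffi::g_type_interface_add_prerequisite(TYPE, gobject_ffi::G_TYPE_OBJECT);
    [...]
}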

There are two main differences though. One is for calling an interface method

impl Nameable {
    // Helper functions
    fn get_iface(&self) -> &NameableInterface {
        unsafe {
            let klass = (*(self as *const _ as *const gobject_ffi::GTypeInstance)).g_class;
            let interface =
                gobject_ffi::g_type_interface_peek(klass as *mut c_void, ex_nameable_get_type());
            &*(interface as *const NameableInterface)
        }
    }
}

#[no_mangle]
pub unsafe extern "C" fn ex_nameable_get_name(this: *mut Nameable) -> *mut c_char {
    callback_guard!();

    let iface = (*this).get_iface();
    iface.get_name.map(|f| f(this)).unwrap_or(ptr::null_mut())
}

Instead of directly getting the class struct from the instance, we have to call some GObject API to get the interface struct of a specific interface type ID with the virtual methods.

The other difference is in the implementation of the interface. Inside the get_type() function a new set of functions is registered, which are used similarly to class_init for initializing the interface struct

#[no_mangle]
pub unsafe extern "C" fn ex_foo_get_type() -> glib_ffi::GType {
        [...]
        // Implement Nameable interface here
        let nameable_info = gobject_ffi::GInterfaceInfo {
            interface_init: Some(FooClass::init_nameable_interface),
            interface_finalize: None,
            interface_data: ptr::null_mut(),
        };
        gobject_ffi::g_type_add_interface_static(
            TYPE,
            ::nameable::imp::ex_nameable_get_type(),
            &nameable_info,
        );
    });
}

impl FooClass {
    unsafe extern "C" fn init_nameable_interface(
        iface: glib_ffi::gpointer,
        _iface_data: glib_ffi::gpointer,
    ) {
        let iface = &mut *(iface as *mut ::nameable::imp::NameableInterface);
        iface.get_name = Some(Foo::nameable_get_name_trampoline);
    }
}

The interface also gets a C header, which looks basically the same as for normal classes.

Usage from C

As mentioned above a few times, we export a normal (GObject) C API. For that, various headers have to be written, or ideally generated later. These can all be found here.

Nothing special has to be taken care of when using this API from C: you simply link to the generated shared library, use the headers and then use it like any other GObject based C API.

Usage from Rust

I mentioned briefly above that there are gtk-rs-style Rust bindings in the mod.rs. These are also what is passed (as the “Wrapper” arguments) to the safe Rust implementations of the methods.
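
Such a binding could look roughly like the following, using the glib_wrapper! macro from the glib crate (a sketch following the gtk-rs conventions of the time; the exact macro invocation in the repository may differ):

glib_wrapper! {
    pub struct Foo(Object<imp::Foo>);

    match fn {
        get_type => || imp::ex_foo_get_type(),
    }
}

impl Foo {
    pub fn new(name: Option<&str>) -> Foo {
        // ex_foo_new() returns a new reference, so take full ownership
        unsafe { from_glib_full(imp::ex_foo_new(name.to_glib_none().0)) }
    }

    pub fn increment(&self, inc: i32) -> i32 {
        unsafe { imp::ex_foo_increment(self.to_glib_none().0, inc) }
    }
}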

Ideally these would be autogenerated from a macro, similarly to what the gir tool can already do for C-based GObject libraries (this is the tool used to generate most of the GLib, GTK, etc. bindings for Rust).

For usage of those bindings, I’ll just let the code speak for itself

    #[test]
    fn test_counter() {
        let foo = Foo::new(Some("foo's name"));

        let incremented = Rc::new(RefCell::new((0i32, 0i32)));
        let incremented_clone = incremented.clone();
        foo.connect_incremented(move |_, val, inc| {
            *incremented_clone.borrow_mut() = (val, inc);
        });

        assert_eq!(foo.get_counter(), 0);
        assert_eq!(foo.increment(1), 1);
        assert_eq!(*incremented.borrow(), (1, 1));
        assert_eq!(foo.get_counter(), 1);
        assert_eq!(foo.increment(10), 11);
        assert_eq!(*incremented.borrow(), (11, 10));
        assert_eq!(foo.get_counter(), 11);
    }

    #[test]
    fn test_new() {
        let s = RString::new(Some("bla"));
        assert_eq!(s.get(), Some("bla".into()));

        let mut s2 = s.clone();
        s2.set(Some("blabla"));
        assert_eq!(s.get(), Some("bla".into()));
        assert_eq!(s2.get(), Some("blabla".into()));
    }

These bindings handle memory management automatically, allow calling base-class methods on instances of a subclass, and provide access to methods, virtual methods, signals, properties, etc.

Usage from Python, JavaScript and Others

Now all this was a lot of boilerplate, but here comes the reason why it is probably all worth it. By exporting a GObject-style C API, we automatically get support for generating bindings for dozens of languages, without having to write any more code. This is possible thanks to the strong API conventions of GObject, and the GObject-Introspection project. Supported languages are for example Rust (of course!), Python, JavaScript (GJS and Node), Go, C++, Haskell, C#, Perl, PHP, Ruby, …

GObject-Introspection achieves this by scanning the C headers, introspecting the GObject types and then generating an XML based API description (which also contains information about ownership transfer!). This XML based API description can then be used by code generators for static, compiled bindings (e.g. Rust, Go, Haskell, …), but it can also be compiled to a so-called “typelib”. The typelib provides a C ABI that allows bindings to be generated at runtime, mostly used by scripting languages (e.g. Python and JavaScript).

To show the power of this, I’ve included a simple Python and JavaScript (GJS) application that uses all the types we defined above, and a Makefile that generates the GObject-Introspection metadata and can directly run the Python and JavaScript applications (“make run-python” and “make run-javascript”).

The Python code looks as follows

#! /usr/bin/python3

import gi
gi.require_version("Ex", "0.1")
from gi.repository import Ex

def on_incremented(obj, val, inc):
    print("incremented to {} by {}".format(val, inc))

foo = Ex.Foo.new("foo's name")
foo.connect("incremented", on_incremented)

print("foo name: " + str(foo.get_name()))
print("foo inc 1: " + str(foo.increment(1)))
print("foo inc 10: " + str(foo.increment(10)))
print("foo counter: " + str(foo.get_counter()))

bar = Ex.Bar.new("bar's name")
bar.connect("incremented", on_incremented)

print("bar name: " + str(bar.get_name()))
print("bar inc 1: " + str(bar.increment(1)))
print("bar inc 10: " + str(bar.increment(10)))
print("bar counter: " + str(bar.get_counter()))

print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))
bar.set_number(10.0)
print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))
bar.set_property("number", 20.0)
print("bar number: " + str(bar.get_number()))
print("bar number (property): " + str(bar.get_property("number")))

s = Ex.RString.new("something")
print("rstring: " + str(s.get()))
s2 = s.copy()
s2.set("something else")
print("rstring 2: " + str(s2.get()))

s = Ex.SharedRString.new("something")
print("shared rstring: " + str(s.get()))
s2 = s.ref()
print("shared rstring 2: " + str(s2.get()))

and the JavaScript (GJS) code as follows

#!/usr/bin/gjs

const Lang = imports.lang;
const Ex = imports.gi.Ex;

let foo = new Ex.Foo({name: "foo's name"});
foo.connect("incremented", function(obj, val, inc) {
    print("incremented to " + val + " by " + inc);
});

print("foo name: " + foo.get_name());
print("foo inc 1: " + foo.increment(1));
print("foo inc 10: " + foo.increment(10));
print("foo counter: " + foo.get_counter());

let bar = new Ex.Bar({name: "bar's name"});
bar.connect("incremented", function(obj, val, inc) {
    print("incremented to " + val + " by " + inc);
});

print("bar name: " + bar.get_name());
print("bar inc 1: " + bar.increment(1));
print("bar inc 10: " + bar.increment(10));
print("bar counter: " + bar.get_counter());

print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);
bar.set_number(10.0);
print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);
bar["number"] = 20.0;
print("bar number: " + bar.get_number());
print("bar number (property): " + bar["number"]);

let s = new Ex.RString("something");
print("rstring: " + s.get());
let s2 = s.copy();
s2.set("something else");
print("rstring2: " + s2.get());

s = new Ex.SharedRString("something");
print("shared rstring: " + s.get());
s2 = s.ref();
print("shared rstring2: " + s2.get());

Both do the same thing, and nothing particularly useful: they simply exercise all of the available API.

What next?

While everything here can be used as-is already (and I use a variation of this in gst-plugin-rs, a crate to write GStreamer plugins in Rust), it’s rather inconvenient. The goal of this blog post is to have a low-level explanation about how all this works in GObject with Rust, and to have a “template” to use for Nikos’ gnome-class macro. Federico is planning to work on this in the near future, and step by step move features from my repository to the macro. Work on this will also be done at the GNOME/Rust hackfest in November in Berlin, which will hopefully yield a lot of progress on the macro but also on the bindings in general.

In the end, this macro would ideally end up in the glib-rs bindings and can then be used directly by anybody to implement GObject subclasses in Rust. At that point, this blog post can hopefully help a bit as documentation to understand how the macro works.

on September 06, 2017 01:47 PM

September 05, 2017

Previously: v4.12.

Here’s a short summary of some of the interesting security things in Sunday’s v4.13 release of the Linux kernel:

security documentation ReSTification
The kernel has been switching to formatting documentation with ReST, and I noticed that none of the Documentation/security/ tree had been converted yet. I took the opportunity to make a few passes at formatting the existing documentation and, at Jon Corbet’s recommendation, split it up between end-user documentation (which is mainly how to use LSMs) and developer documentation (which is mainly how to use various internal APIs). A bunch of these docs need some updating, so maybe with the improved visibility, they’ll get some extra attention.

CONFIG_REFCOUNT_FULL
Since Peter Zijlstra implemented the refcount_t API in v4.11, Elena Reshetova (with Hans Liljestrand and David Windsor) has been systematically replacing atomic_t reference counters with refcount_t. As of v4.13, there are now close to 125 conversions with many more to come. However, there were concerns over the performance characteristics of the refcount_t implementation from the maintainers of the net, mm, and block subsystems. In order to assuage these concerns and help the conversion progress continue, I added an “unchecked” refcount_t implementation (identical to the earlier atomic_t implementation) as the default, with the fully checked implementation now available under CONFIG_REFCOUNT_FULL. The plan is that for v4.14 and beyond, the kernel can grow per-architecture implementations of refcount_t that have performance characteristics on par with atomic_t (as done in grsecurity’s PAX_REFCOUNT).

CONFIG_FORTIFY_SOURCE
Daniel Micay created a version of glibc’s FORTIFY_SOURCE compile-time and run-time protection for finding overflows in the common string (e.g. strcpy, strcmp) and memory (e.g. memcpy, memcmp) functions. The idea is that since the compiler already knows the size of many of the buffer arguments used by these functions, it can already build in checks for buffer overflows. When all the sizes are known at compile time, this can actually allow the compiler to fail the build instead of continuing with a proven overflow. When only some of the sizes are known (e.g. destination size is known at compile-time, but source size is only known at run-time) run-time checks are added to catch any cases where an overflow might happen. Adding this found several places where minor leaks were happening, and Daniel and I chased down fixes for them.

One interesting note about this protection is that it only examines the size of the whole object (via __builtin_object_size(..., 0)). If you have a string within a structure, CONFIG_FORTIFY_SOURCE as currently implemented will make sure only that you can’t copy beyond the structure (but therefore, you can still overflow the string within the structure). The next step in enhancing this protection is to switch from 0 (above) to 1, which will use the closest surrounding subobject (e.g. the string). However, there are a lot of cases where the kernel intentionally copies across multiple structure fields, which means more fixes are needed before this higher level can be enabled.

NULL-prefixed stack canary
Rik van Riel and Daniel Micay changed how the stack canary is defined on 64-bit systems to always make sure that the leading byte is zero. This provides a deterministic defense against overflowing string functions (e.g. strcpy), since they will either stop an overflowing read at the NULL byte, or be unable to write a NULL byte, thereby always triggering the canary check. This does reduce the entropy from 64 bits to 56 bits for overflow cases where NULL bytes can be written (e.g. memcpy), but the trade-off is worth it. (Besides, x86_64’s canary was 32-bit until recently.)

IPC refactoring
Partially in support of allowing IPC structure layouts to be randomized by the randstruct plugin, Manfred Spraul and I reorganized the internal layout of how IPC is tracked in the kernel. The resulting allocations are smaller and much easier to deal with, even if I initially missed a few needed container_of() uses.

randstruct gcc plugin
I ported grsecurity’s clever randstruct gcc plugin to upstream. This plugin allows structure layouts to be randomized on a per-build basis, providing a probabilistic defense against attacks that need to know the location of sensitive structure fields in kernel memory (which is most attacks). By moving things around in this fashion, attackers need to perform much more work to determine the resulting layout before they can mount a reliable attack.

Unfortunately, due to the timing of the development cycle, only the “manual” mode of randstruct landed in upstream (i.e. marking structures with __randomize_layout). v4.14 will also have the automatic mode enabled, which randomizes all structures that contain only function pointers.

A large number of fixes to support randstruct have been landing from v4.10 through v4.13, most of which were already identified and fixed by grsecurity, but many were novel, either in newly added drivers, as whitelisted cross-structure casts, refactorings (like IPC noted above), or in a corner case on ARM found during upstream testing.

lower ELF_ET_DYN_BASE
One of the issues identified from the Stack Clash set of vulnerabilities was that it was possible to collide stack memory with the highest portion of a PIE program’s text memory since the default ELF_ET_DYN_BASE (the lowest possible random position of a PIE executable in memory) was already so high in the memory layout (specifically, 2/3rds of the way through the address space). Fixing this required teaching the ELF loader how to load interpreters as shared objects in the mmap region instead of as a PIE executable (to avoid potentially colliding with the binary it was loading). As a result, the PIE default could be moved down to ET_EXEC (0x400000) on 32-bit, entirely avoiding the subset of Stack Clash attacks. 64-bit could be moved to just above the 32-bit address space (0x100000000), leaving the entire 32-bit region open for VMs to do 32-bit addressing, but late in the cycle it was discovered that Address Sanitizer couldn’t handle it moving. With most of the Stack Clash risk only applicable to 32-bit, fixing 64-bit has been deferred until there is a way to teach Address Sanitizer how to load itself as a shared object instead of as a PIE binary.

early device randomness
I noticed that early device randomness wasn’t actually getting added to the kernel entropy pools, so I fixed that to improve the effectiveness of the latent_entropy gcc plugin.

That’s it for now; please let me know if I missed anything. As a side note, I was rather alarmed to discover that due to all my trivial ReSTification formatting, and tiny FORTIFY_SOURCE and randstruct fixes, I made it into the most active 4.13 developers list (by patch count) at LWN with 76 patches: a whopping 0.6% of the cycle’s patches. ;)

Anyway, the v4.14 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on September 05, 2017 11:01 PM

Hello MAASters! This is the development summary for the past couple of weeks:

MAAS 2.3 (current development release)

  • Hardware Testing Phase 2
    • Added parameters form for script parameters validation.
    • Accept and validate results from nodes.
    • Added hardware testing 7zip CPU benchmarking builtin script.
    • WIP – ability to send parameters to test scripts and process results of individual components. (e.g. will provide the ability for users to select which disk they want to test, and capture results accordingly)
    • WIP – disk benchmark test via Fio.
  • Network beaconing & better network discovery
    • MAAS controllers now send out beacon advertisements every 30 seconds, regardless of whether or not any solicitations were received.
  • Switch Support
    • Backend changes to automatically detect switches (during commissioning) and make use of the new switch model.
    • Introduce base infrastructure for NOS drivers, similar to the power management one.
    • Install the Rack Controller when deploying a supported Switch (Wedge 40, Wedge 100)
    • UI – Add a switch listing tab behind a feature flag.
  • Minor UI improvements
    • The version of MAAS installed on each controller is now reported on the controller details page.
  • python-libmaas
    • Added ability to power on, power off, and query the power state of a machine.
    • Added PowerState enum to make it easy to check the current power state of a machine.
    • Added ability to reference the children and parent interfaces of an interface.
    • Added ability to reference the owner of node.
    • Added base level `Node` object that `Machine`, `Device`, `RackController`, and `RegionController` extend from.
    • Added `as_machine`, `as_device`, `as_rack_controller`, and `as_region_controller` to the Node object. Allowing the ability to convert a `Node` into the type you need to perform an action on.
  • Bug fixes:
    • LP: #1676992 – force Postgresql restart on maas-region-controller installation.
    • LP: #1708512 – Fix DNS & Description misalignment
    • LP: #1711714 – Add cloud-init reporting for deployed Ubuntu Core systems
    • LP: #1684094 – Make context menu language consistent for IP ranges.
    • LP: #1686246 – Fix docstring for set-storage-layout operation
    • LP: #1681801 – Device discovery – Tooltip misspelled
    • LP: #1688066 – Add Spice graphical console to pod created VM’s
    • LP: #1711700 – Improve DNS reloading so it happens only when required.
    • LP: #1712423, #1712450, #1712422 – Properly handle a ScriptForm being sent an empty file.
    • LP: #1621175 – Generate password for BMC’s with non-spec compliant password policy
    • LP: #1711414 – Fix deleting a rack when it is installed via the snap
    • LP: #1702703 – Can’t run region controller without a rack controller installed.
on September 05, 2017 08:46 PM

September 04, 2017

meteosurf

MeteoSurf is a free multi-source weather forecasting App designed to provide wind and wave conditions of the Mediterranean Sea. It is an application for smartphones and tablets, built as a Progressive Web App, able to supply detailed and updated maps and data showing the heights of sea waves (and other information) in the Central Mediterranean. It is mainly targeted at surfers and wind-surfers, but anyone who needs to know the sea conditions will benefit from this app.

Data can be displayed as animated graphical maps, or as detailed table data. The maps refer to the whole Mediterranean Sea, while the table data is able to provide specific information for any of the major surf spots in the Med.

As of the current version, MeteoSurf shows data collected from 3 different forecasting systems…

Read More… [by Fabio Marzocca]

on September 04, 2017 10:14 AM

Back in February, it was reported that a "smart" doll with wireless capabilities could be used to remotely spy on children and was banned for breaching German laws on surveillance devices disguised as another object.

Would you trust this doll?

For a number of years now there has been growing concern that the management technologies in recent Intel CPUs (ME, AMT and vPro) also conceal capabilities for spying, either due to design flaws (no software is perfect) or backdoors deliberately installed for US spy agencies, as revealed by Edward Snowden. In a 2014 interview, Intel's CEO offered to answer any question, except this one.

The LibreBoot project provides a more comprehensive and technical analysis of the issue, summarized in the statement "the libreboot project recommends avoiding all modern Intel hardware. If you have an Intel based system affected by the problems described below, then you should get rid of it as soon as possible" - eerily similar to the official advice German authorities are giving to victims of Cayla the doll.

All those amateur psychiatrists suggesting LibreBoot developers suffer from symptoms of schizophrenia have had to shut their mouths since May when Intel confirmed a design flaw (or NSA backdoor) in every modern CPU had become known to hackers.

Bill Gates famously started out with the mission to put a computer on every desk and in every home. With more than 80% of new laptops based on an Intel CPU with these hidden capabilities, can you imagine the NSA would not have wanted to come along for the ride?

Four questions everybody should be asking

  • If existing laws can already be applied to Cayla the doll, why haven't they been used to alert owners of devices containing Intel's vPro?
  • Are exploits of these backdoors (either Cayla or vPro) only feasible on a targeted basis, or do the intelligence agencies harvest data from these backdoors on a wholesale level, keeping a mirror image of every laptop owner's hard disk in one of their data centers, just as they already do with phone and Internet records?
  • How long will it be before every fast food or coffee chain with a "free" wifi service starts dipping in to the data exposed by these vulnerabilities as part of their customer profiling initiatives?
  • Since Intel's admissions in May, has anybody seen any evidence that anything is changing, either in what vendors are offering or in terms of how companies and governments outside the US buy technology?

Share your thoughts

This issue was recently raised on the LibrePlanet mailing list. Please feel free to join the list and click here to reply on the thread.

on September 04, 2017 06:09 AM

September 01, 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h and during this time I did 4 days of front desk handling CVE triage (28 commits to the security tracker). I had a bit of time left and opted to work on a package that had been lingering for a while: exiv2. It turns out the security researchers who requested the CVE did not even contact the upstream author, so I opened 12 tickets on GitHub. The upstream author was unaware of those issues and is relatively unfamiliar with the general process of handling security updates. I have started reproducing each issue, and so far they only affect version 0.26 in experimental.

Misc Debian/Kali work

live-build and live-config. I pushed a few updates: dropping the useless xorriso --hardlinks option (as discussed in https://bugs.kali.org/view.php?id=4109), adding a .disk/mkisofs file on request of Thomas Schmitt, and fixing a severe issue with the handling of the locales configuration that broke Wayland sessions entirely.

open-vm-tools and vmwgfx. The switch of GNOME to Wayland by default resulted in multiple regressions reported by Kali users, in particular for VMWare users where desktop resizing was no longer working. There was a patch available but it did not work for me, so I worked with Thomas Hellstrom (of VMWare) to identify the problems and he provided me an updated patch. I submitted this patch to Debian too (bug report, pull request).

Linux 4.12 also showed another regression for VMWare users where the screen would not be refreshed/updated when you are using Wayland/KMS. I did multiple tests for Thomas and provided the requested data so that they could create a fix (which I incorporated into Kali and should come to Debian through the upstream stable tree).

Packaging. I uploaded zim 0.67 to unstable. I fixed an RC bug on shiboken to get pyside and ubertooth back into testing. I had to hack the package to use gcc-6 on mips64el because that architecture is suffering from a severe gcc bug which probably broke a large part of the code compiled since the switch to gcc-7 (and which triggered a test failure in shiboken, fortunately)… I wonder if anybody will make sure to recompile all packages that might have been misbuilt.

Infrastructure. In a discussion on debian-devel, the topic of using tracker.debian.org to store “who is maintaining what” came up again. I responded to let people know that this is something that I’d like to see done and that I have already taken measures in this direction. I wanted to run an experiment with my zim package but quickly ran into a problem with ftpmaster’s lintian auto-rejects (which I reported in #871575).

The BTS is now linking to tracker.debian.org on its web interface. To continue and give a push to this move, I scanned all the files in the qa SVN repository and updated many occurrences of packages.qa.debian.org with tracker.debian.org.

I also spotted a small problem in the way we handle autoremoval mails in tracker.debian.org: we often get them twice. I filed #871683 to get this fixed on release.debian.org.

Bug reports. vmdebootstrap creates an unbootable qemu image (#872999). Bugs in udebs are not shown in the view by source package (#872784). New upstream release of ethtool (#873692). Upstream bug report on systemd: support a systemd.swap=no boot command-line option.

I also shared some of my ideas/dreams in #859867, speaking of a helper tool to set up and maintain up-to-date build chroots and autopkgtest qemu images.

More bug fixes and pull requests. I created a patch to fix a build failure of systemd when /tmp is an overlayfs (#854400, the pull request has been discarded). I fixed the RC bug #853570 on ncrack and forwarded my changes upstream (here and here).

Thanks

See you next month for a new summary of my activities.


on September 01, 2017 01:40 PM
There is a wealth of powerful static analysis tools available nowadays for analyzing C source code. These tools help to find bugs in code by just analyzing the source code without actually having to execute the code. Over the past year or so I have been running the following static analysis tools on linux-next every weekday to find kernel bugs:

  • CoverityScan
  • cppcheck
  • smatch
  • sparse
  • clang scan-build
  • gcc

Typically each tool can take 10-25+ hours of compute time to analyze the kernel source; fortunately I have a large server at hand to do this. The automated analysis creates an Ubuntu server VM, installs the required static analysis tools, clones linux-next and then runs the analysis. The VMs are configured to minimize write activity to the host and run with 48 threads and plenty of memory to try to speed up the analysis process.

At the end of each run, the output from the previous run is diff'd against the new output to generate a list of new and fixed issues. I then manually wade through these and try to fix some of the low-hanging fruit when I can find free time to do so.

I've been gathering statistics from the CoverityScan builds for the past 12 months, tracking the number of defects found, the number of outstanding issues and the number of defects eliminated.

The trend is positive: a lot of defects are getting fixed by the Linux developers and the overall number of outstanding issues is heading downwards, which is good to see. The defect rate in linux-next is currently 0.46 issues per 1000 lines (out of over 13 million lines that are being scanned). A typical defect rate for a project this size is 0.5 issues per 1000 lines. Some of these issues are false positives or very minor / insignificant issues that will not cause any run time problems at all, so don't be too alarmed by the statistics.

Using a range of static analysis tools is useful because each one has its own strengths and weaknesses. For example, smatch and sparse are designed for sanity checking the kernel source, so they have some smarts that detect kernel-specific semantic issues. CoverityScan is a commercial product, however they allow open source projects the size of the linux-kernel to be built daily; the web based bug tracking tool is very easy to use and CoverityScan does manage to reliably find bugs that other tools can't reach. Cppcheck is useful as it scans all the code paths by forcibly trying all the #ifdef'd variations of the code, which is useful for the more obscure CONFIG mixes.

Finally, I use clang's scan-build and the latest version of gcc to try and find the more typical warnings found by the static analysis built into modern open source compilers.

The more typical issues being found by static analysis are ones that don't generally appear at run time, such as in corner cases like error handling code paths, resource leaks or resource failure conditions, uninitialized variables or dead code paths.

My intention is to continue this process of daily checking and I hope to report back next September to review the CoverityScan trends for another year.
on September 01, 2017 11:24 AM