March 28, 2017

Welcome to the Ubuntu Weekly Newsletter. This is issue #503 for the weeks March 13 – 26, 2017, and the full version is available here.

In this issue we cover:

This issue of the Ubuntu Weekly Newsletter is brought to you by:

  • Simon Quigley
  • OerHeks
  • Chris Guiver
  • Darin Miller
  • Alan Pope
  • Valorie Zimmerman
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on March 28, 2017 02:29 AM

March 27, 2017

LXD logo

USB devices in containers

It can be pretty useful to pass USB devices to a container. Be that some measurement equipment in a lab or maybe more commonly, an Android phone or some IoT device that you need to interact with.

Similar to what I wrote recently about GPUs, LXD supports passing USB devices into containers. Again, similarly to the GPU case, what’s actually passed into the container is a Unix character device, in this case, a /dev/bus/usb/ device node.

This restricts USB passthrough to those devices and software which use libusb to interact with them. For devices which use a kernel driver, the module should be installed and loaded on the host, and the resulting character or block device be passed to the container directly.

Note that for this to work, you’ll need LXD 2.5 or higher.

Example (Android debugging)

As an example which quite a lot of people should be able to relate to, let's run an LXD container with the Android debugging tools installed, accessing a USB connected phone.

This would for example allow you to have your app’s build system and CI run inside a container and interact with one or multiple devices connected over USB.

First, plug your phone in over USB, make sure it's unlocked and that you have USB debugging enabled:

stgraber@dakara:~$ lsusb
Bus 002 Device 003: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 002: ID 0451:8041 Texas Instruments, Inc. 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 021: ID 17ef:6047 Lenovo 
Bus 001 Device 031: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 004: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 005: ID 046d:0a01 Logitech, Inc. USB Headset
Bus 001 Device 033: ID 0fce:51da Sony Ericsson Mobile Communications AB 
Bus 001 Device 003: ID 0451:8043 Texas Instruments, Inc. 
Bus 001 Device 002: ID 072f:90cc Advanced Card Systems, Ltd ACR38 SmartCard Reader
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

Spot your phone in that list; in my case, that'd be the "Sony Ericsson Mobile" entry.

Now let’s create our container:

stgraber@dakara:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

And install the Android debugging client:

stgraber@dakara:~$ lxc exec c1 -- apt install android-tools-adb
Reading package lists... Done
Building dependency tree 
Reading state information... Done
The following NEW packages will be installed:
 android-tools-adb
0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
Need to get 68.2 kB of archives.
After this operation, 198 kB of additional disk space will be used.
Get:1 http://archive.ubuntu.com/ubuntu xenial/universe amd64 android-tools-adb amd64 5.1.1r36+git20160322-0ubuntu3 [68.2 kB]
Fetched 68.2 kB in 0s (0 B/s) 
Selecting previously unselected package android-tools-adb.
(Reading database ... 25469 files and directories currently installed.)
Preparing to unpack .../android-tools-adb_5.1.1r36+git20160322-0ubuntu3_amd64.deb ...
Unpacking android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up android-tools-adb (5.1.1r36+git20160322-0ubuntu3) ...

We can now attempt to list Android devices with:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached

Since we’ve not passed any USB device yet, the empty output is expected.

Now, let’s pass the specific device listed in “lsusb” above:

stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce productid=51da
Device sony added to c1

And try to list devices again:

stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

To get a shell, you can then use:

stgraber@dakara:~$ lxc exec c1 -- adb shell
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
E5823:/ $

LXD USB devices support hotplug by default. So unplugging the device and plugging it back in on the host will have it removed and re-added to the container.

The “productid” property isn’t required, you can set only the “vendorid” so that any device from that vendor will be automatically attached to the container. This can be very convenient when interacting with a number of similar devices or devices which change productid depending on what mode they’re in.

stgraber@dakara:~$ lxc config device remove c1 sony
Device sony removed from c1
stgraber@dakara:~$ lxc config device add c1 sony usb vendorid=0fce
Device sony added to c1
stgraber@dakara:~$ lxc exec c1 -- adb devices
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
CB5A28TSU6 device

The optional “required” property turns off the hotplug behavior, requiring the device be present for the container to be allowed to start.
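For example, to make the container start only when the phone is actually connected, the device can be re-added with that property set (a sketch reusing the ids from the example above; see the documentation below for the exact semantics):

lxc config device remove c1 sony
lxc config device add c1 sony usb vendorid=0fce productid=51da required=true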

More details on USB device properties can be found here.

Conclusion

We are surrounded by a variety of odd USB devices, a good number of which come with possibly dodgy software, requiring a specific version of a specific Linux distribution to work. It’s sometimes hard to accommodate those requirements while keeping a clean and safe environment.

LXD USB device passthrough helps a lot in such cases, so long as the USB device uses a libusb based workflow and doesn’t require a specific kernel driver.

If you want to add a device which does use a kernel driver, locate the /dev node it creates, check if it’s a character or block device and pass that to LXD as a unix-char or unix-block type device.
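For instance, a USB serial adapter handled by the kernel typically shows up as /dev/ttyUSB0 (a hypothetical example); passing it through would look something like:

lxc config device add c1 serial unix-char path=/dev/ttyUSB0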

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on March 27, 2017 05:23 PM

A few days ago, someone shared with me a project to run video transcoding jobs in Kubernetes.

During her tests, made on a default Kubernetes installation on bare metal servers with 40 cores & 512GB RAM, she allocated 5 full CPU cores to each of the transcoding pods, then scaled up to 6 concurrent tasks per node, thus loading the machines at 75% (on paper). The expectation was that these jobs would run at the same speed as a single task.

The result was slightly underwhelming: as concurrency went up, performance on individual tasks went down. At maximum concurrency, they actually observed a 50% decrease in single-task performance.
I did some research to understand this behavior. It is referenced in several Kubernetes issues such as #10570 and #171, and more generally can be found via a Google search. The documentation itself sheds some light on how the default scheduler works and why performance can be impacted by concurrency on intensive tasks.

There are different methods to allocate CPU time to containers:

  • CPU pinning: each container gets a dedicated set of cores

CPU pinning: if the host has enough CPU cores available, allocate 5 "physical cores" that will be dedicated to this pod/container

  • Temporal slicing: the host's N cores collectively represent an amount of compute time, which you allocate to containers. 5% of CPU time means that for every 100ms, 5ms of compute are dedicated to the task.

Time slicing: each container gets allocated randomly across all cores

Obviously, pinning CPUs can be interesting for some specific workloads, but it has a big scaling problem for the simple reason that you cannot run more pods than you have cores in your cluster.

As a result, Docker defaults to the second one, which also ensures you can have less than 1 CPU allocated to a container.

This has an impact on performance, which is also seen in HPC or any other CPU-intensive task.

Can we mitigate this risk? Maybe. Docker provides the cpuset option at the engine level, but it is not leveraged by Kubernetes. However, LXD containers have the ability to be pinned to physical cores via cpusets, in an automated fashion, as explained in this blog post by @stgraber.
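As a rough sketch of what that looks like (the container name is hypothetical; limits.cpu is the LXD key described in that post):

# give an LXD-based Kubernetes worker 4 CPUs, pinned and rebalanced by LXD
lxc config set kubernetes-worker-1 limits.cpu 4
# or pin it to an explicit set of physical cores
lxc config set kubernetes-worker-1 limits.cpu 0-3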

This opens 2 new options for scheduling our workloads:

  • Slice up our hosts into several LXD Kubernetes workers and see if pinning CPUs for the workers can help us;
  • Include a “burst” option with the native Kubernetes resource primitives, and see if that can help maximise compute throughput in our cluster.

Let’s see how these compare!

TL;DR

You don’t all have the time to read the whole thing so in a nutshell:

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance;
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker;
  • The winning strategy is always to super provision CPU limits to the max so that every bit of performance is allocated instantly to your pods

Note: these results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

The Plan

In this blog post, we will do the following:

  • Setup various Kubernetes clusters: pure bare metal, pure cloud, in LXD containers with strict CPU allocation.
  • Design a minimalistic Helm chart to easily create parallelism
  • Run benchmarks to scale concurrency (up to 32 threads/node)
  • Extract and process logs from these runs to see how concurrency impacts performance per core

Requirements

For this blog post, it is assumed that:

  • You are familiar with Kubernetes
  • You have notions of Helm charting or of Go Templates, as well as using Helm to deploy stuff
  • Having preliminary knowledge of the Canonical Distribution of Kubernetes (CDK) is a plus, but not required.
  • Downloading the code for this post:
git clone https://github.com/madeden/blogposts
cd blogposts/k8s-transcode

Methodology

Our benchmark is a transcoding task. It uses an ffmpeg workload, designed to minimize time to encode by exhausting all the resources allocated to compute as fast as possible. We use a single video for the encoding, so that all transcoding tasks can be compared. To minimize bottlenecks other than pure compute, we use a relatively low bandwidth video, stored locally on each host.

The transcoding job is run multiple times, with the following variations:

  • CPU allocation from 0.1 to 7 CPU Cores
  • Memory from 0.5 to 8GB RAM
  • Concurrency from 1 to 32 concurrent threads per host
  • (Concurrency * CPU Allocation) never exceeds the number of cores of a single host

We measure for each pod how long the encoding takes, then look at correlations between that and our variables.

Charting a simple transcoder

Transcoding with ffmpeg and Docker

When I want to do something with a video, the first thing I do is call my friend Ronan. He knows everything about everything for transcoding (and more)!

So I asked him something pretty straightforward: I want the most CPU intensive ffmpeg transcoding one liner you can think of.

He came back (in less than 30 minutes) not only with the one-liner, but also with a very neat Docker image for it; kudos to Julien for making this. Altogether you get:


docker run --rm -v $PWD:/tmp jrottenberg/ffmpeg:ubuntu \
  -i /tmp/source.mp4 \
  -stats -c:v libx264 \
  -s 1920x1080 \
  -crf 22 \
  -profile:v main \
  -pix_fmt yuv420p \
  -threads 0 \
  -f mp4 -ac 2 \
  -c:a aac -b:a 128k \
  -strict -2 \
  /tmp/output.mp4

The key to this setup is -threads 0, which tells ffmpeg that it's an all-you-can-eat buffet.
For test videos, HD Trailers or Sintel trailers are great sources. I'm using a 1080p mp4 trailer as the source.
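For reference, grabbing the Sintel trailer used here is a one-liner (the URL is the same one used to seed the cluster workers later in this post):

wget https://download.blender.org/durian/trailer/sintel_trailer-1080p.mp4 -O source.mp4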

Helm Chart

Transcoding maps directly to the notion of Job in Kubernetes. Jobs are batch processes that can be orchestrated very easily, and configured so that Kubernetes will not restart them when the job is done. The equivalent to Deployment Replicas is Job Parallelism.

I initially used this notion to add concurrency. It proved a bad approach, making it more complicated than necessary to analyze the output logs. So I built a chart that creates many (numbered) jobs, each running a single pod, so I can easily track them and their logs.


{{- $type := .Values.type -}}
{{- $parallelism := .Values.parallelism -}}
{{- $cpu := .Values.resources.requests.cpu -}}
{{- $memory := .Values.resources.requests.memory -}}
{{- $requests := .Values.resources.requests -}}
{{- $multiSrc := .Values.multiSource -}}
{{- $src := .Values.defaultSource -}}
{{- $burst := .Values.burst -}}
---
{{- range $job, $nb := until (int .Values.parallelism) }}
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ $type | lower }}-{{ $parallelism }}-{{ $cpu | lower }}-{{ $memory | lower }}-{{ $job }}
spec:
  parallelism: 1
  template:
    metadata:
      labels:
        role: transcoder
    spec:
      containers:
      - name: transcoder-{{ $job }}
        image: jrottenberg/ffmpeg:ubuntu
        args: [
          "-y",
          "-i", "/data/{{ if $multiSrc }}source{{ add 1 (mod 23 (add 1 (mod $parallelism (add $job 1)))) }}.mp4{{ else }}{{ $src }}{{ end }}",
          "-stats",
          "-c:v",
          "libx264",
          "-s", "1920x1080",
          "-crf", "22",
          "-profile:v", "main",
          "-pix_fmt", "yuv420p",
          "-threads", "0",
          "-f", "mp4",
          "-ac", "2",
          "-c:a", "aac",
          "-b:a", "128k",
          "-strict", "-2",
          "/data/output-{{ $job }}.mp4"
        ]
        volumeMounts:
        - mountPath: /data
          name: hostpath
        resources:
          requests:
{{ toYaml $requests | indent 12 }}
          limits:
            cpu: {{ if $burst }}{{ max (mul 2 (atoi $cpu)) 8 | quote }}{{ else }}{{ $cpu }}{{ end }}
            memory: {{ $memory }}
      restartPolicy: Never
      volumes:
      - name: hostpath
        hostPath:
          path: /mnt
---
{{- end }}

The values.yaml file that goes with this is very simple:


# Number of // tasks
parallelism: 8
# Separator name
type: bm
# Do we want several input files
# if yes, the chart will use source${i}.mp4 with up to 24 sources
multiSource: false
# If not multi source, name of the default file
defaultSource: sintel_trailer-1080p.mp4
# Do we want to burst. If yes, resource limit will double request.
burst: false
resources:
  requests:
    cpu: "4"
    memory: 8Gi
  max:
    cpu: "25"

That’s all you need. Of course, all sources are in the repo for your usage, you don’t have to copy paste this.

Creating test files

Now we need to generate a LOT of values.yaml files to cover many use cases. The reachable values will vary depending on your context. My home cluster has 6 workers with 4 cores and 32GB RAM each, so I used:

  • 1, 6, 12, 18, 24, 48, 96 and 192 concurrent jobs (up to 32/worker)
  • reverse that for the CPUs (from 3 to 0.1 in case of parallelism=192)
  • 1 to 16GB RAM

In the cloud, I had 16 core workers with 60GB RAM, so I did the tests only on 1 to 7 CPU cores per task.

I didn’t do anything clever here, just a few bash loops to generate all my tasks. They are in the repo if needed.
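For illustration, a minimal sketch of such a loop might look like this (the file naming matches what the run script below expects; the exact value ranges are up to you):

mkdir -p values
TYPE=bm
for para in 1 6 12 18 24 48 96 192; do
  for cpu in 0.5 1 2 3; do
    for memory in 1 2 4 8 16; do
      cat > values/values-${para}-${TYPE}-${cpu}-${memory}.yaml <<EOF
# generated test case: ${para} jobs, ${cpu} CPU, ${memory}Gi RAM each
parallelism: ${para}
type: ${TYPE}
multiSource: false
defaultSource: sintel_trailer-1080p.mp4
burst: false
resources:
  requests:
    cpu: "${cpu}"
    memory: ${memory}Gi
EOF
    done
  done
done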

Deploying Kubernetes

MAAS / AWS

The method to deploy on MAAS is the same one I described in my previous blog about the DIY GPU cluster. Once you have MAAS installed and Juju configured to talk to it, you can adapt and use the bundle file in src/juju/ via:

juju deploy src/juju/k8s-maas.yaml

For AWS, use the k8s-aws.yaml bundle, which specifies c4.4xlarge as the default instance type. When it's done, download the test video to each worker, then fetch the configuration for kubectl and initialize Helm with:

juju show-status kubernetes-worker-cpu --format json | \
jq --raw-output '.applications."kubernetes-worker-cpu".units | keys[]' | \
xargs -I UNIT juju ssh UNIT "sudo wget https://download.blender.org/durian/trailer/sintel_trailer-1080p.mp4 -O /mnt/sintel_trailer-1080p.mp4"
juju scp kubernetes-master/0:config ~/.kube/config
helm init

Variation for LXD

LXD on AWS is a bit special, because of the network. It breaks some of the primitives that are frequently used with Kubernetes such as the proxying of pods, which have to go through 2 layers of networking instead of 1. As a result,

  • kubectl proxy doesn’t work ootb
  • more importantly, helm doesn’t work because it consumes a proxy to the Tiller pod by default
  • However, transcoding doesn’t require network access but merely a pod doing some work on the file system, so that is not a problem.

The least expensive path I found to resolve the issue is to deploy a specific node that is NOT in LXD but a "normal" VM or machine. This node will be labeled as a control plane node, and we modify the deployments for tiller-deploy and kubernetes-dashboard to force them onto that node. Making this node small enough will ensure no transcoding ever gets scheduled on it.

I could not find a way to fully automate this, so here is a sequence of actions to run:

juju deploy src/juju/k8s-lxd-c-.yaml

This deploys the whole thing and you need to wait until it’s done for the next step. Closely monitor juju status until you see that the deployment is OK, but flannel doesn’t start (this is expected, no worries).

Then the LXD profile for each LXD node must be adjusted to allow nested containers. In the near future (roadmapped for 2.3), Juju will gain the ability to declare the profiles it wants to use for LXD hosts. But for now, we need to do that manually:

NB_CORES_PER_LXD=4 #This is the same number used above to deploy
for MACHINE in 1 2
do
./src/bin/setup-worker.sh ${MACHINE} ${NB_CORES_PER_LXD}
done

If you're watching juju status, you will see that flannel suddenly starts working. All good! Now download the configuration for kubectl, then initialize Helm with:


juju scp kubernetes-master/0:config ~/.kube/config
helm init

We need to identify the worker that is not an LXD container, then label it as our control plane node:

kubectl label $(kubectl get nodes -o name | grep -v lxd) controlPlane=true
kubectl label $(kubectl get nodes -o name | grep lxd) computePlane=true

Now this is where it becomes manual: we need to successively edit rc/monitoring-influxdb-grafana-v4, deploy/heapster-v1.2.0.1, deploy/tiller-deploy and deploy/kubernetes-dashboard, to add

nodeSelector:
  controlPlane: "true"

in the definition of the manifest. Use

kubectl edit -n kube-system rc/monitoring-influxdb-grafana-v4

After that, the cluster is ready to run!

Running transcoding jobs

Starting jobs

We have a lot of tests to run, and we do not want to spend too long managing them, so we build some simple automation around them:

cd src
TYPE=aws
CPU_LIST="1 2 3"
MEM_LIST="1 2 3"
PARA_LIST="1 4 8 12 24 48"
for cpu in ${CPU_LIST}; do
  for memory in ${MEM_LIST}; do
    for para in ${PARA_LIST}; do
      [ -f values/values-${para}-${TYPE}-${cpu}-${memory}.yaml ] && \
        { helm install transcoder --values values/values-${para}-${TYPE}-${cpu}-${memory}.yaml
          sleep 60
          while [ "$(kubectl get pods -l role=transcoder | wc -l)" -ne "0" ]; do
           sleep 15
          done
        }
     done
  done
done

This will run the tests about as fast as possible. Adjust the variables to fit your local environment.

First approach to Scheduling

Without any tuning or configuration, Kubernetes does a decent job of spreading the load over the hosts. Essentially, all jobs being equal, it spreads them round-robin over all nodes. Below is what we observe for a concurrency of 12.

NAME READY STATUS RESTARTS AGE IP NODE
bm-12-1-2gi-0-9j3sh 1/1 Running 0 9m 10.1.70.162 node06
bm-12-1-2gi-1-39fh4 1/1 Running 0 9m 10.1.65.210 node07
bm-12-1-2gi-11-261f0 1/1 Running 0 9m 10.1.22.165 node01
bm-12-1-2gi-2-1gb08 1/1 Running 0 9m 10.1.40.159 node05
bm-12-1-2gi-3-ltjx6 1/1 Running 0 9m 10.1.101.147 node04
bm-12-1-2gi-5-6xcp3 1/1 Running 0 9m 10.1.22.164 node01
bm-12-1-2gi-6-3sm8f 1/1 Running 0 9m 10.1.65.211 node07
bm-12-1-2gi-7-4mpxl 1/1 Running 0 9m 10.1.40.158 node05
bm-12-1-2gi-8-29mgd 1/1 Running 0 9m 10.1.101.146 node04
bm-12-1-2gi-9-mwzhq 1/1 Running 0 9m 10.1.70.163 node06

The same spread is also realized for larger concurrencies, and at 192 we observe 32 jobs per host in every case. Below are some screenshots of KubeUI and Grafana from my tests:

  • Jobs in parallel: KubeUI showing 192 concurrent pods
  • Half a day testing: compute cycles at different concurrencies
  • LXD fencing CPUs: LXD pinning Kubernetes workers to CPUs
  • K8s full usage: ouch! About 100% on the whole machine

Collecting and aggregating results

Raw Logs

This is where it becomes a bit tricky. We could use an ELK stack and extract the logs there, but I couldn’t find a way to make it really easy to measure our KPIs.
Looking at what Docker does in terms of logging, you need to go onto each machine and look into /var/lib/docker/containers/<container-id>/<container-id>-json.log
Here we can see that each job generates exactly 82 lines of log, but only some of them are interesting:

  • First line: gives us the start time of the log
{"log":"ffmpeg version 3.1.2 Copyright © 2000-2016 the FFmpeg developers\n","stream":"stderr","time":"2017-03-17T10:24:35.927368842Z"}
  • Line 13: name of the source
{"log":"Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/sintel_trailer-1080p.mp4':\n","stream":"stderr","time":"2017-03-17T10:24:35.932373152Z"}
  • Last line: end of transcoding timestamp
{"log":"[aac @ 0x3a99c60] Qavg: 658.896\n","stream":"stderr","time":"2017-03-17T10:39:13.956095233Z"}

For advanced performance geeks, line 64 also gives us the transcode speed per frame, which can help profile the complexity of the video. For now, we don’t really need that.

Mapping to jobs

The raw log name is only a Docker UUID, which does not help us very much to understand which job it relates to. Kubernetes gracefully creates links in /var/log/containers/ mapping the pod names to the Docker UUID:

bm-1-0.8-1gi-0-t8fs5_default_transcoder-0-a39fb10555134677defc6898addefe3e4b6b720e432b7d4de24ff8d1089aac3a.log

So here is what we do:
  1. Collect the list of logs on each host:
for i in $(seq 0 1 ${MAX_NODE_ID}); do
  [ -d stats/node0${i} ] || mkdir -p stats/node0${i}
  juju ssh kubernetes-worker-cpu/${i} "ls /var/log/containers | grep -v POD | grep -v 'kube-system'" > stats/node0${i}/links.txt
  juju ssh kubernetes-worker-cpu/${i} "sudo tar cfz logs.tgz /var/lib/docker/containers"
  juju scp kubernetes-worker-cpu/${i}:logs.tgz stats/node0${i}/
  cd stats/node0${i}/
  tar xfz logs.tgz --strip-components=5 -C ./
  rm -rf config.v2.json host* resolv.conf* logs.tgz var shm
  cd ../..
done

2. Extract important log lines (adapt per environment for the number of nodes…)

ENVIRONMENT=lxd
MAX_NODE_ID=1
echo "Host,Type,Concurrency,CPU,Memory,JobID,PodID,JobPodID,DockerID,TimeIn,TimeOut,Source" | tee ../db-${ENVIRONMENT}.csv
for node in $(seq 0 1 ${MAX_NODE_ID}); do
  cd node0${node}
  while read line; do
    echo "processing ${line}"
    NODE="node0${node}"
    CSV_LINE="$(echo ${line} | head -c-5 | tr '-' ',')" # note: it's -c-6 for logs from bare metal or aws, -c-5 for lxd
    UUID="$(echo ${CSV_LINE} | cut -f8 -d',')"
    JSON="$(sed -ne '1p' -ne '13p' -ne '82p' ${UUID}-json.log)"
    TIME_IN="$(echo $JSON | jq --raw-output '.time' | head -n1 | xargs -I {} date --date='{}' +%s)"
    TIME_OUT="$(echo $JSON | jq --raw-output '.time' | tail -n1 | xargs -I {} date --date='{}' +%s)"
    SOURCE=$(echo $JSON | grep from | cut -f2 -d"'")
    echo "${NODE},${CSV_LINE},${TIME_IN},${TIME_OUT},${SOURCE}" | tee -a ../../db-${ENVIRONMENT}.csv
  done < links.txt
  cd ..
done

Once we have all the results, we load them into a Google Spreadsheet and look into the results…

Results Analysis

Impact of Memory

Once the allocation is above what is necessary for ffmpeg to transcode a video, memory is a non-impacting variable to a first approximation. However, at a finer level we can see a slight increase in performance, in the range of 0.5 to 1%, between 1 and 4GB allocated.
Nevertheless, this factor was not taken into account.

Influence of RAM: RAM does not impact performance (or only marginally)

Impact of CPU allocation & Pinning

Regardless of the deployment method (AWS or Bare Metal), there is a change in behavior when allocating less or more than 1 CPU “equivalent”.

Being below or above the line

Running CPU allocation under 1 gives the best consistency across the board. The graph shows that the variations are contained, and what we see is an average variation of less than 4% in performance.


Low CPU per pod gives low influence of concurrency (running jobs with a CPU request below 1)

Interestingly, the heatmap shows that the worst performance is reached when (Concurrency * CPU count) ~ 1. I don't know how to explain that behavior. Ideas?

Heat map for CPU lower than 1: if the total CPU is about 1, the performance is the worst.

Being above the line

As soon as you allocate more than one CPU, concurrency directly impacts performance. Regardless of the allocation there is an impact, with a concurrency of 3.5 leading to about a 10 to 15% penalty. Using more workers with fewer cores will increase the impact, up to 40-50% at high concurrency.

As the graphs show, not all concurrencies are made equal. The graphs below show duration as a function of concurrency for various setups.

AWS with or without LXD, at 2 cores per job
AWS with or without LXD, at 4 and 5 cores per job

When concurrency is low and the performance is well profiled, slicing hosts thanks to LXD CPU pinning is always a valid strategy.

By default, LXD CPU-pinning in this context will systematically outperform the native scheduling of Docker and Kubernetes. It seems a concurrency of 2.5 per host is the point where Kubernetes allocation becomes more efficient than forcing the spread via LXD.

However, unbounding CPU limits for the jobs will let Kubernetes use everything it can at any point in time, and result in an overall better performance.

When using this last strategy, the performance is the same regardless of the number of cores requested for the jobs. The below graph summarizes all results:

AWS duration as a function of concurrency. All results: unbounding CPU cores homogenizes performance.

Impact of concurrency on individual performance

Concurrency impacts performance. The below table shows the % of performance lost because of concurrency, for various setups.

Performance penalty as a function of concurrency: performance is impacted by 10 to 20% when concurrency is 3 or more.

Conclusion

In the context of transcoding or another CPU intensive task,

  • If you always allocate less than 1 CPU to your pods, concurrency doesn’t impact CPU-bound performance; Still, be careful about the other aspects. Our use case doesn’t depend on memory or disk IO, yours could.
  • If you know in advance your max concurrency and it is not too high, then adding more workers with LXD and CPU pinning them always gets you better performance than native scheduling via Docker. This has other interesting properties, such as dynamic resizing of workers with no downtime, and very fast provisioning of new workers. Essentially, you get a highly elastic cluster for the same number of physical nodes. Pretty awesome.
  • The winning strategy is always to super provision CPU limits to the max so that every bit of performance is allocated instantly to your pods. Of course, this cannot work in every environment, so be careful when using this, and test if it fits with your use case before applying in production.

These results are in AWS, where there is a hypervisor between the metal and the units. I am waiting for hardware with enough cores to complete the task. If you have hardware you’d like to throw at this, be my guest and I’ll help you run the tests.

Finally, and to open up a discussion, a next step could also be to use GPUs to perform this same task. The limitation will be the number of GPUs available in the cluster. I'm waiting for some new nVidia GPUs and Dell hardware; hopefully I'll be able to put this to the test.

There are some unknowns that I wasn’t able to sort out. I made the result dataset of ~3000 jobs open here, so you can run your own analysis! Let me know if you find anything interesting!

on March 27, 2017 04:46 PM

Globe Picture Andrius Aleksandravičius

In the early days of High Performance Computing (HPC), ‘Big Data’ was just called ‘Data’ and organizations spent millions of dollars to buy mainframes or large data processing/warehousing systems just to gain incremental improvements in the manipulation of information. Today, IT Pros and systems administrators are under more pressure than ever to make the most of these legacy bare metal hardware investments. However, with more and more compute workloads moving to the public cloud, and the natural pressure to do more with less, IT pros are finding it difficult to find balance with existing infrastructure and the new realities of the cloud.  Until now, these professionals have not found the balance needed to achieve more efficiency while using what they already have in-house.

Businesses have traditionally made significant investments in hardware. However, as the cloud has disrupted traditional business models, IT Pros needed to find a way to combine the flexibility of the cloud with the power and security of their bare metal servers or internal hardware infrastructure. Canonical's MAAS (Metal as a Service) solution allows IT organizations to discover, commission, and (re)deploy bare metal servers within most operating system environments like Windows, Linux, etc. As new services and applications are deployed, MAAS can be used to dynamically re-allocate physical resources to match workload requirements. This means organizations can deploy both virtual and physical machines across multiple architectures and virtual environments, at scale.

MAAS improves the lives of IT Pros!

MAAS was designed to make complex hardware deployments faster, more efficient, and with more flexibility. One of the key areas where MAAS has found significant success is in High Performance Computing (HPC) and Big Data. HPC relies on aggregating computing power to solve large data-centric problems in subjects like banking, healthcare, engineering, business, science, etc. Many large organizations are leveraging MAAS to modernize their OS deployment toolchain (a set of tool integrations that support the development, deployment, operations tasks) and lower server provisioning times.

These organizations found their tools were outdated, thereby prohibiting them from deploying large numbers of servers. Server deployments were slow, modular/monolithic, and could not integrate with tools, drivers, and APIs. By deploying MAAS they were able to speed up their server deployment times as well as integrate with their orchestration platform and configuration management tools like Chef, Ansible, and Puppet, or software modeling solutions like Canonical's Juju.

For example, financial institutions are using MAAS to deploy Windows servers in their data centre during business hours to support applications and employee systems. Once the day is done, they use MAAS to redeploy the data centre server infrastructure to support Ubuntu Servers and perform batch processing and transaction settlement for the day's activities. In the traditional HPC world, these processes would take days or weeks to perform, but with MAAS, these organizations are improving their efficiency and reducing infrastructure costs by using existing hardware, while gaining the ability to close out the day's transactions faster and more efficiently, thus giving financial executives the ability to spend more time with their families and bragging rights at cocktail parties.

HPC is just one great use case for MAAS where companies can recognize immediate value from their bare metal hardware investments. Over the next few weeks we will go deeper into the various use cases for MAAS, but in the meantime, we invite you to try MAAS for yourself on any of the major public clouds using Conjure Up.

If you would like to learn more about MAAS or see a demo, contact us directly.

on March 27, 2017 04:29 PM
SREs Needed (Berlin Area)

We are looking for skilled people for SRE / DevOPS work.

So without further ado, here is the job offering :)

SRE / DevOps

Do you want to be part of an engineering team that focuses on building solutions that maximize the use of emerging technologies to transform our business and achieve superior value and scalability? Do you want a career opportunity that combines your skills as an engineer with a passion for video gaming? Are you fascinated by the technologies behind the internet and cloud computing? If so, join us!

As a part of Sony Computer Entertainment, Gaikai is leading the cloud gaming revolution, putting console-quality video games on any device, from TVs to consoles to mobile devices and beyond.

Our SRE's focus is on three things: overall ownership of production, production code quality, and deployments.

The successful candidate will be self-directed and able to participate in the decision making process at various levels.

We expect our SREs to have opinions on the state of our service, and to provide critical feedback during various phases of the operational lifecycle. We are engaged throughout the S/W development lifecycle, ensuring the operational readiness and stability of our service.

Requirements

Minimum of 5+ years working experience in Software Development and/or Linux Systems Administration role.
Strong interpersonal, written and verbal communication skills.
Available to participate in a scheduled on-call rotation.

Skills & Knowledge

Proficient as a Linux Production Systems Engineer, with experience managing large scale Web Services infrastructure.
Development experience in one or more of the following programming languages:

  • Python (preferred)
  • Bash, Java, Node.js, C++ or Ruby

In addition, experience with one or more of the following:

  • NoSQL at scale (eg Hadoop, Mongo clusters, and/or sharded Redis)
  • Event Aggregation technologies. (eg. ElasticSearch)
  • Monitoring & Alerting, and Incident Management toolsets
  • Virtual infrastructure (deployment and management) at scale
  • Release Engineering (Package management and distribution at scale)
  • S/W Performance analysis and load testing (QA or SDET experience: a plus)

Location

  • Berlin, Germany

Who is hiring?

  • Gaikai / Sony Interactive Entertainment

When you are on LinkedIn, you can directly go and apply for this job.
If you would like to, though you are not required to, please mention me as a referral.

on March 27, 2017 12:19 PM

Hi, Mesa 17.0.2 backports can now be installed from the updates ppa. Have fun testing, and feel free to file any bugs you find using ‘apport-bug mesa’.
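In case you haven't enabled it yet, a typical way to pull the backports in looks like this (assuming the "updates ppa" refers to ppa:ubuntu-x-swat/updates; adjust if your setup differs):

sudo add-apt-repository ppa:ubuntu-x-swat/updates
sudo apt update
sudo apt full-upgrade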


on March 27, 2017 08:13 AM

March 26, 2017

Spring is here and the release of Ubuntu 17.04 is just around the corner. I've been using it for two weeks and I can't say I'm disappointed! But one new feature that never disappoints me is the appearance of the community wallpapers that were selected from the Ubuntu Free Culture Showcase!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.04, 96 images were submitted to the Ubuntu 17.04 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

But now the results are in: the top choices were voted on by certain members of the Ubuntu community, and I'm proud to announce the winning images that will be included in Ubuntu 17.04:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade or install Ubuntu 17.04 on April 13th.

on March 26, 2017 08:35 AM

March 25, 2017

What I've Been Up To

Simon Quigley

It's been a long time since I've blogged about anything. I've been busy with lots of different things. Here's what I've been up to.

Lubuntu

First off, let's talk about Lubuntu. A couple of different actions (or lack thereof) have happened.

Release Management

Walter Lapchynski recently passed on the position of Lubuntu Release Manager to me. He has been my mentor ever since I joined Ubuntu/Lubuntu in July of 2015, and I'm honored to be able to step up to this role.

Here's what I've done as Release Manager from then to today:

Sunsetted Lubuntu PowerPC daily builds for Lubuntu Zesty Zapus.

This was something we had been looking at for a while, and it just happened to happen immediately after I became Release Manager. It wasn't really our hand pushing this forward, per se. The Ubuntu Technical Board voted to end the PowerPC architecture in the Ubuntu archive for Zesty before this, and I thought it was a good time to carry this forward for Lubuntu.

Helped get Zesty Zapus Beta 1 out the door for Ubuntu and for Lubuntu as well.

Discussed Firefox and ALSA's future in Ubuntu.

When Firefox 52 was released into xenial-updates, it broke Firefox's sound functionality for Lubuntu 16.04 users, as Lubuntu 16.04 LTS uses ALSA, and, despite what a certain Ubuntu site says, this was because ALSA support was disabled in the default build of Firefox, not completely removed. I won't get into it (I don't want to start a flame war), but this wasn't really something Lubuntu messed up, as the original title (and content) of the article ("Lubuntu users are left with no sound after upgrading Firefox") implied.

I recently brought this up for discussion (I didn't know that part just mentioned when I sent the email linked above), and for the time being it will be re-enabled in the Ubuntu build. As we continue to update to future Firefox releases, this will result in bitrot, so eventually we need to switch off of Firefox in the future.

I'm personally against switching to Chromium, as it's not lightweight and it's a bit bloated. I have also recently started using Firefox, and it's been a lot faster for me than Chromium was. But, that's a discussion for another day, and within the next month, I will most likely bring it up for discussion on the lubuntu-devel mailing list. I'll probably write another blog post when I send that email, but we'll see.

Got Zesty Zapus Final Beta/Beta 2 out the door for Lubuntu.

LXQt

Lubuntu 17.04 will not ship with LXQt.

That's basically the bottom line here. ;)

I've been working to start a project on Launchpad that will allow me to upload packages to a PPA and have it build an image from that, but I'm still waiting to hear back on a few things for that.

You may be asking, "So why don't we have LXQt yet?" The answer to that question is, I've been busy with a new job, school, and many other things in life and haven't gotten the chance to heavily work on it much. I have a plan in mind, but it all depends on my free time from this point on.

That being said, if you want to get involved, please don't be afraid to send an email to the Lubuntu developers mailing list. We're all really friendly, and we'll be very willing to get you started, no matter your skill level. This is exactly the reason why LXQt is taking so long. It's because I'm pretty much the only one working on this specific subproject, and I don't have all the time in the world.

Donations

While this isn't specifically highlighting any work I've done in this area, I'd like to provide some information on this.

Lubuntu has been looking for a way to accept donations for a long time. Donations to Lubuntu would help fund:

  • Work on Lubuntu (obviously).
  • Work on upstream projects we use and install by default (LXQt, for example, in the future).
  • Travel to conferences for Lubuntu team members.
  • Much more...

A goal that I specifically have with this is to be as transparent as possible about any donations we receive and specifically where they go. But, we have to get it set up first.

While I am still a minor in the country I reside and (most likely) cannot make any legal decisions about funds (yet), Walter has been looking for a lawyer to help sort out something along the lines of a "Lubuntu Foundation" (or something like that) to manage the funds in a way that doesn't give only one person control. So if you know a lawyer (or are one) that would be willing to help us set that up, please contact me or Walter when you can.

Ubuntu Weekly Newsletter

Before Issue 500 of the Ubuntu Weekly Newsletter, Elizabeth K. Joseph (Lyz) was in the driver's seat of the team. She helped get every release out on time every week without fail (with the exception of two-week issues, but that's irrelevant right now). Before I go on, I just want to say a big thank you to Lyz. Your contributions were really appreciated. :)

She had taken the time to show me not only how to write summaries in the very beginning, but how to edit, and even publish a whole issue. I'm very grateful for the time she spent with me, and I can't thank her enough.

Fast forward to 501, I ended up stepping up to get the email sent to summary writers and ultimately the whole issue published. I was nervous, as I had never published an issue on my own (Lyz and I had always split the tasks), but I successfully pressed the right buttons and got it out. Before publishing, I had some help from Paul White (his last issue contributing, thank you as well) and others to get summaries done and the issue edited.

Since then, I've pretty much stepped up to fill in the gaps for Lyz. I wouldn't necessarily consider anything official yet, but for now, this is where I'll stay.

But, it's tough to get issues of UWN out. I have a new respect for everything Lyz did and all of the hard work she put into each issue. This is a short description of what happens each week:

  • Collect issues during the week, put it on the Google Document.
  • On Friday, clean up the doc and send out to summary writers.
  • Over the weekend, people write summaries.
  • On Sunday, it's copied to the wiki, stats are added, and it's sent out to editors.
  • On Monday, it's published.

Wash, rinse, and repeat.

It's incredibly easy to write summaries. In fact, the email was just sent out earlier to summary writers. If you want to take a minute or two (that's all it takes for contributing a summary) to help us out, hop on to the Google Document, refer to the style guidelines linked at the top, and help us out. Then, when you're done, put your name on the bottom if you want to be credited. Every little bit helps!

Other things

About this website

  • I think I can finally implement a comments section so people can leave easy feedback. This is a huge step forward, given that I write the HTML for this website completely from scratch.
  • I wrote a hacky Python script that I can use for writing blog posts. I can just write everything in Markdown, and it will do all the magic bits. I manually inspect it, then just git add, git commit, and git push it.
  • I moved the website to GitLab, and with the help of Thomas Ward, got HTTPS working.

For the future

  • I've been inspired by some of the Debian people blogging about their monthly contributions to FLOSS, so I'm thinking that's what I'll do. It'll be interesting to see what I actually do in a month's time... who knows what I'll find out? :)

So that's what I've been up to. :)

on March 25, 2017 01:27 AM

March 24, 2017

This is the first in a series of posts about the Ubuntu Server Team’s git importer (usd). There is a lot to discuss: why it’s necessary, the algorithm, using the tooling for doing merges, using the tooling for contributing one-off fixes, etc. But for this post, I’m just going to give a quick overview of what’s available and will follow-up in future posts with those details.

The importer was first announced here and then a second announcement was made here. But both those posts are pretty out-of-date now… I have written a relatively current guide to merging which does talk about the tooling here, and much of that content will be re-covered in future blog posts.

The tooling is browse-able here and can be obtained via

git clone https://git.launchpad.net/usd-importer

This will provide a usd command in the local repository’s bin directory. That command resembles git as being the launching point for interacting with imported trees — both for importing them and for using them:

usage: usd [-h] [-P PARENTFILE] [-L PULLFILE]
 build|build-source|clone|import|merge|tag ...

Ubuntu Server Dev git tool

positional arguments:
 build|build-source|clone|import|merge|tag
 
 build - Build a usd-cloned tree with dpkg-buildpackage
 build-source - Build a source package and changes file
 clone - Clone package to a directory
 import - Update a launchpad git tree based upon the state of the Ubuntu and Debian archives
 merge - Given a usd-import'd tree, assist with an Ubuntu merge
 tag - Given a usd-import'd tree, tag a commit respecting DEP14

...

More information is available at https://wiki.ubuntu.com/UbuntuDevelopment/Merging/GitWorkflow.

You can run usd locally without arguments to view the full help.

Imported trees currently live here. This will probably change in the future as we work with the Launchpad team to integrate the functionality. As you can see, we have 411 repositories currently (as of this post) and that’s a consequence of having the importer running automatically. Every 20 minutes or so, the usd-cron script checks if there are any new publishes of source packages listed in usd-cron-packages.txt in Debian or Ubuntu and runs usd import on them, if so.

I think that’s enough for the first post! Just browsing the code and the imported trees is pretty interesting (running gitk on an imported repository gives you a very interesting visual of Ubuntu development). I’ll dig into details in the next post (probably of many).
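For the curious, a quick way to get that gitk view looks something like this (the package name is just an example, and the exact clone invocation is an assumption based on the help output above):

usd clone open-vm-tools    # assumes usd-importer/bin is on your PATH
cd open-vm-tools
gitk --all &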


on March 24, 2017 11:40 PM

As posted on the ubuntu-server mailing list we had our first Ubuntu Server Bug Squashing Day (USBSD) on Wednesday, March 22, 2017. While we may not have had a large community showing, the event was still a success and there is momentum to make this a regular event going forward (more on that below…). This post is about the goals behind USBSD.

[Throughout the following I will probably refer to users by their IRC nicks. When I know their real names, I will try and use them as well at least once so real-person association is available.]

The intent of the USBSD is two-fold:

  1. The Server Team has a triage rotation for all bugs filed against packages in main, which is purely an attempt to provide adequate responses to 'important' bugs — ensuring we have 'good' bug reports that are actionable and then to put them on to the Server Team's queue (via subscribing ~ubuntu-server). The goal for triage is not to solve the bugs, it's simply to respond and put them on the 'to-fix' list (which is visible here). But we don't want that list to just grow without bound (what good is it to respond to a bug but never fix it?), so we need to dedicate some time to working to get a bug to closure (or at least to the upload/sponsorship stage).
  2. Encourage community-driven ownership of bug-fixes and packages. While Robie Basak (rbasak), Christian Ehrhardt (cpaelzer), Josh Powers (powersj) and myself (nacc) all work for Canonical on the Server Team on the release side of things (meaning merges, bug-fixes, etc), there simply is not enough time in each cycle for the four of us alone to address every bug filed. And it's not to say that the only developers working on packages an Ubuntu Server user cares about are the four of us. But from a coordination perspective, for every package in main that is 'important' to Ubuntu Server, we are often at least involved. I do not want to diminish by any means any contribution to Ubuntu Server, but it does feel like the broader community contributions have slowed down with recent releases. That might be a good thing ™ in that packages don't have as many bugs, or it might just be that bugs are getting filed and no one is working on them. By improving our tooling and processes around bugs, we can lower barriers to entry for new contributors and ideally grow ownership and knowledge of packages relevant to Ubuntu Server.

That is a rather long-winded introduction to the goals. Did we meet them?

To the first point, it was a positive experience for those of us working on bugs on the day to have a dedicated place to coordinate and discuss solutions (on IRC at FreeNode/#ubuntu-server as well as on the Etherpad we used [requires authentication and membership in the ~ubuntu-etherpad Launchpad team]). And I believe a handful of bugs were driven to completion.

To the second point, I was not pinged much at all (if at all) during the US business day on USBSD #1. That was a bit disappointing. But I saw that cpaelzer helped a few different users with several classes of bugs and that was awesome to wake up to! He also did a great job of documenting his bugwork/interactions on the Etherpad.

Follow-on posts will talk about ways we can improve and hopefully document some patterns for bugwork that we discover via USBSDs.

In the meanwhile, we’re tentatively scheduling USBSD #2 for April 5, 2017!


on March 24, 2017 11:14 PM
Hi all, I have an awesome laptop I bought from my son, a hardcore gamer. So used, but also very beefy and well-cared-for. Lately, however, it has begun to freeze, by which I mean: the screen is not updated, and no keyboard inputs are accepted. So I can't even REISUB; the only cure is the power button.

I like to leave my laptop running overnight for a few reasons -- to get IRC posts while I sleep, to serve *ubuntu ISO torrents, and to run Folding@Home.

Attempting to cure the freezing, I've updated my graphics driver, rolled back to an older kernel, removed my beloved Folding@Home application, turned on the fan overnight, all to no avail. After adding lm-sensors and such, it didn't seem likely to be overheating, but I'd like to be sure about that.

Lately I turned off screen dimming at night and left a konsole window on the desktop running `top`. This morning I found a freeze again, with nothing apparent in the top readout:


So I went looking on the internet and found this super post: Using KSysGuard: System monitor tool for KDE. The first problem was that when I hit Control+Escape, I could not see the System Load tab he mentioned or any way to create a custom tab. However, when I started KSysGuard from the command line, it matched the screenshots in the blog.

Here is my custom tab:


So tonight I'll leave that on my screen along with konsole running `top` and see if there is any more useful information.
on March 24, 2017 09:55 PM

As some of you will know, I am a consultant that helps companies build internal and external communities, collaborative workflow, and teams. Like any consultant, I have different leads that I need to manage, I convert those leads into opportunities, and then I need to follow up and convert them into clients.

Managing my time is one of the most critical elements of what I do. I want to maximize my time to be as valuable as possible, so optimizing this process is key. Thus, the choice of CRM has been an important one. I started with Insightly, but it lacked a key requirement: integration.

I hate duplicating effort. I spend the majority of my day living in email, so when a conversation kicks off as a lead or opportunity, I want to magically move that from my email to the CRM. I want to be able to see and associate conversations from my email in the CRM. I want to be able to see calendar events in my CRM. Most importantly, I don’t want to be duplicating content from one place to another. Sure, it might not take much time, but the reality is that I am just going to end up not doing it.

Evaluations

So, I evaluated a few different platforms, with a strong bias to SalesforceIQ. The main attraction there was the tight integration with my email. The problem with SalesforceIQ is that it is expensive, it has limited integration beyond email, and it gets significantly more expensive when you want more control over your pipeline and reporting. SalesforceIQ has the notion of “lists” where each is kind of like a filtered spreadsheet view. For the basic plan you get one list, but beyond that you have to go up a plan in which I get more lists, but it also gets much more expensive.

As I courted different solutions I stumbled across ProsperWorks. I had never heard of it, but there were a number of features that I was attracted to.

ProsperWorks

Firstly, ProsperWorks really focuses on tight integration with Google services. Now, a big chunk of my business is using Google services. Prosperworks integrates with Gmail, but also Google Calendar, Google Docs, and other services.

They ship a Gmail plugin which makes it simple to click on a contact and add them to ProsperWorks. You can then create an opportunity from that contact with a single click. Again, this is from my email: this immediately offers an advantage to me.

ProsperWorks CRM

Yep, that’s not my Inbox. It is an image yanked off the Internet.

When viewing each opportunity, ProsperWorks will then show associated Google Calendar events and I can easily attach Google Docs documents or other documents there too. The opportunity is presented as a timeline with email conversations listed there, but then integrated note-taking for meetings, and other elements. It makes it easy to summarize the scope of the deal, add the value, and add all related material. Also, adding additional parties to the deal is simple because ProsperWorks knows about your contacts as it sucks it up from your Gmail.

While the contact management piece is less important to me, it is also nice that it brings in related accounts for each contact automatically such as Twitter, LinkedIn, pictures, and more. Again, this all reduces the time I need to spend screwing around in a CRM.

Managing opportunities across the pipeline is simple too. I can define my own stages and then it basically operates like Trello and you just drag cards from one stage to another. I love this. No more selecting drop down boxes and having to save contacts. I like how ProsperWorks just gets out of my way and lets me focus on action.

…also not my pipeline view. Thanks again Google Images!

I also love that I can order these stages based on “inactivity”. Because ProsperWorks integrates email into each opportunity, it knows how many inactive days there has been since I engaged with an opportunity. This means I can (a) sort my opportunities based on inactivity so I can keep on top of them easily, and (b) I can set reminders to add tasks when I need to follow up.

ProsperWorks CRM

The focus on inactivity is hugely helpful when managing lots of concurrent opportunities.

As I was evaluating ProsperWorks, there was one additional element that really clinched it for me: the design.

ProsperWorks looks and feels like a Google application. It uses material design, and it is sleek and modern. It doesn’t just look good, but it is smartly designed in terms of user interaction. It is abundantly clear that whoever does the interaction and UX design at ProsperWorks is doing an awesome job, and I hope someone there cuts this paragraph out and shows it to them. If they do, you kick ass!

Of course, ProsperWorks does a load of other stuff that is helpful for teams, but I am primarily assessing this from a sole consultant’s perspective. In the end, I pulled the trigger and subscribed, and I am delighted that I did. It provides a great service, is more cost efficient than the alternatives, provides an integrated solution, and the company looks like they are doing neat things.

Feature Requests

While I dig ProsperWorks, there are some things I would love to encourage the company to focus on. So, ProsperWorks folks, if you are reading this, I would love to see you focus on the following. If some of these already exist, let me know and I will update this post. Consider me a resource here: happy to talk to you about these ideas if it helps.

Wider Google Calendar integration

Currently the gcal integration is pretty neat. One limitation though is that it depends on a gmail.com domain. As such, calendar events where someone invites my jonobacon.com email doesn’t automatically get added to the opportunity (and dashboard). It would be great to be able to associate another email address with an account (e.g. a gmail.com and jonobacon.com address) so when calendar events have either or both of those addresses they are sucked into opportunities. It would also be nice to select which calendars are viewed: I use different calendars for different things (e.g. one calendar for booked work, one for prospect meetings, one for personal etc). Feature Request Link

It would also be great to have ProsperWorks ease the scheduling of calendar meetings in available slots. I want to be able to talk to a client about scheduling a call, click a button in the opportunity, and have ProsperWorks suggest four different options for call times; I can select which ones I am interested in and then offer these times to the client, where they can pick one. ProsperWorks knows my calendar, so this should be doable, and it would be hugely helpful. Feature Request Link

Improve the project management capabilities

I have a dream. I want my CRM to also offer simple project management capabilities. ProsperWorks does have a ‘projects’ view, but I am unclear on the point of it.

What I would love to see is simple project tracking which integrates (a) the ability to set milestones with deadlines and key deliverables, and (b) Objectives and Key Results (OKRs). This would be huge: I could agree on a set of work complete with deliverables as part of an opportunity, and then with a single click be able to turn this into a project where the milestones would be added and I could assign tasks, track notes, and even display a burndown chart to see how on track I am within a project. Feature Request Link

This doesn’t need to be a huge project management system, just a simple way of adding milestones, their child tasks, tracking deliverables, and managing work that leads up to those deliverables. Even if ProsperWorks just adds simple Evernote functionality where I can attach a bunch of notes to a client, this would be hugely helpful.

Optimize or Integrate Task Tracking

Tracking tasks is an important part of my work. The gold standard for task tracking is Wunderlist. It makes it simple to add tasks (not all tasks need deadlines), and I can access them from anywhere.

I would love ProsperWorks to either offer that simplicity of task tracking (hit a key, whack in a title for a task, and optionally add a deadline instead of picking an arbitrary deadline that it nags me about later), or integrate with Wunderlist directly. Feature Request Link

Dashboard Configurability

I want my CRM dashboard to be something I look at every day. I want it to tell me what calendar events I have today, which opportunities I need to follow up with, what tasks I need to complete, and how my overall pipeline is doing. ProsperWorks does some of this, but doesn’t allow me to configure this view. For example, I can’t get rid of the ‘Invite Team Members’ box, which is entirely irrelevant to me as an individual consultant. Feature Request Link

So, all in all, nice work, ProsperWorks! I love what you are doing, and I love how you are innovating in this space. Consider me a resource: I want to see you succeed!

UPDATE: Updated with feature request links.

The post My Move to ProsperWorks CRM and Feature Requests appeared first on Jono Bacon.

on March 24, 2017 05:13 PM
After a few improvements in uNav I'm so proud of the current version, especially with the latest feature.

But an image will explain it better. You'll choose your transport mode and the orange line is the public transport :))

New uNav 0.67
Enjoy the freedom in your Ubuntu Phone or tablet!
on March 24, 2017 03:03 PM

C++ Cheat Sheet

Luis de Bethencourt

I spend most of my time writing and reading C code, but every once in a while I get to play with a C++ project and find myself doing frequent reference checks to cppreference.com. I wrote myself the most concise cheat sheet I could that still shaved off the majority of those quick checks. Maybe it helps other fellow programmers who occasionally dabble with C++.

class ClassName {
  int priv_member;  // private by default
protected:
  int protect_member;
public:
  ClassName(); // constructor
  int get_priv_mem();  // just prototype of func
  virtual ~ClassName() {} // destructor
};

int ClassName::get_priv_mem() {  // define via scope
  return priv_member;
}

class ChildName : public ClassName, public CanDoMult {
public:
  ChildName() {
    protect_member = 0;
  } ...
};

class Square {
  friend class Rectangle; ... // can access private members
};


Containers: container_type<int>
 list -> linked list
  front(), back(), begin(), end(), {push/pop}_{front/back}(), insert(), erase()
 deque -> double ended queue
  [], {push/pop}_{front/back}(), insert(), erase(), front(), back(), begin()
 queue/stack -> adaptors over deque
  push(), pop(), size(), empty()
  front(), back() <- queue
  top() <- stack
 unordered_map -> hashtable
  [], at(), begin(), end(), insert(), erase(), count(), empty(), size()
 vector -> dynamic array
  [], at(), front(), back(), {push/pop}_back, insert(), erase(), size()
 map -> tree
  [], at(), insert(), erase(), begin(), end(), size(), empty(), find(), count()

 unordered_set -> hashtable just keys
 set -> tree just keys
on March 24, 2017 01:18 PM

Up to now, Linux packages on mono-project.com have come in two flavours – RPM built for CentOS 7 (and RHEL 7), and .deb built for Debian 7. Universal packages that work on the named distributions, and anything newer.

Except that’s not entirely true.

Firstly, there have been “compatibility repositories” users need to add, to deal with ABI changes in libtiff, libjpeg, and Apache, since Debian 7. Then there’s the packages for ARM64 and PPC64el – neither of those architectures is available in Debian 7, so they’re published in the 7 repo but actually built on 8.

A large reason for this is a limitation in our package publishing pipeline – apt only allows one package per version-architecture combination in a repository at once, so I can’t have, say, 4.8.0.520-0xamarin1 built for AMD64 on both Debian 7 and Ubuntu 16.04.

We’ve been working hard on a new package build/publish pipeline, which can properly support multiple distributions, based on Jenkins Pipeline. This new packaging system also resolves longstanding issues such as “can’t really build anything except Mono” and “Architecture: All packages still get built on Jo’s laptop, with no public build logs”.

So, here’s the old build matrix:

Distribution   Architectures
Debian 7       ARM hard float, ARM soft float, ARM64 (actually Debian 8), AMD64, i386, PPC64el (actually Debian 8)
CentOS 7       AMD64

And here’s the new one:

Distribution   Architectures
Debian 7       ARM hard float (v7), ARM soft float, AMD64, i386
Debian 8       ARM hard float (v7), ARM soft float, ARM64, AMD64, i386, PPC64el
Raspbian 8     ARM hard float (v6)
Ubuntu 14.04   ARM hard float (v7), ARM64, AMD64, i386, PPC64el
Ubuntu 16.04   ARM hard float (v7), ARM64, AMD64, i386, PPC64el
CentOS 6       AMD64, i386
CentOS 7       AMD64

The compatibility repositories will no longer be needed on recent Ubuntu or Debian – just use the right repository for your system. If your distribution isn’t listed… sorry, but we need to draw a line somewhere on support, and the distributions listed here are based on heavy analysis of our web server logs and bug requests.

You’ll want to change your package manager repositories to reflect your system more accurately, once Mono vNext is published. We’re debating some kind of automated handling of this, but I’m loath to touch users’ sources.list without their knowledge.
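
For illustration, here is roughly what the sources.list change looks like. This is only a sketch: the new-style suite name below is a placeholder, so check the Mono download page for the exact entry for your system once vNext is published.

# Old-style entry, built on Debian 7 and shared by everything newer:
deb http://download.mono-project.com/repo/debian wheezy main

# New-style entry matching your actual distribution (the suite name is a
# placeholder -- use the one listed on the download page for your system):
deb http://download.mono-project.com/repo/ubuntu <your-release> main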

CentOS builds are going to be late – I’ve been doing all my prototyping against the Debian builds, as I have better command of the tooling. Hopefully no worse than a week or two.

on March 24, 2017 10:06 AM
Lubuntu Zesty Zapus Final Beta (soon to be 17.04) has been released! We have a couple papercuts listed in the release notes, so please take a look. A big thanks to the whole Lubuntu team and contributors for helping pull this release together. You can grab the images from here: http://cdimage.ubuntu.com/lubuntu/releases/zesty/beta-2/
on March 24, 2017 03:16 AM

The Ubuntu team is pleased to announce the final beta release of the Ubuntu 17.04 Desktop, Server, and Cloud products.

Codenamed "Zesty Zapus", 17.04 continues Ubuntu’s proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution. The team has been hard at work through this cycle, introducing new features and fixing bugs.

This beta release includes images from not only the Ubuntu Desktop, Server, and Cloud products, but also the Kubuntu, Lubuntu, Ubuntu GNOME, UbuntuKylin, Ubuntu MATE, Ubuntu Studio, and Xubuntu flavours.

We’re also pleased with this release to welcome Ubuntu Budgie to the family of Ubuntu community flavours.

The beta images are known to be reasonably free of showstopper CD build or installer bugs, while representing a very recent snapshot of 17.04 that should be representative of the features intended to ship with the final release expected on April 13th, 2017.

Ubuntu, Ubuntu Server, Cloud Images

Zesty Final Beta includes updated versions of most of our core set of packages, including a current 4.10 kernel, and much more.

To upgrade to Ubuntu 17.04 Final Beta from Ubuntu 16.10, follow these instructions:

The Ubuntu 17.04 Final Beta images can be downloaded at:

Additional images can be found at the following links:

As fixes will be included in new images between now and release, any daily cloud image from today or later (i.e. a serial of 20170323 or higher) should be considered a beta image. Bugs found should be filed against the appropriate packages or, failing that, the cloud-images project in Launchpad.

The full release notes for Ubuntu 17.04 Final Beta can be found at:

https://wiki.ubuntu.com/ZestyZapus/ReleaseNotes

Kubuntu

Kubuntu is the KDE based flavour of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Final Beta images can be downloaded at:

More information on Kubuntu Final Beta can be found here:

Lubuntu

Lubuntu is a flavor of Ubuntu that aims to be lighter, less resource hungry and more energy-efficient by using lightweight applications and LXDE, The Lightweight X11 Desktop Environment, as its default GUI.

The Final Beta images can be downloaded at:

More information on Lubuntu Final Beta can be found here:

Ubuntu Budgie

Ubuntu Budgie is a community-developed desktop, integrating the Budgie Desktop Environment with Ubuntu at its core.

The Final Beta images can be downloaded at:

More information on Ubuntu Budgie Final Beta can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu GNOME Final Beta can be found here:

UbuntuKylin

UbuntuKylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Final Beta images can be downloaded at:

More information on UbuntuKylin Final Beta can be found here:

Ubuntu MATE

Ubuntu MATE is a flavor of Ubuntu featuring the MATE desktop environment.

The Final Beta images can be downloaded at:

More information on Ubuntu MATE Final Beta can be found here:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu that provides a full range of multimedia content creation applications for each key workflow: audio, graphics, video, photography and publishing.

The Final Beta images can be downloaded at:

More information about Ubuntu Studio Final Beta can be found here:

Xubuntu

Xubuntu is a flavor of Ubuntu that comes with Xfce, which is a stable, light and configurable desktop environment.

The Final Beta images can be downloaded at:

More information about Xubuntu Final Beta can be found here:

Regular daily images for Ubuntu, and all flavours, can be found at:

Ubuntu is a full-featured Linux distribution for clients, servers and clouds, with a fast and easy installation and regular releases. A tightly-integrated selection of excellent applications is included, and an incredible variety of add-on software is just a few clicks away.

Professional technical support is available from Canonical Limited and hundreds of other companies around the world. For more information about support, visit http://www.ubuntu.com/support

If you would like to help shape Ubuntu, take a look at the list of ways you can participate at: http://www.ubuntu.com/community/participate

Your comments, bug reports, patches and suggestions really help us to improve this and future releases of Ubuntu. Instructions can be found at: https://help.ubuntu.com/community/ReportingBugs

You can find out more about Ubuntu and about this beta release on our website, IRC channel and wiki.

To sign up for future Ubuntu announcements, please subscribe to Ubuntu’s very low volume announcement list at:

Originally posted to the ubuntu-announce mailing list on Thu Mar 23 22:00:58 UTC 2017 by Adam Conrad on behalf of the Ubuntu Release Team

on March 24, 2017 02:32 AM

Today the Kubuntu team is happy to announce that Kubuntu Zesty Zapus (17.04) Beta 2 is released. With this Beta 2 pre-release, you can see and test what we are preparing for 17.04, which we will be releasing April 13, 2017.

Kubuntu 17.04 Beta 2

 

NOTE: This is Beta 2 Release. Kubuntu Beta Releases are NOT recommended for:

* Regular users who are not aware of pre-release issues
* Anyone who needs a stable system
* Anyone uncomfortable running a possibly frequently broken system
* Anyone in a production environment with data or work-flows that need to be reliable

Getting Kubuntu 17.04 Beta 2:
* Upgrade from 16.10: run `do-release-upgrade -d` from a command line.
* Download a bootable image (ISO) and put it onto a DVD or USB Drive : http://cdimage.ubuntu.com/kubuntu/releases/zesty/beta-2/

Release notes: https://wiki.ubuntu.com/ZestyZapus/Beta2/Kubuntu

on March 24, 2017 12:08 AM

March 23, 2017

The Community Leadership Summit is taking place on the 6th – 7th May 2017 in Austin, USA.

The event brings together community managers and leaders, projects, and initiatives to share and learn how we build strong, engaging, and productive communities. The event takes place the weekend before OSCON in the same venue, the Austin Convention Center. It is entirely FREE to attend and welcomes everyone, whether you are a community veteran or just starting out your journey!

The event is broken into three key components.

Firstly, we have an awesome set of keynotes this year:

Secondly, the bulk of the event is an unconference where the attendees volunteer session ideas and run them. Each session is a discussion where the topic is explored, debated, and final conclusions are reached. This results in a hugely diverse range of sessions covering topics such as event management, outreach, social media, governance, collaboration, diversity, building contributor programs, and more. These discussions are incredible for exploring and learning new ideas, meeting interesting people, building a network, and developing friendships.

Finally, we have social events on both evenings where you can meet and network with other attendees. Food and drinks are provided by data.world and Mattermost. Thanks to both for their awesome support!

Join Us

The Community Leadership Summit is entirely FREE to attend. If you would like to join, we would appreciate if you could register (this helps us with expected numbers). I look forward to seeing you there in Austin on the 6th – 7th May 2017!

The post Community Leadership Summit 2017: 6th – 7th May in Austin appeared first on Jono Bacon.

on March 23, 2017 04:40 PM

S10E03 – Aloof Puny Wren - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

We discuss website owners filing bugs with Mozilla, GitLab acquiring Gitter, Moodle remote code execution, Windows 10 adverts, KDE Slimbook, 32-bit PowerPC EOL in Ubuntu, a new Vala release and the merger of Sonar GNU/Linux and Vinux.

It’s Season Ten Episode Three of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on March 23, 2017 03:00 PM

When we started to look at how to confine applications enough to allow an appstore that allows for anyone to upload applications we knew that Apparmor could do the filesystem and IPC restrictions, but we needed something to manage the processes. There are kernel features that work well for this, but we didn’t want to reinvent the management of them, and we realized that Upstart already did this for other services in the system. That drove us to decide to use Upstart for managing application processes as well. In order to have a higher level management and abstraction interface we started a small library called upstart-app-launch and we were off. Times change and so do init daemons, so we renamed the project ubuntu-app-launch expecting to move it to systemd eventually.

Now we’ve finally fully realized that transition and ubuntu-app-launch runs all applications and untrusted helpers as systemd services.

bye, bye, Upstart. Photo from: https://pixabay.com/en/goodbye-waving-boy-river-boat-705165/

For the most part, no one should notice anything different. Applications will start and stop in the same way. Even users of ubuntu-app-launch shouldn’t notice a large difference in how the library works. But for people tinkering with the system they will notice a few things. Probably the most obvious is that application log files are no longer in ~/.cache/upstart. Now the log files for applications are managed by journald, which, as we get all the desktop services ported to use systemd, will mean that you can see integrated events from multiple processes. So if Unity8 is rejecting your connection you’ll be able to see that next to the error from your application. This should make debugging your applications easier. You’ll also be able to redirect messages off a device in real time, which will help with debugging your application on a phone or tablet.

For those who are more interested in details we’re using systemd’s transient unit feature. This allows us to create the unit on the fly with multiple instances of each application. Under Upstart we used a job with instances for each application, but now that we’re taking on more typical desktop style applications we needed to be able to support multi-instance applications, which would have been hard to manage with that approach. We’re generating the service name using this pattern:

ubuntu-app-launch--$(application type)--$(application id)--$(time stamp).service

The time stamp is used to make a unique name for applications that are multi-instance. For applications that ask us to maintain a single instance for them the time stamp is not included.
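
If you want to poke at those journald logs yourself, something along these lines should work. This is a rough sketch: it assumes the unit naming pattern above and journalctl’s glob matching for unit names, and the application id in the second command is made up for illustration.

# Follow the log output of every application started by ubuntu-app-launch
journalctl --user -f -u "ubuntu-app-launch--*"

# Or search the user journal for one (hypothetical) application id
journalctl --user | grep com.example.myapp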

Hopefully that’s enough information to get you started playing around with applications running under systemd. And if you don’t care to, you shouldn’t even notice this transition.

on March 23, 2017 05:00 AM

March 22, 2017

Chef Intermediate Training

Jonathan Riddell

I did a day’s training at the FLOSS UK conference in Manchester on Chef. Anthony Hodson came from Chef (a company with over 200 employees) to provide this intermediate training which covered writing recipes using test driven development.  Thanks to Chef and Anthony and FLOSS UK for providing it cheap.  Here’s some notes for my own interest and anyone else who cares.

Using chef generate we started a new cookbook called http.

This cookbook contains a .kitchen.yml file.  Test Kitchen is a chef tool to run tests on chef recipes.  ‘kitchen list’ will show the machines it’s configured to run.  Default uses Virtualbox and centos/ubuntu.  Can be changed to Docker or whatever.  ‘kitchen create’ will make them. ‘kitchen converge’ to deploy. ‘kitchen login’ to log into a v-machine. ‘kitchen verify’ runs the tests.  ‘kitchen test’ will destroy then set up and verify, which takes a bit longer.
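
Put together, the basic loop from those commands looks like this:

chef generate cookbook http    # scaffold the new cookbook
kitchen list                   # show the machines it’s configured to run
kitchen create                 # make the virtual machines
kitchen converge               # deploy the recipes to them
kitchen login                  # log into a v-machine
kitchen verify                 # run the tests
kitchen test                   # destroy, then set up and verify from scratch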

Write the test first.  If you’re not sure what the test should be write stub/placeholder statements for what you do know then work out the code.

ChefSpec (an RSpec-based language) provides the in-memory unit tests for recipes; it’s quicker and does finer-grained tests than the Kitchen tests (which use InSpec and do black box tests on the final result).  Run it with ‘chef exec rspec ../default-spec.rb’.  rspec shows a * for a stub.

Beware if a test passes first time, it might be a false positive.

ohai is a tool, standalone or run by the chef client, which detects the node attributes and passes them to the chef client.  We didn’t get onto this as it was for a follow-on day.

Pry is a Ruby debugger.  It’s a Gem and part of chefdk.

To debug recipes use pry in the recipe, which drops you into a debug prompt for checking that the values are what you think they are.

I still find deploying chef a nightmare: it won’t install in the normal way on my preferred Scaleway server because they’re ARM, and by default it needs a Chef server, but you can just use chef-client with --local-mode, and then there’s chef solo, chef zero and knife solo which all do things that I haven’t quite got my head round.  All interesting to learn anyway.
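
For reference, the local-mode invocation I mean looks roughly like this; the run-list item is just a placeholder for whatever cookbook you want to converge:

# Converge this machine from local cookbooks, no Chef server needed
chef-client --local-mode --override-runlist 'recipe[http]'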

 

on March 22, 2017 03:57 PM

Our stand occupied the same space as last year with a couple of major
changes this time around – the closure of a previously adjacent aisle
resulting in an increase in overall stand space (from 380 to 456 square
metres). With the stand now open on just two sides, this presented the
design team with some difficult challenges:

  • Maximising sight lines and impact upon approach
  • Utilising our existing components – hanging banners, display units,
    alcoves, meeting rooms – to work effectively within a larger space
  • Directing the flow of visitors around the stand

Design solution

Some key design decisions and smaller details:

  • Rotating the hanging fabric banners 90 degrees and moving them
    to the very front of the stand
  • Repositioning the welcome desk to maximise visibility from
    all approaches
  • Improved lighting throughout – from overhead banner illumination
    to alcoves and within all meeting rooms
  • Store room end wall angled 45 degrees to increase initial sight line
  • Raised LED screens for increased visibility
  • Four new alcoves with discrete fixings for all 10x alcove screens
  • Bespoke acrylic display units for AR helmets and developer boards
  • Streamlined meeting room tables with new cable management
  • Separate store and staff rooms

Result

With thoughtful planning and attention to detail, our brand presence
at this year’s MWC was the strongest yet.

Initial design sketches

Plan and sight line 3D render

Design intent drawings

3D lettering and stand graphics

on March 22, 2017 01:19 PM

Your own Zesty Zapus

Elizabeth K. Joseph

As we quickly approach the release of Ubuntu 17.04, Zesty Zapus, coming up on April 13th, you may be thinking of how you can mark this release.

Well, thanks to Tom Macfarlane of the Canonical Design Team you have one more goodie in your toolkit, the SVG of the official Zapus! It’s now been added to the Animal SVGs section of the Official Artwork page on the Ubuntu wiki.

Zesty Zapus

Download the SVG version for printing or using in any other release-related activities from the wiki page or directly here.

Over here, I’m also all ready with the little “zapus” I picked up on Amazon.

Zesty Zapus toy
on March 22, 2017 04:01 AM

March 21, 2017

LXD logo

GPU inside a container

LXD supports GPU passthrough but this is implemented in a very different way than what you would expect from a virtual machine. With containers, rather than passing a raw PCI device and have the container deal with it (which it can’t), we instead have the host setup with all needed drivers and only pass the resulting device nodes to the container.

This post focuses on NVidia and the CUDA toolkit specifically, but LXD’s passthrough feature should work with all other GPUs too. NVidia is just what I happen to have around.

The test system used below is a virtual machine with two NVidia GT 730 cards attached to it. Those are very cheap, low performance GPUs, that have the advantage of existing in low-profile PCI cards that fit fine in one of my servers and don’t require extra power.
For production CUDA workloads, you’ll want something much better than this.

Note that for this to work, you’ll need LXD 2.5 or higher.

Host setup

Install the CUDA tools and drivers on the host:

wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
sudo apt update
sudo apt install cuda

Then reboot the system to make sure everything is properly setup. After that, you should be able to confirm that your NVidia GPU is properly working with:

ubuntu@canonical-lxd:~$ nvidia-smi 
Tue Mar 21 21:28:34 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   26C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+

And can check that the CUDA tools work properly with:

ubuntu@canonical-lxd:~$ /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3059.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3267.4

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30805.1

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Container setup

First lets just create a regular Ubuntu 16.04 container:

ubuntu@canonical-lxd:~$ lxc launch ubuntu:16.04 c1
Creating c1
Starting c1

Then install the CUDA demo tools in there:

lxc exec c1 -- wget http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1604/x86_64/cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- dpkg -i cuda-repo-ubuntu1604_8.0.61-1_amd64.deb
lxc exec c1 -- apt update
lxc exec c1 -- apt install cuda-demo-suite-8-0 --no-install-recommends

At which point, you can run:

ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
NVIDIA-SMI has failed because it couldn't communicate with the NVIDIA driver. Make sure that the latest NVIDIA driver is installed and running.

Which is expected as LXD hasn’t been told to pass any GPU yet.

LXD GPU passthrough

LXD allows for pretty specific GPU passthrough, the details can be found here.
First let’s start with the most generic one, just allow access to all GPUs:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:47:54 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Now just pass whichever is the first GPU:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu id=0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:50:37 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

You can also specify the GPU by vendorid and productid:

ubuntu@canonical-lxd:~$ lspci -nnn | grep NVIDIA
02:06.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:07.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
02:08.0 VGA compatible controller [0300]: NVIDIA Corporation GK208 [GeForce GT 730] [10de:1287] (rev a1)
02:09.0 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu vendorid=10de productid=1287
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:52:40 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:06.0     N/A |                  N/A |
| 30%   30C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
|   1  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
|    1                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu
Device gpu removed from c1

Which adds them both as they are exactly the same model in my setup.

But for such cases, you can also select using the card’s PCI ID with:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu pci=0000:02:08.0
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- nvidia-smi
Tue Mar 21 21:56:52 2017       
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 375.39                 Driver Version: 375.39                    |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  GeForce GT 730      Off  | 0000:02:08.0     N/A |                  N/A |
| 30%   27C    P0    N/A /  N/A |      0MiB /  2001MiB |     N/A      Default |
+-------------------------------+----------------------+----------------------+
                                                                               
+-----------------------------------------------------------------------------+
| Processes:                                                       GPU Memory |
|  GPU       PID  Type  Process name                               Usage      |
|=============================================================================|
|    0                  Not Supported                                         |
+-----------------------------------------------------------------------------+
ubuntu@canonical-lxd:~$ lxc config device remove c1 gpu 
Device gpu removed from c1

And lastly, lets confirm that we get the same result as on the host when running a CUDA workload:

ubuntu@canonical-lxd:~$ lxc config device add c1 gpu gpu
Device gpu added to c1
ubuntu@canonical-lxd:~$ lxc exec c1 -- /usr/local/cuda-8.0/extras/demo_suite/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...

 Device 0: GeForce GT 730
 Quick Mode

 Host to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3065.4

 Device to Host Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			3305.8

 Device to Device Bandwidth, 1 Device(s)
 PINNED Memory Transfers
   Transfer Size (Bytes)	Bandwidth(MB/s)
   33554432			30825.7

Result = PASS

NOTE: The CUDA Samples are not meant for performance measurements. Results may vary when GPU Boost is enabled.

Conclusion

LXD makes it very easy to share one or multiple GPUs with your containers.
You can either dedicate specific GPUs to specific containers or just share them.

There is none of the overhead involved with the usual PCI-based passthrough, and only a single instance of the driver is running, with the containers acting just like normal host user processes would.

This does however require that your containers run a version of the CUDA tools which supports whatever version of the NVidia drivers is installed on the host.

Extra information

The main LXD website is at: https://linuxcontainers.org/lxd
Development happens on Github at: https://github.com/lxc/lxd
Mailing-list support happens on: https://lists.linuxcontainers.org
IRC support happens in: #lxcontainers on irc.freenode.net
Try LXD online: https://linuxcontainers.org/lxd/try-it

on March 21, 2017 10:08 PM

Useful ARM References

David Tomaschik

I started playing the excellent IOARM wargame on netgarage. No, don’t be expecting spoilers, hints, or walk-throughs, I’m not that kind of guy. This is merely a list of interesting reading I’ve discovered to help me understand the ARM architecture and ARM assembly.

on March 21, 2017 07:00 AM

March 20, 2017



Over ten years ago, my Ubuntu journey began.

On October 7th, 2006, I drove with my wife, Kimberly Kirkland, to help set up her new classroom in Elgin, Texas.  This was her very first job as a teacher -- 4th grade, starting about a month into the school year as the school added a classroom to their crowded schedule at the very last minute.

After hanging a few posters on the wall, I found 4 old, broken iMac G3's, discarded in the closet.  With a couple of hours on my hands, I pulled each one apart and put together two functional computers.  But with merely 128MB of RAM, rotary hard disks, and a 32-bit PowerPC processor, MacOS 9 wasn't even remotely functional.

Now, I've been around Linux since 1997, but always Red Hat Linux.  In fact, I had spent most of the previous year (2005) staffed by IBM on site at Red Hat in Westford, MA, working on IBM POWER machines.

I had recently heard of this thing called Edubuntu -- a Linux distribution with games and tools and utilities specifically tailored for the classroom -- which sounded perfect for Kim's needs!

After a bit of fiddling with xorg.conf, I eventually got Ubuntu running on the machine.

In fact, it was shortly after that, when I first set up my Launchpad account (2006-10-11) and posted my first comment, with patch, and workaround instructions to Bug #22976 on 2006-12-14:


About a year later, I applied for a job with Canonical and started working on the Ubuntu Server team in February of 2008.  It's been a wonderful decade, helping bring Ubuntu, Linux, free, and open source software to the world.  And in a way, it sort of all started for me in my wife's first classroom.

But there's another story in here, a much more important story, actually.  And it's not my story, it's my wife's. It's Kimberly's story, as a brand new public school teacher...

You see, she was a 20-something year old, recently out of college and with her very first job in the public school system.  She wouldn't even see her first paycheck for another 6 weeks.  And she was setting up her classroom with whatever hand-me-downs, donations, or yard sale items she could find.  And I'm not talking about the broken computers.  I'm talking about the very basics.  Pens, pencils, paper, books, wall hangings -- the school supplies that most of us take for granted entirely.

Some schools and school districts are adequately funded, and can provide for their students, teachers, and classrooms.  Many parents are able to send their kids to school with the supplies requested on their lists -- glue, scissors, folders, bags, whatever.

But so, so, so many are not.  Schools that don't provide supplies.  And parents that can't afford it.  Thousands of kids in every school district in the world empty handed.

Do you know who makes up the slack?

Teachers.

Yes, our dearly beloved, underpaid, overworked, underappreciated, teachers.  They bring the extra pencils, tissues, staplers, and everything else in between, that their kids need.  And it's everywhere, all across the country, our teachers pick up that slack.  And I know this because it's not just my wife, not just Texas.  My mom and dad are both school teachers in Louisiana.  We know teachers all over the world where this is the case.  Teachers spend hundreds -- sometimes thousands -- of their own hard earned dollars to help their students in need and make their classrooms more suitable for learning.

I'm super proud to say that my wife Kim has spent the last year studying and researching education-focused, local and national charities, learning how they work and who they help.

Understanding the landscape, Kim has co-founded a non-profit organization based here in Austin, Texas -- Classroom Connection -- to collect funds to help distribute school supplies to teachers for their students in need.


After a successful GoFundMe campaign, Classroom Connection is now fully operational.  You can contribute in any of three ways:

Our kids are our future.  And their success starts with a rich education experience.  We can always do a little more to secure that future.  Which brings us back to Ubuntu, the philosophy:
"I am who I am, because of who we all are."
Thanks,
:-Dustin
on March 20, 2017 09:03 PM

Twitter seems ever dominant and important for communication. Years ago I added a microblogging feed to Planet KDE but that still needed people to add themselves and being all idealistic I added support for anything with an RSS feed assuming people would use more-free identi.ca. But identi.ca went away and Twitter I think removed their RSS ability but got ever more important and powerful. For the relaunched theme a couple of years ago we added some Twitter feeds but they were hidden away and little used.

So today I’ve made them show by default and available down the side.  There’s one which is for all feeds with a #kde tag and one with the @kdecommunity feed. You can hide them by clicking the Microblogging link at the top. Let me know what you think.

Update: my Bootstrap CSS failed and on medium sized monitors it moved all the real content down to below the Twitter feeds rather than floating it to the side, so I’ve moved them to the bottom instead of the side.  Is anyone who knows Bootstrap better than me able to help fix it?

I’ve also done away with the planetoids. zh.planetkde.org, fr.planetkde.org, pim.planetkde.org and several others. These were little used and when I asked representatives from the communities about them they didn’t even know they existed. Instead we have categories which you can see with the Configure Feed menu at the top to select languages.

I allowed the <embed> tag which allows for embedding YouTube videos and other bits.  Don’t abuse it folks 🙂

Finally Planet KDE moved back to where it belongs: kde.org. Because KDE is a community, it should not be afraid of its community.

Let me know of any issues or improvements that could be made.

on March 20, 2017 06:00 PM

Backstory

In 2016, RC-car company Arrma released the Outcast, calling it a stunt truck. That label led to some joking around in the UltimateRC forum. One member had trouble getting his Outcast to stunt. Utrak said “The stunt car didn’t stunt do hobby to it, it’ll stunt “. frystomer went: “If it still doesn’t stunt, hobby harder.” and finally stewwdog was like: “I now want a shirt that reads ‘Hobby harder, it’ll stunt’.” He wasn’t alone, so I created a first, very rough sketch.

Process

After a positive response, I decided to make it look like more of a stunt in another sketch:

Meanwhile, talk went to onesies and related practical considerations. Pink was also mentioned, thus I suddenly found myself confronted with a mental image that I just had to get out:

To find the right alignment and perspective, I created a Blender scene with just the text, and boxes and cylinders to represent the car. The result served as a template for drawing the actual image in Krita, using my trusty Wacom Intuos tablet.

Result


This design is now available for print on T-shirts, other apparel, stickers and a few other things, via Redbubble.


Filed under: Illustration, Planet Ubuntu Tagged: Apparel, Blender, Krita, RC, T-shirt
on March 20, 2017 11:07 AM

Software can be complex and daunting, even more so in distributed systems. So when you or your company decide you’re going to give it a shot, it’s easy to get enamored with the technology and not think about the other things you and your team are going to need to learn to make participating rewarding for everyone.

When we first launched the Canonical Distribution of Kubernetes, our team was new, and while we knew how Kubernetes worked, and what ops expertise we were going to start to bring to market right away, we found the initial huge size of the Kubernetes community to be outright intimidating. Not the people of course, they’re great, it’s the learning curve that can really seem large. So we decided to just dive in head first, and then write about our experiences. While some of the things I mention here work great for individuals, if you have a team of individuals working on Kubernetes at your company then I hope some of these tips will be useful to you. This is by no means an exhaustive list, I’m still finding new things every day.

Find your SIGs

Kubernetes is divided into a bunch of Special Interest Groups (SIGs). You can find a list here. Don’t be alarmed! Bookmark this page; I use it as my starting-off point anytime we need to find something out in more detail than we could find in the docs or public list. On this page, you’ll find contact information for the leads, and more importantly, when those SIGs meet. Meetings are open to the public and (usually) recorded. Find someone on your team to attend these meetings regularly. This is important for a few reasons:

  • k8s moves fast, and if there’s an area you care about, you can miss important information about a feature you care about.
  • It’s high bandwidth: since SIGs meet regularly, you won’t find the long drawn-out technical discussions on the mailing lists that you would on a project that only uses lists; these discussions move much faster when people talk face to face.
  • You get to meet people and put faces to names.
  • People get to see you and recognize your name (and optionally, your face). This will help you later on if you’re stuck and need help or if you want to start participating more.
  • Each team has a slack channel and google group (mailing list), so I prefer to sit in those channels as well as they usually have important information announced there, meeting reminders, and links to the important documents for that SIG.

There’s a SIG just for contributor experience

SIG-contribex - As it turns out, there’s an entire SIG that works on improving contributor experience. I found this SIG relatively late when we started; you’ll find that asking questions here will save you time in the long run. Even if you’re not asking questions yourself you can learn about how the mechanics of the project work just by listening in on the conversation.

So many things in /community

https://github.com/kubernetes/community - This should be one of your first starting points, if not the starting point. I put the SIGs above this because I’ve found for most people they’re initially interested in one key area, and you can just go to that SIG directly first to get started then come back to this. That doesn’t mean this isn’t important, if I get lost in something this is usually the place I start to look for something. Try to get everyone on your team to have at least a working understanding of the concepts here, and of course don’t forget the CLA and Code of Conduct.

There’s a YouTube Channel

https://www.youtube.com/c/KubernetesCommunity - I found this channel to be very useful for “catching up”. Many SIGs publish their meetings relatively quickly, and tossing in the channel in the background can help you keep track of what’s going on.

If you don’t have the time to dig into all the SIG meetings, you can concentrate on the community meeting, which is held weekly and is a summary of many of the things happening around the different SIGs. The community meetings also have demos, so it’s interesting to see how the ecosystem is building tools around k8s; if you can only make one meeting a week, this is probably the one to go to.

The Community Calendar and meetings

This sounds like advanced common sense but there’s a community calendar of events.

Additionally, I found that adding the SIG meetings to our team calendar helps. We like to rotate people around meetings so that they can get experience in what is happening around the project and to ensure that, worst case, if someone can’t make a meeting someone is there to take notes. If you’re getting started, do yourself a favor and volunteer to take notes at a SIG meeting; you will find that you’ll need to pay closer attention to the material, and for me it helps me understand concepts better when I have to write them down in a way that makes sense for others.

We also found it useful to not flood one meeting with multiple people. If it’s something important, sure, but if you just want to keep an eye on what’s going on there you can send just one person and then have that person give people a summary at your team standup or whatever. There are so many meetings that you don’t want to fall into the trap of having people sitting in meetings all day instead of getting things done.

OWNERS

Whatever area you’re working on, go up the tree and eventually you’ll find an OWNERS file that lists who owns/reviews that section of the code or docs or whatever. I use this as a little checklist when I join the SIG meetings to keep track of who is who. When I eventually went to a SIG meeting at Kubecon, it was nice to meet people who will be reviewing your work or you’ll be having a working relationship with.

Find a buddy

At some point you’ll be sitting in slack and you’ll see some poor person who is asking the same sorts of questions you were. That’s one of the first places you can start to help, just find someone and start talking to them. For me it was “Hey I noticed you sit in SIG-onprem too, you doing bare metal? How’s it going for you?”

It’s too big!

This used to worry me because the project is so large I figured I would never understand the entire thing. That’s ok. It’s totally fine to not know every single thing that’s going on, that’s why people have these meetings and summaries in the first place, just concentrate on what’s important to you and the rest will start to fall into place.

But we only consume Kubernetes, why participate?

One of the biggest benefits of consuming an open source project is taking advantage of the open development process. At some point something you care about will be discussed in the community, and then you should take advantage of the economies of scale that having so many people working on something gives you. Even if you’re only using k8s on a beautifully set up public cloud from a vendor where you don’t have to worry about the gruesome details, your organization can still learn from all the work that is happening around the ecosystem. I learn about new tools and tips every single day, and even if your participation is “read-only”, you’ll find that there’s value in sharing expertise with peers.

Ongoing process

This post is already too long, so I’ll just have to keep posting more as I keep learning more. If you’ve got any tips to share please leave a comment, or write a post and send me a link to link to.

on March 20, 2017 10:16 AM

March 19, 2017

Please welcome our newest Member, Paddy Landau.

Paddy has been a long time contributor to the forums, having helped others with their Ubuntu issues for almost 9 years.

Paddy’s application thread can be viewed here, wiki page here and launchpad account here.

Congratulations from the Forums Council!

If you have been a contributor to the forums and wish to apply for Ubuntu Membership, please follow the process outlined here.


on March 19, 2017 11:20 AM

GOT and PLT for pwning.

David Tomaschik

So, during the recent 0CTF, one of my teammates was asking me about RELRO and the GOT and the PLT and all of the ELF sections involved. I realized that though I knew the general concepts, I didn’t know as much as I should, so I did some research to find out some more. This is documenting the research (and hoping it’s useful for others).

All of the examples below will be on an x86 Linux platform, but the concepts all apply equally to x86-64. (And, I assume, other architectures on Linux, as the concepts are related to ELF linking and glibc, but I haven’t checked.)

High-Level Introduction

So what is all of this nonsense about? Well, there’s two types of binaries on any system: statically linked and dynamically linked. Statically linked binaries are self-contained, containing all of the code necessary for them to run within the single file, and do not depend on any external libraries. Dynamically linked binaries (which are the default when you run gcc and most other compilers) do not include a lot of functions, but rely on system libraries to provide a portion of the functionality. For example, when your binary uses printf to print some data, the actual implementation of printf is part of the system C library. Typically, on current GNU/Linux systems, this is provided by libc.so.6, which is the name of the current GNU Libc library.

In order to locate these functions, your program needs to know the address of printf to call it. While this could be written into the raw binary at compile time, there’s some problems with that strategy:

  1. Each time the library changes, the addresses of the functions within the library change, so when libc is upgraded, you’d need to rebuild every binary on your system. While this might appeal to Gentoo users, the rest of us would find it an upgrade challenge to replace every binary every time libc received an update.
  2. Modern systems using ASLR load libraries at different locations on each program invocation. Hardcoding addresses would render this impossible.

Consequently, a strategy was developed to allow looking up all of these addresses when the program was run and providing a mechanism to call these functions from libraries. This is known as relocation, and the hard work of doing this at runtime is performed by the linker, aka ld-linux.so. (Note that every dynamically linked program will be linked against the linker, this is actually set in a special ELF section called .interp.) The linker is actually run before any code from your program or libc, but this is completely abstracted from the user by the Linux kernel.

Relocations

Looking at an ELF file, you will discover that it has a number of sections, and it turns out that relocations require several of these sections. I’ll start by defining the sections, then discuss how they’re used in practice.

.got
This is the GOT, or Global Offset Table. This is the actual table of offsets as filled in by the linker for external symbols.
.plt
This is the PLT, or Procedure Linkage Table. These are stubs that look up the addresses in the .got.plt section, and either jump to the right address, or trigger the code in the linker to look up the address. (If the address has not been filled in to .got.plt yet.)
.got.plt
This is the GOT for the PLT. It contains the target addresses (after they have been looked up) or an address back in the .plt to trigger the lookup. Classically, this data was part of the .got section.
.plt.got
It seems like they wanted every combination of PLT and GOT! This just seems to contain code to jump to the first entry of the .got. I’m not actually sure what uses this. (If you know, please reach out and let me know! In testing a couple of programs, this code is not hit, but maybe there’s some obscure case for this.)

TL;DR: Those starting with .plt contain stubs to jump to the target, those starting with .got are tables of the target addresses.

Let’s walk through the way a relocation is used in a typical binary. We’ll include two libc functions: puts and exit and show the state of the various sections as we go along.

Here’s our source:

// Build with: gcc -m32 -no-pie -g -o plt plt.c

#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
  puts("Hello world!");
  exit(0);
}

Let’s examine the section headers:

There are 36 section headers, starting at offset 0x1fb4:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [12] .plt              PROGBITS        080482f0 0002f0 000040 04  AX  0   0 16
  [13] .plt.got          PROGBITS        08048330 000330 000008 00  AX  0   0  8
  [14] .text             PROGBITS        08048340 000340 0001a2 00  AX  0   0 16
  [23] .got              PROGBITS        08049ffc 000ffc 000004 04  WA  0   0  4
  [24] .got.plt          PROGBITS        0804a000 001000 000018 04  WA  0   0  4

I’ve left only the sections I’ll be talking about; the full program has 36 sections!
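(Aside of mine: the listing above is readelf output; to reproduce it on your own build, something along these lines works.)

readelf -S -W ./plt    # list all section headers; -W (wide) keeps each header on one line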

So let’s walk through this process with the use of GDB. (I’m using the fantastic GDB environment provided by pwndbg, so some UI elements might look a bit different from vanilla GDB.) We’ll load up our binary and set a breakpoint just before puts gets called and then examine the flow step-by-step:

pwndbg> disass main
Dump of assembler code for function main:
   0x0804843b <+0>:	lea    ecx,[esp+0x4]
   0x0804843f <+4>:	and    esp,0xfffffff0
   0x08048442 <+7>:	push   DWORD PTR [ecx-0x4]
   0x08048445 <+10>:	push   ebp
   0x08048446 <+11>:	mov    ebp,esp
   0x08048448 <+13>:	push   ebx
   0x08048449 <+14>:	push   ecx
   0x0804844a <+15>:	call   0x8048370 <__x86.get_pc_thunk.bx>
   0x0804844f <+20>:	add    ebx,0x1bb1
   0x08048455 <+26>:	sub    esp,0xc
   0x08048458 <+29>:	lea    eax,[ebx-0x1b00]
   0x0804845e <+35>:	push   eax
   0x0804845f <+36>:	call   0x8048300 <puts@plt>
   0x08048464 <+41>:	add    esp,0x10
   0x08048467 <+44>:	sub    esp,0xc
   0x0804846a <+47>:	push   0x0
   0x0804846c <+49>:	call   0x8048310 <exit@plt>
End of assembler dump.
pwndbg> break *0x0804845f
Breakpoint 1 at 0x804845f: file plt.c, line 7.
pwndbg> r
Breakpoint *0x0804845f
pwndbg> x/i $pc
=> 0x804845f <main+36>:	call   0x8048300 <puts@plt>

Ok, we’re about to call puts. Note that the address being called is local to our binary, in the .plt section, hence the special symbol name of puts@plt. Let’s step through the process until we get to the actual puts function.

pwndbg> si
pwndbg> x/i $pc
=> 0x8048300 <puts@plt>:	jmp    DWORD PTR ds:0x804a00c

We’re in the PLT, and we see that we’re performing a jmp, but this is not a typical jmp. This is what a jmp to a function pointer would look like. The processor will dereference the pointer, then jump to the resulting address.
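(Another aside of mine: the mapping from .got.plt slots to symbol names is recorded in the binary’s relocation entries, so you can find the right slot without single-stepping.)

readelf -r ./plt    # the R_386_JUMP_SLOT entries pair each .got.plt offset with its symbol (puts, exit, ...)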

Let’s check the dereference and follow the jmp. Note that the pointer is in the .got.plt section as we described above.

pwndbg> x/wx 0x804a00c
0x804a00c:	0x08048306
pwndbg> si
0x08048306 in puts@plt ()
pwndbg> x/2i $pc
=> 0x8048306 <puts@plt+6>:	push   0x0
   0x804830b <puts@plt+11>:	jmp    0x80482f0

Well, that’s weird. We’ve just jumped to the next instruction! Why has this occurred? Well, it turns out that because we haven’t called puts before, we need to trigger the first lookup. It pushes the slot number (0x0) onto the stack, then jumps to the routine that looks up the symbol name. This happens to be the beginning of the .plt section. What does this stub do? Let’s find out.
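(Before we do, here is the whole lazy-binding dance for one PLT entry, consolidated as a sketch of my own from the dumps above and below; the comments are mine.)

puts@plt:
  jmp  DWORD PTR ds:0x804a00c  ; jump through the .got.plt slot (initially it points back to the next instruction)
  push 0x0                     ; relocation slot number for puts
  jmp  PLT0                    ; common stub shared by every PLT entry
PLT0:
  push DWORD PTR ds:0x804a004  ; second .got.plt entry (identifies this object to ld.so)
  jmp  DWORD PTR ds:0x804a008  ; third .got.plt entry (ld.so's lazy resolver)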

pwndbg> si
pwndbg> si
pwndbg> x/2i $pc
=> 0x80482f0: push   DWORD PTR ds:0x804a004
   0x80482f6: jmp    DWORD PTR ds:0x804a008

Now, we push the value of the second entry in .got.plt, then jump to the address stored in the third entry. Let’s examine those values and carry on.

pwndbg> x/2wx 0x804a004
0x804a004:  0xf7ffd918  0xf7fedf40

Wait, where is that pointing? It turns out the first one points into the data segment of ld.so, and the second into its executable segment:

0xf7fd9000 0xf7ffb000 r-xp    22000 0      /lib/i386-linux-gnu/ld-2.24.so
0xf7ffc000 0xf7ffd000 r--p     1000 22000  /lib/i386-linux-gnu/ld-2.24.so
0xf7ffd000 0xf7ffe000 rw-p     1000 23000  /lib/i386-linux-gnu/ld-2.24.so

Ah, finally, we’re asking for the information for the puts symbol! These two addresses in the .got.plt section are populated by the linker/loader (ld.so) at the time it is loading the binary.
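(My annotation, based on how glibc typically lays this out, so verify on your own system: the first value is usually a pointer to this binary’s link_map structure in ld.so’s data, and the second is the address of ld.so’s lazy resolver, _dl_runtime_resolve. GDB can confirm the latter:)

pwndbg> info symbol 0xf7fedf40    # expect something like: _dl_runtime_resolve in section .text of ld-2.24.so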

So, I’m going to treat what happens in ld.so as a black box. I encourage you to look into it, but exactly how it looks up the symbols is a little bit too low level for this post. Suffice it to say that eventually we will reach a ret from the ld.so code that resolves the symbol.

pwndbg> x/i $pc
=> 0xf7fedf5b:  ret    0xc
pwndbg> ni
pwndbg> info symbol $pc
puts in section .text of /lib/i386-linux-gnu/libc.so.6

Look at that, we find ourselves at puts, exactly where we’d like to be. Let’s see how our stack looks at this point:

pwndbg> x/4wx $esp
0xffffcc2c: 0x08048464  0x08048500  0xffffccf4  0xffffccfc
pwndbg> x/s *(int *)($esp+4)
0x8048500:  "Hello world!"

Absolutely no trace of the trip through .plt, ld.so, or anything but what you’d expect from a direct call to puts.

Unfortunately, this seemed like a long trip to get from main to puts. Do we have to go through that every time? Fortunately, no. Let’s look at our entry in .got.plt again, disassembling puts@plt to verify the address first:

pwndbg> disass 'puts@plt'
Dump of assembler code for function puts@plt:
   0x08048300 <+0>:	jmp    DWORD PTR ds:0x804a00c
   0x08048306 <+6>:	push   0x0
   0x0804830b <+11>:	jmp    0x80482f0
End of assembler dump.
pwndbg> x/wx 0x804a00c
0x804a00c:	0xf7e4b870
pwndbg> info symbol 0xf7e4b870
puts in section .text of /lib/i386-linux-gnu/libc.so.6

So now, a call to puts@plt results in an immediate jmp to the address of puts as loaded from libc. At this point, the overhead of the relocation is one extra jmp. (Ok, and dereferencing the pointer, which might cause a cache load, but I suspect the GOT is very often in L1 or at least L2, so very little overhead.)

How did the .got.plt get updated? That’s what the two values pushed on the way into ld.so were for: combined with the slot number pushed earlier, they tell ld.so which object and which relocation to resolve. ld.so did its magic and wrote the proper address into the GOT entry, replacing the previous address which pointed to the next instruction in the PLT.

Pwning Relocations

Alright, well now that we think we know how this all works, how can I, as a pwner, make use of this? Well, pwning usually involves taking control of the flow of execution of a program. Let’s look at the permissions of the sections we’ve been dealing with:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [12] .plt              PROGBITS        080482f0 0002f0 000040 04  AX  0   0 16
  [13] .plt.got          PROGBITS        08048330 000330 000008 00  AX  0   0  8
  [14] .text             PROGBITS        08048340 000340 0001a2 00  AX  0   0 16
  [23] .got              PROGBITS        08049ffc 000ffc 000004 04  WA  0   0  4
  [24] .got.plt          PROGBITS        0804a000 001000 000018 04  WA  0   0  4

Key to Flags:
  W (write), A (alloc), X (execute), M (merge), S (strings), I (info),

We’ll note that, as is typical for a system supporting NX, no section has both the Write and eXecute flags enabled. So we won’t be overwriting any executable sections, but we should be used to that.

On the other hand, the .got.plt section is basically a giant array of function pointers! Maybe we could overwrite one of these and control execution from there. It turns out this is quite a common technique, as described in a 2001 paper from team teso. (Hey, I never said the technique was new.) Essentially, any memory corruption primitive that will let you write to an arbitrary (attacker-controlled) address will allow you to overwrite a GOT entry.
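To make the idea concrete, here is a tiny self-contained sketch of my own (not taken from the teso paper or from any real exploit): the memory-corruption primitive is replaced by a direct pointer write, and the slot address is a placeholder you would look up with readelf -r against your own binary.

// Build (assumption, mirroring the earlier example): gcc -m32 -no-pie -o got_demo got_demo.c
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>

static void win(void) {
    puts("control flow hijacked via a .got.plt overwrite");
    _exit(0);   // leave via a different symbol so we don't bounce through the clobbered slot again
}

int main(void) {
    // Stand-in for an arbitrary-write primitive (format string, out-of-bounds write, ...).
    // 0x0804a010 is a HYPOTHETICAL exit@got.plt slot; find the real offset with: readelf -r ./got_demo
    uintptr_t *got_slot = (uintptr_t *)0x0804a010;
    *got_slot = (uintptr_t)win;   // overwrite the function pointer the PLT jumps through
    exit(0);                      // exit@plt now jumps through the clobbered slot into win()
}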

Mitigations

So, since this exploit technique has been known for so long, surely someone has done something about it, right? Well, it turns out yes, there’s been a mitigation since 2004. Enter relocations read-only, or RELRO. It in fact has two levels of protection: partial and full RELRO.

Partial RELRO (enabled with -Wl,-z,relro):

  • Maps the .got section as read-only (but not .got.plt)
  • Rearranges sections to reduce the likelihood of global variables overflowing into control structures.

Full RELRO (enabled with -Wl,-z,relro,-z,now):

  • Does the steps of Partial RELRO, plus:
  • Causes the dynamic linker to resolve all symbols at load time (before starting execution) and then remove write permissions from .got.
  • .got.plt is merged into .got with full RELRO, so you won’t see this section name.

Only full RELRO protects against overwriting function pointers in .got.plt. It works by having the dynamic loader look up every symbol referenced from the PLT at startup and fill in the addresses, then mprotect the page so it is no longer writable.
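To see the difference on your own build (these are my commands; output omitted since it varies by toolchain):

gcc -m32 -no-pie -g -o plt-partial plt.c -Wl,-z,relro
gcc -m32 -no-pie -g -o plt-full    plt.c -Wl,-z,relro,-z,now
readelf -l plt-full | grep GNU_RELRO    # both builds gain a GNU_RELRO segment
readelf -d plt-full | grep -i now       # only the full-RELRO build carries the BIND_NOW/NOW flag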

Summary

The .got.plt is an attractive target for printf format string exploitation and other arbitrary write exploits, especially when your target binary lacks PIE, causing the .got.plt to be loaded at a fixed address. Enabling Full RELRO protects against these attacks by preventing writing to the GOT.

on March 19, 2017 07:00 AM

RSS Reading – NewsBlur

Bryan Quigley

Bye Tiny

Some recent hacking attempts on my site convinced me to reduce the number of logins I have to protect there. That’s what motivated a move away from the still-awesome Tiny Tiny RSS that I’ve been using since Google Reader ended. I only follow 13 sites, and maintaining my own install simply doesn’t make sense.

* None of the hacking attempts appeared to be targeting Tiny Tiny RSS ~ but then again I’m not sure if I would have noticed if they were.

Enter NewsBlur

My favorite site for finding software alternatives quickly pointed me to a few obvious choices. Then I noticed that one of them was both open source and hosted on its own servers with a freemium model.

It was NewsBlur

I decided to try it out and haven’t looked back. The interface is certainly different from Tiny (and after three years I was very used to Tiny), but I haven’t really thought about it since the first week. The only thing I found a bit difficult was arranging folders ~ I’d really prefer drag and drop. I only needed to do it once, so it’s not a big deal.

The free account has some limitations, such as a cap on the number of feeds (64), a limit on how fast they update, and no ability to save stories. The premium account is only $24 a year, which seems very reasonable if you want to support the service or need those features. As of this writing there were about 5800 premium and about 5800 standard users, which seems like a healthy ratio.

Some security notes: the site gets an A on SSLLabs.com, but they do have HSTS explicitly turned off. I’m guessing they can’t enable HSTS because they need to serve pictures directly from other websites that are HTTP-only.

NewsBlur’s code is on GitHub, including instructions for setting up your own NewsBlur instance (it’s designed to run on 3 separate servers) or a testing/development copy. I found it particularly nice that the checklist the site operator runs through if NewsBlur goes down is public. Now, that’s transparency!

They have a bunch of other advanced features (still in free version) that I haven’t even tried yet, such as:

  • finding other stories you might be interested in (Launch Intel)
  • subscribing to email newsletters to view in the feed
  • Apps for Android, iPhone and suggested apps for many other OSes
  • Global sharing on NewsBlur
  • Your own personal (public in free version) blurblog to share stories and your comments on them

Give NewsBlur a try today.  Let me know if you like it!

I’d love to see more of this nice combination of hosted web service (with paid & freemium version) and open source project.  Do you have a favorite project that follows this model?   Two others that I know of are Odoo and draw.io.

on March 19, 2017 03:34 AM

March 18, 2017

Today at 15:58 UTC the Kubuntu Council approved Darin Miller’s application for becoming a Kubuntu Member.

Darin has been coming to the development channel and taking part in the informal developer meetings on Big Blue Button for a while now, helping out where he can with the packaging and continuous integration. His efforts have already made a huge difference.

Here’s a snippet of his interview:

<DarinMiller> I have contributed very little independently, but I have helped fix lintian issues, control files deps, and made a very minor mod to one of the KA scripts.
<clivejo> minor mod?
<acheronuk> very useful mod IIR ^^^
<clivejo> I think you are selling yourself short there!
-*- clivejo was very frustrated with the tooling prior to that fix
<DarinMiller> From coding perspective, it was well within my skillset, so the mod seemed minor to me.
<clivejo> well it was much appreciated
<yofel> when did you start hanging out here and how did you end up in this channel?
<DarinMiller> That’s another reason I like this team. I feel my efforts are appreciated.
<DarinMiller> And that encourages me to want to do more.

He is obviously a very modest chap and the Kubuntu team would like to offer him a very warm welcome, as well as greeting him with our hugs and the list of jobs / work to be done!

For those interested here’s Darin’s wiki page: https://wiki.kubuntu.org/~darinmiller and his Launchpad page: https://launchpad.net/~darinmiller

The meeting log is available here.

on March 18, 2017 04:50 PM

March 17, 2017

nginx is a web server used in several scenarios, one of which is as a reverse proxy.
The main job of a reverse proxy is to receive web requests (or requests in other protocols) from clients in an untrusted network zone (a DMZ or similar) and forward them to a server located in a secure, controlled location (the intranet).

With nginx as a reverse proxy you can mitigate security risks, implement security controls and limit the type/number of connections in case of DoS or DDoS attacks.

Prerequisites:

– A web server on an internal network (IIS, Apache, nginx, etc.)
– A server with access to both the DMZ and the internal network
– Ubuntu 16.04 installed on the virtual machine (an SSH server is recommended for remote management)

1. Installing the nginx server

To install nginx on Ubuntu, use the following command:

sudo apt install nginx

Once installed, we move on to configuring the reverse proxy service.

2. Configuring nginx

nginx sites are configured mainly through the files found in the /etc/nginx/sites-available/ and /etc/nginx/sites-enabled/ directories.

Files for the available sites are created in /etc/nginx/sites-available/, and /etc/nginx/sites-enabled/ normally holds symbolic links pointing to the sites in /etc/nginx/sites-available/ that we want to enable.

To configure a site ejemplo.com that acts as a reverse proxy to an internal server with IP ip_srv_interna (10.0.0.2), create the file ejemplo.com:

server {
  listen 80; # Listening port
  server_name ejemplo.com; # Server name or IP
  location / {
    proxy_pass http://10.0.0.2:80;
  }
}
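For example, the file can be created with an editor and saved under the path mentioned above (the choice of editor is of course yours):

sudo nano /etc/nginx/sites-available/ejemplo.com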

We can check that the configuration file is correct with the command:

sudo nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

If a message like the following appears:

nginx: [emerg] invalid parameter "server_name" in /etc/nginx/sites-enabled/ejemplo.com:6
nginx: configuration file /etc/nginx/nginx.conf test failed

We should check that line 6 of the file (or the lines around it) has the correct configuration.

Once the site configuration file is ready, create the symbolic link in the enabled-sites directory:

sudo ln -s /etc/nginx/sites-available/ejemplo.com /etc/nginx/sites-enabled/ejemplo.com
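A small addition to the original guide: on a fresh Ubuntu install the packaged default site is usually enabled as well, and it can answer requests before your new site does, so you may want to remove its link:

sudo rm /etc/nginx/sites-enabled/default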

With that in place, we can reload the service and check that our reverse proxy works:

sudo systemctl reload nginx

3. Enabling the firewall

Since the server will be exposed to the Internet, it is advisable to enable the firewall. While there are different methods on Linux, the major distributions have lately settled on dedicated firewall management tools: firewalld on Red Hat based distributions and ufw on Ubuntu based distributions.

To install the firewall management tool on Ubuntu, use the following command:

sudo apt install ufw

Once installed, use the following command to enable it:

sudo ufw enable

You can check the firewall status and rules with the following command:

 sudo ufw status verbose

To allow web traffic (HTTP):

sudo ufw allow http

Since we have the SSH service enabled, it is advisable to limit access to a specific subnet or network interface. In this case we will allow access only from machines on the 10.0.0.0/8 subnet, normally used for internal networks, and deny access from other networks, such as the Internet.

sudo ufw allow from 10.0.0.0/8 to any port 22
sudo ufw deny ssh
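As an optional alternative not covered in the original guide, ufw can also rate-limit a port instead of denying it outright, which helps against SSH brute-force attempts if you need to keep SSH reachable from wider networks:

sudo ufw limit ssh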

Next, we set the default policies for connections, blocking all incoming traffic and allowing all outgoing traffic, except where the rules created above apply:

sudo ufw default allow outgoing
Default outgoing policy changed to 'allow'
(be sure to update your rules accordingly)

sudo ufw default deny incoming
Default incoming policy changed to 'deny'
(be sure to update your rules accordingly)

We check the state of the rules again:

sudo ufw status verbose

Status: active
Logging: on (low)
Default: deny (incoming), allow (outgoing), disabled (routed)
New profiles: skip

To Action From
-- ------ ----
22 ALLOW IN 10.0.0.0/8
80 ALLOW IN Anywhere
22 DENY IN Anywhere
80 (v6) ALLOW IN Anywhere (v6)
22 (v6) DENY IN Anywhere (v6)

4. Security hardening and other nginx configuration

Although the reverse proxy is already functional, additional configuration can mitigate several kinds of attack, pass client information on to the internal web server so that the proxying is more transparent, and place the site’s logs in a specific path.
With that in mind, we will modify the ejemplo.com file with the desired settings.
In nginx configuration files, comments are written with the number sign (or, as it is so often called these days, the hashtag) #.
Below is the ejemplo.com file with the various settings and their descriptions:

sudo nano /etc/nginx/sites-enabled/ejemplo.com

limit_conn_zone $binary_remote_addr zone=addr:10m; # Memory reserved for the zone

server {
  listen 80; # Listening port
  server_name ejemplo.com; # Server name
  client_body_timeout 15s; # Close the connection if the request body takes longer than 15 seconds, to mitigate Slowloris attacks
  client_header_timeout 15s; # Close the connection if the request headers take longer than 15 seconds, to mitigate Slowloris attacks

  access_log /var/log/nginx/ejemplo.access.log; # Server access log
  location / {
    limit_conn addr 25; # Limit simultaneous connections from a single IP to 25, mitigating DoS attacks
    proxy_pass http://10.0.0.2:80;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    #deny 8.8.8.8; # Deny access to IPs that may be carrying out DDoS attacks
  }
}
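One more hardening tweak worth considering (my addition, not part of the original configuration): hide the nginx version string from response headers and error pages, so the curl test further down no longer reveals it:

server_tokens off; # valid in the http {} block of /etc/nginx/nginx.conf or in the server {} block above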

Once all the desired changes are in place, reload the nginx service again.

sudo systemctl reload nginx

To test it, browse to the public IP or to the domain http://ejemplo.com; the response from the internal web server, delivered through nginx, will be shown.

You can also check the server’s response headers with curl, which shows that the server delivering the responses is the nginx reverse proxy:

curl -I -k http://ejemplo.com/
HTTP/1.1 200 OK
Server: nginx/1.10.0 (Ubuntu)
Date: Fri, 17 Mar 2017 14:47:55 GMT
Content-Type: text/html
Content-Length: 701
Connection: keep-alive
Last-Modified: Fri, 10 Mar 2017 20:58:39 GMT
Accept-Ranges: bytes
ETag: "326cc417e198d21:0"

5. Enabling Mandatory Access Control (MAC) with AppArmor

We can secure our reverse proxy even further with AppArmor, which restricts the files a service may access, so that if nginx is attacked through a vulnerability the impact stays confined to nginx and does not reach other services on the system. Red Hat based distributions ship a similar framework for confining processes called SELinux; the goals are similar, but the configuration is completely different.

Ubuntu ships with AppArmor installed and enabled by default. However, some tools and example profiles are needed to create a confinement profile for nginx; they are installed as follows:

sudo apt-get install apparmor-utils

Create an AppArmor profile for nginx with the following commands:

cd /etc/apparmor.d/
sudo aa-autodep nginx

Put the profile in complain (logging) mode, so that the files nginx needs in order to run are recorded:

sudo aa-complain nginx

Restart the nginx service and use it by visiting http://ejemplo.com from a browser, so that AppArmor’s logging captures the files the service uses while running.

sudo systemctl restart nginx

Visit ejemplo.com from a browser.

Once AppArmor has logged the files that were required, use the following command to have the system update the profile, authorizing access to what was found; this is done by pressing the A key at each permission prompt.

sudo aa-logprof

This is safe to do because the service is running normally and is not under attack; otherwise it is advisable to review each requested permission individually.
Here is an example of a generated profile file:

# Last Modified: Fri Mar 17 12:28:11 2017
#include <tunables/global>

/usr/sbin/nginx flags=(complain) {
#include <abstractions/base>
#include <abstractions/lxc/container-base>
/etc/group r,
/etc/nginx/conf.d/ r,
/etc/nginx/mime.types r,
/etc/nginx/nginx.conf r,
/etc/nginx/sites-enabled/ r,
/etc/nsswitch.conf r,
/etc/passwd r,
/etc/ssl/openssl.cnf r,
/usr/sbin/nginx mr,
/var/log/nginx/error.log w,
}

If needed, the profile can be edited manually with the command:

sudo nano /etc/apparmor.d/usr.sbin.nginx

Reload the AppArmor service and restart nginx; with that, the service is covered by AppArmor.

sudo /etc/init.d/apparmor reload
sudo systemctl restart nginx

You can confirm it is working by checking that nginx is being confined, using the following command:

sudo apparmor_status

apparmor module is loaded.
 13 profiles are loaded.
 12 profiles are in enforce mode.
 /sbin/dhclient
 /usr/bin/lxc-start
 /usr/lib/NetworkManager/nm-dhcp-client.action
 /usr/lib/NetworkManager/nm-dhcp-helper
 /usr/lib/connman/scripts/dhclient-script
 /usr/lib/lxd/lxd-bridge-proxy
 /usr/lib/snapd/snap-confine
 /usr/lib/snapd/snap-confine//mount-namespace-capture-helper
 lxc-container-default
 lxc-container-default-cgns
 lxc-container-default-with-mounting
 lxc-container-default-with-nesting
 1 profiles are in complain mode.
 /usr/sbin/nginx
 3 processes have profiles defined.
 0 processes are in enforce mode.
 3 processes are in complain mode.
 /usr/sbin/nginx (2369)
 /usr/sbin/nginx (2370)
 /usr/sbin/nginx (2371)
 0 processes are unconfined but have a profile defined.
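Note that in the status output above the nginx profile is still in complain mode, which only logs violations. As a small addition to the guide: once you are happy with the profile, you can switch it to enforce mode so that anything outside the profile is actually blocked:

sudo aa-enforce /etc/apparmor.d/usr.sbin.nginx
sudo systemctl restart nginx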

The scope of this guide is a web service that does not use TLS (SSL), so additional steps are needed to set up a secure reverse proxy using certificates.

If you have questions, or suggestions for improving any security aspect, don’t hesitate to comment!

Sources:
https://www.nginx.com/resources/admin-guide/reverse-proxy/
https://nginx.org/en/docs/
https://en.wikipedia.org/wiki/Reverse_proxy
https://help.ubuntu.com/community/UFW
https://wiki.ubuntu.com/UncomplicatedFirewall
https://www.linode.com/docs/security/firewalls/configure-firewall-with-ufw
https://www.digitalocean.com/community/tutorials/how-to-create-an-apparmor-profile-for-nginx-on-ubuntu-14-04/

on March 17, 2017 07:45 PM

After 9 years, 20 Ubuntu releases, 14,455 edits and 359,186 miles in the air, it’s time for me to move on to something else. I’ve been privileged to work with some of the brightest people in the industry, and for that I’ll always be grateful.

But I won’t be going far, I’ll be starting next month at Heptio, where I will get to work with them in supporting and advancing Kubernetes. The community is rapidly expanding, and I’m looking forward to contributing to the machinery that helps keep it a great place for people to participate. If you’ve not yet given k8s a spin, you should, it’s great stuff and only getting better every release.

There are too many people who I’d like to thank, but you know who you are, and I’ll still see everyone around at conferences. I’ll be around for the next few weeks to tie things up, and then onward and upward!

on March 17, 2017 11:06 AM

March 16, 2017

S10E02 – Wiry Labored Sense - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Season Ten Episode Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

The same line up as last week are here again for another episode.

In this week’s show:

  • We discuss what we’ve been up to recently: Mark has been gardening and Martin has been spending Bitcoin.
  • We review the Entroware Aether laptop.
    • Reviewed specification: i5 CPU, 8GB RAM, 1920×1080 display, 128GB SSD. £694.95.
    • If you want to enable playback of DRM protected DVDs on Ubuntu then run the following in a terminal:
      • sudo apt install libdvd-pkg and follow the prompts.
  • We share a Command Line Lurve:
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on March 16, 2017 03:00 PM
In a statement issued on March 14, 2017, the Free Software Foundation declares that the new GitHub Terms of Service don't conflict with copyleft, but still recommends to use other hosting sites.
on March 16, 2017 02:30 PM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In February, about 154 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 3 hours (out of 13h allocated, thus keeping 10 extra hours for March).
  • Balint Reczey did 13 hours (out of 13 hours allocated + 1.25 hours remaining, thus keeping 1.25 hours for March).
  • Ben Hutchings did 19 hours (out of 13 hours allocated + 15.25 hours remaining, he gave back the remaining hours to the pool).
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 12.5 hours (out of 13 hours allocated, thus keeping 0.5 hour for March).
  • Guido Günther did 8 hours.
  • Hugo Lefeuvre did nothing and gave back his 13 hours to the pool.
  • Jonas Meurer did 14.75 hours (out of 5 hours allocated + 9.75 hours remaining).
  • Markus Koschany did 13 hours.
  • Ola Lundqvist did 4 hours (out of 13h allocated, thus keeping 9 hours for March).
  • Raphaël Hertzog did 3.75 hours (out of 10 hours allocated, thus keeping 6.25 hours for March).
  • Roberto C. Sanchez did 5.5 hours (out of 13 hours allocated + 0.25 hours remaining, thus keeping 7.75 hours for March).
  • Thorsten Alteholz did 13 hours.

Evolution of the situation

The number of sponsored hours increased slightly thanks to Bearstech and LiHAS joining us.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file 39. The number of open issues continued its slight increase; this time it can be explained by the fact that many contributors did not spend all of their allocated hours (for various reasons). There’s nothing worrisome at this point.



on March 16, 2017 01:25 PM

For the longest time, the plan was to equip KDE neon’s Developer Editions with translations. As the Developer Editions are built directly from our Git repositories and we do not maintain translations alongside the source code, there is a bit of a problem as the build somehow needs to bridge the gap between code and translations.

It’s fortunate that I also happen to work on ReleaseMe, a KDE tarball release application, which I rebuilt from scratch years ago, so it supports third-party use of some of its functionality.

At this year’s FOSDEM, Plasma developer David Edmundson asked for translations in the Developer Editions. And, so, here we are. Both KDE neon Developer Editions now include translations live from our Subversion repository. They also include our x-test language allowing you to easily find improperly internationalized strings. Coverage is currently limited to KDE Frameworks and Plasma software.

The majority of tech to accomplish this is hidden in the internals of ReleaseMe itself. On the high-level this entails nothing more than resolving the KDE project and then getting its translations into the Git tree.

projects = ReleaseMe::Project.from_repo_url(url)
unless projects.size == 1
  raise "failed to resolve project #{repo_name} :: #{projects}"
end
project = projects[0]

l10n = ReleaseMe::L10n.new(l10n_origin, project.identifier,
                           project.i18n_path)
l10n.default_excluded_languages = [] # Include even x-test.
l10n.get(Dir.pwd)

(Underneath there’s, of course, lots of fiddly nonsense going on ;))

Enjoy!

on March 16, 2017 10:21 AM