November 22, 2017

Announcing snapcraft 2.35

Ubuntu Insights

The snapcraft team is pleased to announce that version 2.35 has been released.

Contributions

This release saw some excellent contributions from outside the snapcraft core team, and we want to give a shout out to those folks. A team thank you to:

New in this release

Core

Containers

Each build instance now correctly works out isolated temporary folder locations for users running many builds in parallel. There is also better detection of existing or missing LXD installations, so first-time users can better understand any problems with the host they are trying to use.

When running snapcraft from the snap, snapcraft now injects the snap itself into the build environment instead of apt installing the deb (sufficient for today, when only one base is supported), providing parity with the local environment at hand.

Work has gone into removing corner cases and providing useful feedback to users, making the experience feel more native.
Additionally, support has been added for using remote LXD instances.

To enable the persistent build containers feature, set the SNAPCRAFT_CONTAINER_BUILDS environment variable.
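For example, to run a build with the feature enabled (a sketch; setting the variable to 1 is an assumption, the post only says it needs to be set):

SNAPCRAFT_CONTAINER_BUILDS=1 snapcraft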

Here’s an example of using a remote lxd instance:

asciicast

Recording

In this new version we added more information to the build manifest, such as the contents of lock files, the debs and snaps installed on the machine, information from uname, and the fingerprint of the container used for the build. To record the build manifest, set the SNAPCRAFT_BUILD_INFO environment variable. The manifest will be saved and distributed inside the snap; after the build, you can inspect it in prime/snap/manifest.yaml.
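For example (the value 1 is used here for illustration; the variable simply needs to be set):

SNAPCRAFT_BUILD_INFO=1 snapcraft
cat prime/snap/manifest.yaml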

asciicast

Command Line Interface

new command: pack

This new pack command replaces the now-deprecated snap <snap-dir>, with the goal of decoupling the concept of working on an actual snapcraft project from that of packing up a directory layout into a snap.
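For example, to pack an already prepared directory layout (my-snap-dir is a placeholder):

snapcraft pack my-snap-dir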

new command: refresh

This command is only available when persistent build containers are enabled and exists to make the environment feel as native as possible. Prior to this command, building continuously in a container triggered a refresh of the packaging archive every time; now the refresh only takes place on container creation or when requested through snapcraft refresh.
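With SNAPCRAFT_CONTAINER_BUILDS set, refreshing the container's packaging archive is simply:

snapcraft refresh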

new command: edit-collaborators

This command will eventually replace the store invites mechanism for setting up other people as collaborators on a snap. It is currently hidden, as the production snap store has the feature disabled; a future release will expose the command once things have stabilized. It is harmless to use today, as a proper error will show up.
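Assuming it follows the usual snapcraft store-command shape, the invocation would look something like this (my-snap is a placeholder):

snapcraft edit-collaborators my-snap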

In the meantime, here is how it works when using the integration store:
asciicast

OS Support

Solus

Initial support for running the snapcraft snap on Solus has been added. It should work well enough for things like performing store operations and packing up snaps; if LXD is installed and set up, most operations should work through persistent build containers or cleanbuild.

We look forward to hearing how this initial experience performs.

Ubuntu 14.04

Snapcraft currently only really runs well on Ubuntu 16.04, but we’re working on adding support for other releases and Linux distributions. This is the first release where you can use the Snapcraft snap on Ubuntu 14.04 (Trusty). This is particularly important for snaps based on ROS (Robot Operating System) Indigo, which targets Trusty. Here’s a demo of just that:

asciicast

Plugins

dotnet

This plugin, developed by Rajesh, a .NET developer at Microsoft, allows you to create .NET 2.x based snaps. It currently embeds the runtime, with plans to enhance it to understand content snaps of .NET runtimes that projects could leverage.

The syntax is pretty straightforward and builds on language understood upstream, so getting started should feel like a pleasant journey for a current .NET developer.
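As a sketch of that syntax, a minimal part might look like the following (the part name is a placeholder, and only the plugin name is confirmed by this post):

parts:
  my-dotnet-app:
    plugin: dotnet
    source: .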

Here is the plugin in action:
asciicast

ruby

This release sees the addition of a Ruby plugin, written by James Beedy. It supports a number of different Ruby versions by building them from source, which takes a little while but makes it pretty versatile. It could definitely use some exercise! Here’s an example of building a snap of the Travis gem:

asciicast

catkin

The Catkin plugin has long supported rosdep to resolve and fetch system dependencies (i.e. Debian packages). However, rosdep also supports resolving pip dependencies. This release adds support for those, so they don't need to be specified elsewhere in the snapcraft.yaml.
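As a sketch, a part using the plugin might look like this (the part and package names are placeholders, and catkin-packages is the plugin option as commonly documented, so treat it as an assumption; the rosdep-resolved pip dependencies then come along automatically):

parts:
  my-ros-workspace:
    plugin: catkin
    source: .
    catkin-packages: [my_package]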

Final notes

To get the source for this release, check it out on GitHub.

A great place to collaborate and discuss snapcraft features, bugs, and ideas is the forum. Please also feel free to file a bug.

Happy snapcrafting!
— Sergio and the team

on November 22, 2017 10:36 PM

The FIXME hackerspace in Lausanne, Switzerland is preparing a VR Hackathon on the weekend of 1-3 December.

Competitors and visitors are welcome, please register here.

Some of the free software technologies in use include Blender and Mozilla VR.

on November 22, 2017 07:25 PM

November 21, 2017

Bloomberg, Walmart, eBay, Samsung, Dell. Ever wonder how some of the world’s largest enterprises run on Ubuntu? This December, we are hosting our first ever Ubuntu Enterprise Summit to tell you how, and to help guide your own organisation, whether you are running the cloud in a large telco or deriving revenue from your next IoT initiative.

The Ubuntu Enterprise Summit is a two-day event of webinars on December 5th and 6th where you can join Canonical’s product managers, technical leads, partners and customers to get an inside look at why some of the world’s largest companies have chosen Ubuntu. Whether you are focused on the cloud or are living life at the edge, the webinars will also look at trends and the considerations for your organisation when implementing such technologies.

To kick off the event on December 5th, Canonical CEO and founder Mark Shuttleworth will deliver a keynote talk on 21st Century Infrastructure. Following Mark’s opening, there will be a series of other events, and you can register now for those that spark your interest by clicking on the links below.

Tuesday, December 5th

Building 21st Century Infrastructure

Speaker: Mark Shuttleworth, CEO and Founder, Canonical and Ubuntu

Time: 8-9AM PST / 11AM-12PM EST / 4-5PM GMT

More Info

Special Kubernetes Announcement from KubeCon

Speaker: Marco Ceppi

Time: 9-10AM PST / 12-1PM EST / 5-6PM GMT

More Info

Get ready for multi-cloud

Speaker: Mark Baker, Field Product Manager, Canonical

Time: 10-11AM PST / 1-2PM EST / 6-7PM GMT

More Info

Hybrid cloud & financial services – how to compete with cloud native new entrants

Speaker: Chris Kenyon, SVP, Worldwide Sales & Business Development, Canonical

Time: 11AM-12PM PST / 2-3PM EST / 7-8PM GMT

More Info

Wednesday, December 6th

Ubuntu: What’s the security story?

Speaker: Dustin Kirkland, VP, Product Development

Time: 7-8AM PST / 10-11AM EST / 3-4PM GMT

More Info

How City Network solves the challenges for the modern financial company

Speaker: Johan Christenson, CEO of City Network

Time: 8-9AM PST / 11AM-12PM EST / 4-5PM GMT

More Info

Appstores: The path to IoT revenue post-sale

Speaker: Mike Bell, EVP, Devices & IoT

Time: 9-10AM PST / 12-1PM EST / 5-6PM GMT

More Info

Cloud to edge: Building the software defined telco infrastructure

Speaker: Nathan Rader, Director of NFV Strategy

Time: 10-11AM PST / 1-2PM EST / 6-7PM GMT

More Info

Introduction to MAAS: building the agile data centre

Speaker: Mark Shuttleworth, CEO & Founder, and Andres Rodriguez, MAAS Product Manager

Time: 11AM-12PM PST / 2-3PM EST / 7-8PM GMT

More Info

If you can’t make the webinar of your choice, all sessions will also be available to view post-event so you don’t miss out.

We look forward to seeing you at the Ubuntu Enterprise Summit!

on November 21, 2017 07:16 PM

Dear Ubuntu Community,

We’re happy to report that the five vacant seats on the LoCo Council have been, at long last, filled. Your new council members are as follows:

  1. Nathan Haines (incumbent) (@nhaines)
  2. Carla Sella (@carla-sella)
  3. Kyle Fazzari (@kyrofa)
  4. Ken VanDine (@kenvandine)
  5. Gustavo Silva (@gsilvapt)

A big congratulations to them!

The local community project is an international effort which strives to evangelize and support the Ubuntu project around the world. The LoCo Council acts on the delegation of the Community Council to support this worldwide movement. Most notably they have been involved in reviewing verification requests.

The Community Council will be working closely with the LoCo Council in the upcoming months to give new value to both the LoCo project and the LoCo Council itself, so exciting times are ahead!


This post was initially posted on the Ubuntu Community Hub by Martin Wimpress from the Ubuntu Community Council.

on November 21, 2017 04:39 PM

MAAS 2.3.0 (final) Released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 (final) is now available!
This new MAAS release introduces a set of exciting features and improvements to the overall user experience. It now becomes the focus of maintenance, as it fully replaces MAAS 2.2.

In order to provide sufficient notice, please be aware that 2.3.0 will replace MAAS 2.2 in the Ubuntu Archive in the coming weeks. In the meantime, MAAS 2.3 is available in a PPA and as a snap.

PPA Availability

MAAS 2.3.0 is currently available in ppa:maas/next for the coming week.
sudo add-apt-repository ppa:maas/next
sudo apt-get update
sudo apt-get install maas
Please be aware that MAAS 2.3 will replace MAAS 2.2 in ppa:maas/stable within a week.
Snap Availability
For those wanting to use the snap, you can obtain it from the stable channel:
sudo snap install maas --devmode --stable

MAAS 2.3.0 (final)

Important announcements

Machine network configuration now deferred to cloud-init.

Starting from MAAS 2.3, machine network configuration is handled by cloud-init. In previous MAAS (and curtin) releases, the network configuration was performed by curtin during the installation process. In an effort to improve robustness, network configuration has now been consolidated in cloud-init. MAAS will continue to pass network configuration to curtin, which, in turn, will delegate the configuration to cloud-init.

Ephemeral images over HTTP

As part of the effort to reduce dependencies and improve reliability, MAAS ephemeral (network boot) images are no longer loaded using iSCSI (tgt). By default, the ephemeral images are now obtained using HTTP requests to the rack controller.

After upgrading to MAAS 2.3, please ensure you have the latest available images. For more information please refer to the section below (New features & improvements).

Advanced network configuration for CentOS & Windows

MAAS 2.3 now supports the ability to perform network configuration for CentOS and Windows. The network configuration is performed via cloud-init. MAAS CentOS images now use the latest available version of cloud-init that includes these features.

New features & improvements

CentOS network configuration

MAAS can now perform machine network configuration for CentOS 6 and 7, providing networking feature parity with Ubuntu for those operating systems. The following can now be configured for MAAS deployed CentOS images:

  • Bonds, VLAN and bridge interfaces.
  • Static network configuration.

Our thanks to the cloud-init team for improving the network configuration support for CentOS.

Windows network configuration

MAAS can now configure NIC teaming (bonding) and VLAN interfaces for Windows deployments. This uses the native NetLBFO in Windows 2008+. Contact us for more information (https://maas.io/contact-us).

Improved Hardware Testing

MAAS 2.3 introduces a new and improved hardware testing framework that significantly improves the granularity and provision of hardware testing feedback. These improvements include:

  • An improved testing framework that allows MAAS to run each component individually. This allows MAAS to run tests against storage devices for example, and capture results individually.
  • The ability to describe custom hardware tests with a YAML definition:
    • This provides MAAS with information about the tests themselves, such as script name, description, required packages, and other metadata about what information the script will gather, all of which MAAS uses when rendering the UI.
    • Determines whether the test supports a parameter, such as storage, allowing the test to be run against individual storage devices.
    • Provides the ability to run tests in parallel by setting this in the YAML definition.
  • Capture performance metrics for tests that can provide it.
    • CPU performance tests now offer a new ‘7z’ test, providing metrics.
    • Storage performance tests now include a new ‘fio’ test providing metrics.
    • Storage test ‘badblocks’ has been improved to provide the number of badblocks found as a metric.
  • The ability to override a machine that has been marked ‘Failed testing’. This allows administrators to acknowledge that a machine is usable despite it having failed testing.

Hardware testing improvements include the following UI changes:

  • Machine Listing page
    • Displays whether a test is pending, running or failed for the machine components (CPU, Memory or Storage).
    • Displays whether a test not related to CPU, Memory or Storage has failed.
    • Displays a warning when the machine has been overridden and has failed tests, but is in a ‘Ready’ or ‘Deployed’ state.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, Memory, Storage).
    • Hardware Tests /Commission tab – Provides an improved view of the latest test run, its runtime as well as an improved view of previous results. It also adds more detailed information about specific tests, such as status, exit code, tags, runtime and logs/output (such as stdout and stderr).
    • Storage tab – Displays the status of specific disks, including whether a test is OK or failed after running hardware tests.

For more information please refer to https://docs.ubuntu.com/maas/2.3/en/nodes-hw-testing.

Network discovery & beaconing

In order to confirm network connectivity and aid in the discovery of VLANs, fabrics and subnets, MAAS 2.3 introduces network beaconing.

MAAS now sends out encrypted beacons, facilitating network discovery and monitoring. Beacons are sent using IPv4 and IPv6 multicast (and unicast) to UDP port 5240. When registering a new controller, MAAS uses the information gathered from the beaconing protocol to ensure that newly registered interfaces on each controller are associated with existing known networks in MAAS. This aids MAAS by providing better information on determining the network topology.

Using network beaconing, MAAS can better correlate which networks are connected to its controllers, even if interfaces on those controllers are not configured with IP addresses. Future uses for beaconing could include validation of networks from commissioning nodes, MTU verification, and a better user experience for registering new controllers.

Upstream Proxy

MAAS 2.3 now enables an upstream HTTP proxy to be used while allowing MAAS deployed machines to continue to use the caching proxy for the repositories. Doing so provides greater flexibility for closed environments, including:

  • Enabling MAAS itself to use a corporate proxy while allowing machines to continue to use the MAAS proxy.
  • Allowing machines that don’t have access to a corporate proxy to gain network access using the MAAS proxy.

Adding upstream proxy support also includes an improved configuration on the settings page. Please refer to Settings > Proxy for more details.

Ephemeral Images over HTTP

Historically, MAAS has used ‘tgt’ to provide images over iSCSI for the ephemeral environments (e.g. commissioning, deployment, rescue mode, etc.). MAAS 2.3 changes the default behaviour by now providing images over HTTP.

These images are now downloaded directly by the initrd. The change means that the initrd loaded on PXE will contact the rack controller to download the image to load in the ephemeral environment. Support for using ‘tgt’ is being phased out in MAAS 2.3, and will no longer be supported from MAAS 2.4 onwards.

Users who would like to continue to load their ephemeral images via ‘tgt’ can disable HTTP boot with the following command:

  maas <user> maas set-config name=http_boot value=False

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 introduces an improved design for the machines, devices and controllers detail pages that include the following changes.

  • “Summary” tab now only provides information about the specific node (machine, device or controller), organised across cards.
  • “Configuration” has been introduced, which includes all editable settings for the specific node (machine, device or controllers).
  • “Logs” consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 include:

  • Added a DHCP status column on the ‘Subnets’ tab.
  • Added architecture filters.
  • Updated the VLAN and Space details pages to no longer allow inline editing.
  • Updated the VLAN page to include the IP ranges tables.
  • Converted the Zones page to AngularJS (away from YUI).
  • Added warnings when changing a subnet’s mode (Unmanaged or Managed).
  • Renamed “Device Discovery” to “Network Discovery”.
  • Discovered devices where MAAS cannot determine the hostname now show the hostname as “unknown” and greyed out, instead of using the MAC address manufacturer as the hostname.

Rack Controller Deployment

MAAS 2.3 can now automatically deploy rack controllers when deploying a machine. This is done by providing cloud-init user data; once a machine is deployed, cloud-init will install and configure the rack controller. Upon registration, MAAS will detect that the machine is now a rack controller and transition it automatically. To deploy a rack controller, users can do so via the API (or CLI), e.g.:

maas <user> machine deploy <system_id> install_rackd=True

Please note that this feature makes use of the MAAS snap to configure the rack controller on the deployed machine. Since snap store mirrors are not yet available, the machine will require internet access to be able to install the MAAS snap.

Controller Versions & Notifications

MAAS now surfaces the version of each running controller and notifies users of any version mismatch between the region and rack controllers. This helps administrators identify mismatches when upgrading a multi-node MAAS cluster, such as an HA setup.

Improved DNS Reloading

This new release introduces various improvements to the DNS reload mechanism. This allows MAAS to be smarter about when to reload DNS after changes have been automatically detected or made.

API Improvements

The machines API endpoint now provides more information on the configured storage and provides additional output that includes volume_groups, raids, cache_sets, and bcaches fields.
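For example, querying the endpoint from the CLI (admin is a placeholder profile name, and piping through jq is just one way to pick out the new fields):

maas admin machines read | jq '.[0] | {volume_groups, raids, cache_sets, bcaches}'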

Django 1.11 support

MAAS 2.3 now supports the latest Django LTS version, Django 1.11. This allows MAAS to work with the newer Django version in Ubuntu Artful, which serves as a preparation for the next Ubuntu LTS release.

  • Users running MAAS in Ubuntu Artful will use Django 1.11.
  • Users running MAAS in Ubuntu Xenial will continue to use Django 1.9.
on November 21, 2017 03:34 PM

Denied

Ante Karamatić

After I gave up on Hitro yesterday (looking for parking around the Fina building on Vukovarska is pure madness), I tried again today. Unlike the branch office in Šibenik, the one in Zagreb would not even take my papers. The explanation was “You filled in the form by hand” and “You need to bring a copy of a dictionary of foreign words for the word ‘solutions’”. The form in question is only available as a PDF, so I suppose I should now buy software to edit the file I need in order to ask the state whether I may call my company what I want. And I might understand the unfamiliarity with English and the need for a definition of the word ‘solutions’ if this were the first time it appeared in the court register. But it is not; there are dozens, if not hundreds, of companies with the word ‘solutions’ in their names.

My final decision is to keep trying every day until the end of this week. If I do not succeed by Friday, I will not be incorporating the company in the Republic of Croatia.

on November 21, 2017 07:36 AM

I am a wee bit late in posting for Ubuntu Community Appreciation Day yet again.

Things have been a bit busy. Even though I have been on layoff from my paid job as a civil servant for the United States Government, I have been active in church affairs. With a number of church leaders absent this past Sunday, I had to cover a few things. All of that leads to me being thankful for much in the Ubuntu world.

I am thankful to Martin Wimpress and crew for having Ubuntu MATE available for Raspberry Pi. I run it on my RPi2. A screenshot duplicated from the MATE site:

Ubuntu MATE 16.04 on Raspberry Pi

You can get more details about that great software here.

I am also very thankful for LaTeX2e and TeX Live, which have been great for preparing devotional materials for church. I am thankful for the MOTU folks maintaining Gummi, the editor I use on Xubuntu. Xubuntu is what I run on the laptop that goes many places with me; TeX Live runs both on the laptop and on the Raspberry Pi 2 at home.

I am thankful to Alan Pope for helping to shepherd folks building snaps. Alan also has a wonderful website dedicated to an encounter with AI gone awry. I commend the viewing of that site to everybody possible.

I am thankful for Colin Watson keeping Launchpad alive. I may be one of the few using bzr but that’s where the source to this blog lives.

And last, I am thankful to folks running LibraryThing as they help me keep track of the books I own. They gave me a subject breakdown here:

An embedded graphic you might not see

Have a great day!

Creative Commons License
Late Post For Ubuntu Community Appreciation Day 2017 by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://ubuntu-mate.org/raspberry-pi/.

on November 21, 2017 05:17 AM

November 20, 2017

I have to say it to you one thousand times: THANKS, Diogo!

Because you really believe in free(dom) software and Ubuntu, because you are making the Community so much better, because you are always available to give me a hand, and especially because you are an awesome person |o/
on November 20, 2017 07:33 PM

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.
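The two filter lines are simply the bare hostnames, one per line:

www.dailymail.co.uk
www.express.co.uk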

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

on November 20, 2017 12:00 AM

November 17, 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 197 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Antoine Beaupré did 21h (out of 16h allocated + 8.75h remaining, thus keeping 3.75h for November).
  • Ben Hutchings did 20 hours (out of 15h allocated + 9 extra hours, thus keeping 4 extra hours for November).
  • Brian May did 10 hours.
  • Chris Lamb did 18 hours.
  • Emilio Pozuelo Monfort did 7 hours (out of 20.75 hours allocated + 1.5 hours remaining, thus keeping 15.25 hours for November).
  • Guido Günther did 6.5 hours (out of 11h allocated + 1 extra hour, thus keeping 5.5h for November).
  • Hugo Lefeuvre did 20h.
  • Lucas Kanashiro did 2 hours (out of 5h allocated, thus keeping 3 hours for November).
  • Markus Koschany did 19 hours (out of 20.75h allocated, thus keeping 1.75 extra hours for November).
  • Ola Lundqvist did 7.5h (out of 7h allocated + 0.5 extra hours).
  • Raphaël Hertzog did 13.5 hours (out of 12h allocated + 1.5 extra hours).
  • Roberto C. Sanchez did 11 hours (out of 20.75 hours allocated + 14.75 hours remaining, thus keeping 24.50 extra hours for November; he will give back the remaining hours at the end of the month).
  • Thorsten Alteholz did 20.75 hours.

Evolution of the situation

The number of sponsored hours increased slightly to 183 hours per month. With the increasing number of security issues to deal with, and with the number of open issues not really going down, I decided to bump the funding target to what amounts to 1.5 full-time positions.

The security tracker currently lists 50 packages with a known CVE and the dla-needed.txt file lists 36 (we’re a bit behind in CVE triaging, apparently).

Thanks to our sponsors

New sponsors are in bold.


on November 17, 2017 02:31 PM

November 16, 2017

This week we’ve been upgrading from OpenWRT to LEDE and getting wiser, or older. GoPro open sources the CineForm codec, Arch Linux drops i686, Intel and AMD collaborate on a new Intel product family, 13 AD&D games have been released by GOG, and IBM releases a new typeface called Plex.

It’s Season Ten Episode Thirty-Seven of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on November 16, 2017 03:00 PM
The 4.3.0 release introduces a menu icon and a launcher for ucaresystem-core. Once installed or updated, you will find a uCareSystem Core entry in your menu that you can click if you want to launch ucaresystem-core. Now think for a moment of a friend of yours, or your parents, who are not comfortable with the terminal. … Continue reading “ucaresystem core 4.3.0: Launch it from your applications menu”
on November 16, 2017 10:06 AM

Ubuntu's Guitar Pick

Mohamad Faizul Zulkifli




Rare and collectible item. Suitable for guitar hobbyists.
posted from Bloggeroid
on November 16, 2017 05:04 AM


November 15, 2017

Kubuntu Most Wanted

Kubuntu General News

Kubuntu Cafe Live is our new community show, styled in a magazine format. We have created lots of space for community involvement by breaking the show into multiple segments, and we want to get you involved. We are looking for Presenters, Trainers, Writers and Hosts.

  • Are you looking for an opportunity to present your idea or application?
  • Would you like to teach our community about an aspect of Kubuntu or KDE?
  • Would you like to be a show, article or news writer?
  • Interested in being a host on Kubuntu Cafe Live?

Contact Rick Timmis or Valorie Zimmerman to get started.

The Kubuntu Cafe features a very broad variety of show segments. These include free-format unconference segments which can accommodate your ideas, a Dojo for teaching and training, Community Feedback, a Developers Update, and News & Views.

For the upcoming show schedule, please check the Kubuntu calendar.

Check out the show to see the new format.

on November 15, 2017 10:12 PM

Francois and Nemen at the FIXME hackerspace (Lausanne) weekly meeting are experimenting with the Ring peer-to-peer softphone:

Francois is using a Raspberry Pi and PiCam to develop a telepresence network for hackerspaces (the big screens in the middle of the photo).

The original version of the telepresence solution uses WebRTC. Ring's OpenDHT potentially offers more privacy and resilience.

on November 15, 2017 07:57 PM

Previously: v4.13.

Linux kernel v4.14 was released this last Sunday, and there’s a bunch of security things I think are interesting:

vmapped kernel stack on arm64
Similar to the same feature on x86, Mark Rutland and Ard Biesheuvel implemented CONFIG_VMAP_STACK for arm64, which moves the kernel stack to an isolated and guard-paged vmap area. With traditional stacks, there were two major risks when exhausting the stack: overwriting the thread_info structure (which contained the addr_limit field which is checked during copy_to/from_user()), and overwriting neighboring stacks (or other things allocated next to the stack). While arm64 previously moved its thread_info off the stack to deal with the former issue, this vmap change adds the last bit of protection by nature of the vmap guard pages. If the kernel tries to write past the end of the stack, it will hit the guard page and fault. (Testing for this is now possible via LKDTM’s STACK_GUARD_PAGE_LEADING/TRAILING tests.)

One aspect of the guard page protection that will need further attention (on all architectures) is that if the stack grew because of a giant Variable Length Array on the stack (effectively an implicit alloca() call), it might be possible to jump over the guard page entirely (as seen in the userspace Stack Clash attacks). Thankfully the use of VLAs is rare in the kernel. In the future, hopefully we’ll see the addition of PaX/grsecurity’s STACKLEAK plugin which, in addition to its primary purpose of clearing the kernel stack on return to userspace, makes sure stack expansion cannot skip over guard pages. This “stack probing” ability will likely also become directly available from the compiler as well.

set_fs() balance checking
Related to the addr_limit field mentioned above, another class of bug is finding a way to force the kernel into accidentally leaving addr_limit open to kernel memory through an unbalanced call to set_fs(). In some areas of the kernel, in order to reuse userspace routines (usually VFS or compat related), code will do something like: set_fs(KERNEL_DS); ...some code here...; set_fs(USER_DS);. When the USER_DS call goes missing (usually due to a buggy error path or exception), subsequent system calls can suddenly start writing into kernel memory via copy_to_user (where the “to user” really means “within the addr_limit range”).

Thomas Garnier implemented USER_DS checking at syscall exit time for x86, arm, and arm64. This means that a broken set_fs() setting will not extend beyond the buggy syscall that fails to set it back to USER_DS. Additionally, as part of the discussion on the best way to deal with this feature, Christoph Hellwig and Al Viro (and others) have been making extensive changes to avoid the need for set_fs() being used at all, which should greatly reduce the number of places where it might be possible to introduce such a bug in the future.

SLUB freelist hardening
A common class of heap attacks is overwriting the freelist pointers stored inline in the unallocated SLUB cache objects. PaX/grsecurity developed an inexpensive defense that XORs the freelist pointer with a global random value (and the storage address). Daniel Micay improved on this by using a per-cache random value, and I refactored the code a bit more. The resulting feature, enabled with CONFIG_SLAB_FREELIST_HARDENED, makes freelist pointer overwrites very hard to exploit unless an attacker has found a way to expose both the random value and the pointer location. This should render blind heap overflow bugs much more difficult to exploit.

Additionally, Alexander Popov implemented a simple double-free defense, similar to the “fasttop” check in the GNU C library, which will catch sequential free()s of the same pointer. (And has already uncovered a bug.)

Future work would be to provide similar metadata protections to the SLAB allocator (though SLAB doesn’t store its freelist within the individual unused objects, so it has a different set of exposures compared to SLUB).

setuid-exec stack limitation
Continuing the various additional defenses to protect against future problems related to userspace memory layout manipulation (as shown most recently in the Stack Clash attacks), I implemented an 8MiB stack limit for privileged (i.e. setuid) execs, inspired by a similar protection in grsecurity, after reworking the secureexec handling by LSMs. This complements the unconditional limit to the size of exec arguments that landed in v4.13.

randstruct automatic struct selection
While the bulk of the port of the randstruct gcc plugin from grsecurity landed in v4.13, the last of the work needed to enable automatic struct selection landed in v4.14. This means that the coverage of randomized structures, via CONFIG_GCC_PLUGIN_RANDSTRUCT, now includes one of the major targets of exploits: function pointer structures. Without knowing the build-randomized location of a callback pointer an attacker needs to overwrite in a structure, exploits become much less reliable.

structleak passed-by-reference variable initialization
Ard Biesheuvel enhanced the structleak gcc plugin to initialize all variables on the stack that are passed by reference when built with CONFIG_GCC_PLUGIN_STRUCTLEAK_BYREF_ALL. Normally the compiler will yell if a variable is used before being initialized, but it silences this warning if the variable’s address is passed into a function call first, as it has no way to tell if the function did actually initialize the contents. So the plugin now zero-initializes such variables (if they hadn’t already been initialized) before the function call that takes their address. Enabling this feature has a small performance impact, but solves many stack content exposure flaws. (In fact at least one such flaw reported during the v4.15 development cycle was mitigated by this plugin.)

improved boot entropy
Laura Abbott and Daniel Micay improved early boot entropy available to the stack protector by both moving the stack protector setup later in the boot, and including the kernel command line in boot entropy collection (since with some devices it changes on each boot).

eBPF JIT for 32-bit ARM
The ARM BPF JIT had been around a while, but it didn’t support eBPF (and, as a result, did not provide constant value blinding, which meant it was exposed to being used by an attacker to build arbitrary machine code with BPF constant values). Shubham Bansal spent a bunch of time building a full eBPF JIT for 32-bit ARM which both speeds up eBPF and brings it up to date on JIT exploit defenses in the kernel.

seccomp improvements
Tyler Hicks addressed a long-standing deficiency in how seccomp could log action results. In addition to creating a way to mark a specific seccomp filter as needing to be logged with SECCOMP_FILTER_FLAG_LOG, he added a new action result, SECCOMP_RET_LOG. With these changes in place, it should be much easier for developers to inspect the results of seccomp filters, and for process launchers to generate logs for their child processes operating under a seccomp filter.

Additionally, I finally found a way to implement an often-requested feature for seccomp, which was to kill an entire process instead of just the offending thread. This was done by creating the SECCOMP_RET_ACTION_FULL mask (née SECCOMP_RET_ACTION) and implementing SECCOMP_RET_KILL_PROCESS.

That’s it for now; please let me know if I missed anything. The v4.15 merge window is now open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

on November 15, 2017 05:23 AM

November 13, 2017

Latte Dock, the very popular dock/panel app for Plasma Desktop, has released its new bugfix version 0.7.2. This is also the first stable release since Latte Dock became an official KDE project at the end of August.

 

 

Version 0.7.1 was added to our backports PPA in a previous round of backports for Kubuntu 17.10 Artful Aardvark.

Today that has been updated to 0.7.2, and a build added for Kubuntu 17.04 Zesty Zapus users.

The PPA can be enabled by adding the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

Upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications and Plasma releases, so please be aware that enabling the backports PPA for the first time and doing a full upgrade would result in a substantial amount of upgraded packages in addition to Latte Dock.

~ The PPA will also continue to receive further bugfix updates when they become available, and further updated releases of Plasma and applications where practical.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu PPA bugs: https://bugs.launchpad.net/kubuntu-ppa

on November 13, 2017 06:10 PM

I am a pretty firm advocate of personal development. I don’t mean those cheesy self-help books that make you walk on coals, promise you a “secret formula” for wealth, and merely bang on about motivation and inspiration. That stuff is largely snake oil.

No, I mean genuine personal development: building discipline and new skills with practice, focus, and patience.

This kind of work teaches you to look at the world in a different way, to sniff out opportunity more efficiently, to treat challenges and (manageable) adversity as an opportunity to grow, to treat failure as a valuable tool for improvement, and to get a better work/life balance.

There is no quick pill or shortcut with this stuff: it takes work, time, patience, and practice, but it is a wonderful investment in yourself. It can reap great rewards in happiness, relationships, productivity, and more.

Sometimes I recommend some personal development resources (that I have found invaluable) when I speak at conferences, and it struck me that it might be helpful to package this up into a $150 Personal Development Kit: a recommended collection of items you can buy to get you a good start. It is a worthwhile investment.

IMPORTANT NOTE: these are merely my own recommendations. I am not making money from any of this, there are no referral links here, and I am not being asked to promote them. These are products I have personally got a lot of value out of, but of course, your mileage may vary.

Overall Approach

The items I am recommending in the kit are based upon what I consider to be the five key goals we should focus on in ourselves:

  1. Structured – with so much detail in the world, we often focus on only the urgent things, but not the important things. As such, we get stuck in a rat race. We should aim to look ahead, plan, and use our time and energy wisely so we can balance it on the things we need to do and the things we love to do.
  2. Reflective – we should always evaluate our experiences (both good and bad) to see how we can learn and improve. We want to develop a curiosity that manifests in positive adjustments to how we do things.
  3. Stoic – life will throw curveballs, and we need to train ourselves to manage adversity with logic, not emotion, and to find opportunity even in challenging times. This will strengthen us.
  4. Mindful – we need to train ourselves to manage our minds to be less busy and have a little more space. This will help with focus and managing stress.
  5. Habitual – the only way in which we grow and improve is to build good habits that implement these changes. As such, we should be explicit in how we design these habits and stick to them.

Let’s now run through these recommendations and I will provide some guidance on how to use them near the end of this post.

Books

Reading is a critical component in how we grow. Much of humanity’s broader wisdom has been documented, so why not learn from it?

One of the most valuable devices I have ever bought is an Amazon Kindle because it makes reading so convenient. If you are strapped for cash though, go and join your local library. Either way, make a few moments for reading each day (for me it is before bed), it is worth it.

Seven Habits Of Highly Effective People

While the title may sound like a tacky self-help effort, this book is fantastic, and a good starting point in this kit. It is, for me, the perfect starting point for personal development.

Essentially it teaches seven key principles for focusing on the right opportunities/problems, being proactive, getting the most value out of you work, building your skills, and more.

These are not trendy quick fixes: they are consistent principles that have stood the test of time. They are presented in simple and practical ways and easily applicable. This provides a great framework in which to base the rest of the kit.

The Obstacle Is The Way

I have become quite the fan of stoicism, an ancient philosophy that teaches resilience and growth in the most testing of times. Stoicism is a key pillar in effective personal development: it builds resilience and strength.

While the seven habits touches on some stoic principles, this book delves into further depth. It teaches us that in every challenge there is an opportunity for learning and growth. It helps us to train ourselves to manage challenging situations with logic and calmness as opposed to emotion and freaking out.

This book is one that I always recommend to people going through a tough time: it is wonderful at resetting our perspectives and showing that all scenarios can be managed more effectively if we approach them with the right mental perspective. This gives us confidence, resilience, and structure.

The Daily Stoic

When you have read The Obstacle Is The Way, this book is wonderful at keeping these stoic principles front and center. It provides a daily “meditation”, a key stoic principle to read, consider, and think about throughout the day.

I have found this really helpful. Part of personal development is building new ideas and mental frameworks in your head in which to apply to your life. This book is handy for applying the stoic piece so it doesn’t just remain an abstract concept, but something you can directly muse on and apply.

As with all of these methods and principles, they only stick if you practice. This book is a great way to build this discipline.

Nudge

The previous books are designed to build your psychological and organizational armor. While not strictly a personal development book, Nudge is more focused on our approach to problems.

In a nutshell, Nudge demonstrates that we make effective changes to problems with lots of small “nudges”. That is, instead of running in there with a big new solution, apply a collection of mini-solutions that move the needle and you will make more progress. This is huge for solving organizational issues, dealing with complicated people, taking on large projects, and more.

Services and Apps

In addition to the above books, there are also some key services and apps that I want to include in this kit.

Headspace 1 Year Subscription

Our lives are riddled with complexity, and as we get increasingly connected with social media, cell phones, and more, our minds are busier than ever before.

As such, meditation is a key personal development tool in managing our minds. In much the same way the previous books help shape a healthier and more pragmatic perspective, meditation is a key companion for this. There are numerous scientific benefits to meditation, but I have found it to be an invaluable tool in maintaining a calm, logical, and pragmatic perspective.

While there are various meditation services, I love Headspace. It is a little more expensive, but it is worth it. All you need is a pair of headphones and a computer/phone/tablet to get started.

You can join a plan on a month-to-month basis, but I included the 1-year plan in the kit because this should not be a temporary fad; it is a critical component throughout the year.

HabitBull

  • Free

The key to making all of the above stick is to practice every day until it becomes a habit. The general wisdom is that it takes 66 days to build a habit, so simply try to practice all of these principles once a day for 66 days straight. After this long you generally won’t have to think about doing something, it will just be part of your routine.

HabitBull (and many similar apps) simply provide a way to track these habits and when you stick to them. This is helpful in seeing your progress, just make sure you use it!

How To Use These

Now, before you get started, it is important to know that benefitting from these different elements of the kit is going to take some discipline.

There is no magic pill here: it will take practice and you will have some good days and bad days. Remember though, even doing a little each day has you lapping those doing nothing.

So, this is how I recommend you use these resources:

  • In HabitBull add some habits to track. Our goal is to stick to these every day for 66 days. Add items such as:
    • Reading (10mins a day)
    • Meditation (10mins a day)
    • Exercise (10mins a day)
  • Start by reading The Seven Habits of Highly Effective People.
  • At the same time start using Headspace and run through the three Basic packs which will take 30 days (10mins a day).
  • The next book to read is The Obstacle Is The Way. Again, while reading books, continue using Headspace and move on to the themed Headspace packs. Focus on the Prioritization pack next and then the Stress pack. Also listen to the Daily Headspace session, which is only 3 minutes long each day.
  • When you have completed The Obstacle Is The Way, start reading an entry every day from The Daily Stoic (add a habit to HabitBull to track this) and also begin reading Nudge. Again continue using Headspace throughout this.

The most important thing here is building the habit. Do something every day. Even if it means putting it in your calendar, make sure you apply yourself to the above every day.

Further Recommendations?

These are my recommendations for the kit. What else do you think should be included?

What other approaches and methods have you also found to be helpful?

Share your thoughts in the comments!

The post The $150 Personal Development Kit appeared first on Jono Bacon.

on November 13, 2017 04:00 PM

I was the sole editor and contributor of new content for A Practical Guide to Linux Commands, Editors, and Shell Programming, Fourth Edition.

I want to note that I feel I am standing on the shoulders of a giant, as the previous author, Mark Sobell, has been incredibly helpful in the hand-off of the book. Mark is retiring and leaving behind a great foundation for me.

on November 13, 2017 01:51 PM

Genoci and Lpack

Serge Hallyn

Introduction

I’ve been working on a pair of tools for manipulating OCI images:

  • genoci, for GENerating OCI images, builds images according to a recipe in yaml format.
  • lpack, the layer unpacker, unpacks an OCI image’s layers onto either btrfs subvolumes or thinpool LVs.

See the README.md of each for more detailed usage.

The two can be used together to speed up genoci’s builds by reducing the number of root filesystem unpacks and repacks. (See genoci’s README.md for details)

Example

While the projects’ READMEs give examples, here is a somewhat silly one just to give an idea. Copy the following into recipe.yaml:

cirros:
  base: empty
  expand: https://download.cirros-cloud.net/0.3.5/cirros-0.3.5-i386-lxc.tar.gz
weird:
  base: cirros
  pre: mount -t proc proc %ROOT%/proc
  post: umount %ROOT%/proc
  run: ps -ef > /processlist
  run: |
    cat > /usr/bin/startup << EOF
    #!/bin/sh
    echo "Starting up"
    nc -l -4 9999
    EOF
    chmod 755 /usr/bin/startup
  entrypoint: /usr/bin/startup

Then run “./genoci recipe.yaml”. You should end up with a directory “oci”, which you can interrogate with

$ umoci ls --layout oci
empty
cirros
cirros-2017-11-13_1
weird
weird-2017-11-13_1

You can unpack one of the containers with:

$ umoci unpack --image oci:weird
$ ls -l weird/rootfs/usr/bin/startup
-rwxr-xr-x 1 root root 43 Nov 13 04:27 weird/rootfs/usr/bin/startup

Upcoming

I’m about to begin the work to replace both with a single tool, written in golang, and based on an API exported by umoci.

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


on November 13, 2017 04:37 AM

November 12, 2017

I wrote a Web Component

Stuart Langridge

I’ve been meaning to play with Web Components for a little while now. After I saw Ben Nadel create a Twitter tweet progress indicator with Angular and Lucas Leandro did the same with Vue.js I thought, here’s a chance to experiment.

Web Components involve a whole bunch of different dovetailing specs; HTML imports, custom elements, shadow DOM, HTML templates. I didn’t want to have to use the HTML template and import stuff if I could avoid it, and pleasantly you actually don’t need it. Essentially, you can create a custom element named whatever-you-want and then just add <whatever-you-want someattr="somevalue">content here</whatever-you-want> elements to your page, and it all works. This is good.

To define a new type of element, you use window.customElements.define('your-element-name', YourClass). YourClass is an ES2016 JavaScript class. So, we start like this:


window.customElements.define('twitter-circle-count', class extends HTMLElement {
});

The class has a constructor method which sets everything up. In our case, we’re going to create an SVG with two circles: the “indicator” (which is the one that changes colour and fills in as you add characters), and the “track” (which is the one that’s always present and shows where the line of the circle goes). Then we shrink and grow the “indicator” circle by using Jake Archibald’s dash-offset technique. This is all perfectly expressed by Ben Nadel’s diagram, which I hope he doesn’t mind me borrowing because it’s great.

So, we need to dynamically create an SVG. The SVG we want will look basically like this:


<svg viewBox="0 0 100 100" xmlns="http://www.w3.org/2000/svg">
  <circle cx="50" cy="50" r="45" 
  style="stroke: #9E9E9E"></circle>
  <circle cx="50" cy="50" r="45" 
  style="stroke: #333333)"></circle>
</svg>

Let’s set that SVG up in our element’s constructor:


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    /* You must call super() first in the constructor. */
    super();

    /* Create the SVG. Note that we need createElementNS, not createElement */
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");

    /* Create the track. Note createElementNS. Note also that "this" refers to
       this element, so we've got a reference to it for later. */
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    /* And create the indicator, by duplicating the track */
    this.indicator = this.track.cloneNode(true);

    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
  }
});

Now we need to actually add that created SVG to the document. For that, we create a shadow root. This is basically a little separate HTML document, inside your element, which is isolated from the rest of the page. Styles set in the main page won’t apply to stuff in your component; styles set in your component won’t leak out to the rest of the page. This is easy with attachShadow, which returns you this shadow root, which you can then treat like a normal node:


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);

    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);
  }
});

Now, we want to allow people to set the colours of our circles. The way to do this is with CSS custom properties. Basically, you can invent any new property name you like, as long as it’s prefixed with --. So we invent two: --track-color and --circle-color. We then set the two circles to be those colours by using CSS’s var() syntax; this lets us say “use this variable if it’s set, or use this default value if it isn’t”. So our user can style our element with twitter-circle-count { --track-color: #eee; } and it’ll work.
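For instance, a page using the component might set both custom properties like this (a usage sketch; the colour values are arbitrary):

<style>
  twitter-circle-count {
    --track-color: #eee;
    --circle-color: #333;
  }
</style>
<twitter-circle-count></twitter-circle-count>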

Annoyingly, it doesn’t seem to be easily possible to use existing CSS properties for this; there doesn’t seem to be a good way to have the standard property color set the circle colour. One has to use a custom variable even if there’s a “real” CSS property that would be appropriate. I’m hoping I’m wrong about this and there is a sensible way to do it that I just haven’t discovered. (Update: Matt Machell mentions currentColor, which would work perfectly for this example, but it only works for color; there’s no way of setting other properties like, say, font-size on the component and having that explicitly propagate down to a particular element in the component; there’s no currentFontSize. I don’t know why color gets special treatment, even though the special treatment would solve my particular problem.)


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);
  }
});

We want our little element to be inline-block. To set properties on the element itself, from inside the element, there is a special CSS selector, :host.6 Add a <style> element inside the component and it only applies to the component (this is special “scoped style” magic), and setting :host styles the root of your element:


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);
  }
});

Next, we need to be able to set the properties which define the value of the counter — how much progress it should show. Having value and max properties similar to an <input type="range"> seems logical here. For this, we define a little function setDashOffset which sets the stroke-dashoffset style on our indicator. We then call that function in two places. One is in connectedCallback, a method which is called when our custom element is first inserted into the document. The second is whenever our value or max attributes change. That gets set up by defining observedAttributes, which returns a list of attributes that we want to watch; whenever one of those attributes changes, attributeChangedCallback is called.


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  static get observedAttributes() {
    return ['value', 'max'];
  }
  attributeChangedCallback(name, oldValue, newValue) {
    this.setDashOffset();
  }
  setDashOffset() {
    var mx = parseInt(this.getAttribute("max"), 10);
    if (isNaN(mx)) mx = 100;
    var value = parseInt(this.getAttribute("value"), 10);
    if (isNaN(value)) value = 0;
    this.indicator.style.strokeDashoffset = this.circumference - 
        (value * this.circumference / mx);
  }
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);

    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    /* We know what the circumference of our circle is. It doesn't matter
       how big the element is, because the SVG is always 100x100 in its own
       "internal coordinates": that's what the viewBox means. So the circle
       always has a 45px radius, and so its circumference is always the same,
       2πr. Store this for later. */
    this.circumference = Math.PI * (45 * 2);

    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);
  }
  connectedCallback() {
    this.setDashOffset();
  }
});

This works if the user of the component does counter.setAttribute("value", "50"), but it doesn’t make counter.value = 50 work, and it’s nice to provide these direct JavaScript APIs as well. For that we need to define a getter and a setter for each.


window.customElements.define('twitter-circle-count', class extends HTMLElement {
  static get observedAttributes() {
    return ['value', 'max'];
  }
  attributeChangedCallback(name, oldValue, newValue) {
    this.setDashOffset();
  }
  setDashOffset() {
    var mx = parseInt(this.getAttribute("max"), 10);
    if (isNaN(mx)) mx = this.defaultMax;
    var value = parseInt(this.getAttribute("value"), 10);
    if (isNaN(value)) value = this.defaultValue;
    this.indicator.style.strokeDashoffset = this.circumference - (
        value * this.circumference / mx);
  }
  get value() {
    var value = parseInt(this.getAttribute("value"), 10);
    if (isNaN(value)) return this.defaultValue;
    return value;
  }
  set value(value) { this.setAttribute("value", value); }
  get max() {
    var mx = parseInt(this.getAttribute("max"), 10);
    if (isNaN(mx)) return this.defaultMax;
    return mx;
  }
  set max(max) { this.setAttribute("max", max); }
  constructor() {
    super();
    var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
    svg.setAttribute("viewBox", "0 0 100 100");
    svg.setAttribute("xmlns", "http://www.w3.org/2000/svg");
    this.track = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    this.track.setAttribute("cx", "50");
    this.track.setAttribute("cy", "50");
    this.track.setAttribute("r", "45");
    this.indicator = this.track.cloneNode(true);
    this.track.style.stroke = "var(--track-color, #9E9E9E)";
    this.indicator.style.stroke = "var(--circle-color, #333333)";
    this.circumference = Math.PI * (45 * 2);
    svg.appendChild(this.track);
    svg.appendChild(this.indicator);
    let shadowRoot = this.attachShadow({mode: 'open'});
    shadowRoot.appendChild(svg);
    var style = document.createElement("style");
    style.innerHTML = ":host { display: inline-block; position: relative; contain: content; }";
    shadowRoot.appendChild(style);
    this.defaultValue = 50;
    this.defaultMax = 100;
  }
  connectedCallback() {
    this.setDashOffset();
  }
});

And that’s all we need. We can now create our twitter-circle-count element and hook it up to a textarea like this:

<twitter-circle-count value="0" max="280"></twitter-circle-count>
<p>Type in here</p>
<textarea rows=3 cols=40></textarea>

twitter-circle-count {
  width: 30px;
  height: 30px;
  --track-color: #ddd;
  --circle-color: #333;
  --text-color: #888;
}

// we use input, not keyup, because that fires when text is cut or pasted
// thank you Dave MN for that insight
document.querySelector("textarea").addEventListener("input", function() {
  document.querySelector("twitter-circle-count").setAttribute("value", this.value.length);
}, false);

and it works! I also added a text counter and a couple of other nicenesses, such as making the indicator animate to its position, and included a polyfill to add support in browsers that don’t have it.7

Here’s the counter, hooked up to a textarea where you can type some text (interactive demo in the original post).

  1. I relied for a lot of this understanding on Google’s web components documentation by Eric Bidelman.
  2. All this stuff is present already in Chrome; for other browsers you may need polyfills, and I’ll get to that later.
  3. Pedant posse: yes, it’s a bit more complicated than this. One step at a time.
  4. It would be possible to have color apply to our circle colour by monitoring changes to the element’s style, but that’s a nightmare.
  5. QML does this by setting “aliases”; in a component, you can say property alias foo: subelement.bar and setting foo on an instance of my component propagates through and sets bar on the subelement. This is a really good idea, and I wish Web Components did it somehow.
  6. Firefox doesn’t seem to support this yet, either :host or scoping styles so they don’t leak out of the component, so I’ve also set display:inline-block and position:relative on the twitter-circle-count selector in my normal CSS. This should be fixed soon.
  7. Mikeal Rogers has a really nice technique here for bundling your web component with a polyfill which is also worth considering.
on November 12, 2017 11:31 AM

November 11, 2017

This week we perfect the roast potato, discuss Google Code In, bring you some GUI love and go over your feedback.

It’s Season Ten Episode Thirty-Six of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on November 11, 2017 09:05 AM

I recently attended Dr. Dmitry Nedospasov’s 4-day “Hardware Hacking, Reversing and Instrumentation” training class as part of the HardwareSecurity.training event in San Francisco. I learned a lot, and it was incredibly fun class. If you understand the basics of hardware security and want to take it to the next level, this is the course for you.

The class predominantly focuses on the use of FPGAs for breaking security in hardware devices (embedded devices, microcontrollers, etc.). The advantage of FPGAs is that they can be used to implement arbitrary protocols and can operate with very high timing resolution. (e.g., single clock cycle, since it’s essentially synthesized hardware.)

The particular FPGA board used in this class is the Digilent Arty, based on the Xilinx Artix 7 FPGA. This board is clocked at 100 MHz, allowing 10ns resolution for high-speed protocols, timing attacks, etc. The development board contains over 33,000 logic cells with more than 20,000 LUTs and 40,000 flip-flops. (And if you don’t know what those things are, don’t worry, it’s explained in the class!) The largest project in the class only uses about 1% of the resources of this FPGA, so there’s plenty for more complex operations after the class.

Dmitry is obviously very knowledgeable as an instructor and has a very direct and hands-on style. If you’re looking for someone to spoon-feed you the course material, this won’t be the course you’re looking for. If, on the other hand, you prefer to learn by doing and just need an instructor to get you started and help you when you have issues, Dmitry has the perfect teaching style for you.

You should have some knowledge of basic hardware topics before starting the class. Knowing basic logic gates (AND, OR, NAND, XOR, etc.), basic electronics (i.e., how to supply power and avoid short circuits), and being familiar with concepts like JTAG and UARTs will help. I’ve taken several other hardware security classes before (including with Joe Fitzpatrick, another of the HardwareSecurity.training instructors and organizers) and I found that background knowledge quite useful. If you don’t know the basics, I highly recommend taking a course like Joe’s “Applied Physical Attacks on Embedded Systems and IoT” first.

The first day of the class is mostly lecture about the architecture of FPGAs and basic Verilog. Some Verilog is written and results simulated in the Xilinx Vivado tool. Beginning with the second day, work moves to the actual FPGA, beginning with a task as “simple” as implementing a UART in hardware, then moving to using the FPGA to brute force a PIN on a microcontroller, and finally moving on to a timing attack against the microcontroller. Many of the projects are implemented with the performance-critical parts done in Verilog on the FPGA and then communicating with a Python script for logic & calculation.

I really enjoyed the course – it was challenging, but not defeatingly so, and I learned quite a few new things from it. This was my first exposure to FPGAs and Verilog, but I now feel I could successfully use an FPGA for a variety of projects, and look forward to finding something interesting to try with it.

on November 11, 2017 08:00 AM

November 10, 2017

I am pleased to announce that ucaresystem core version 4.2.3 has been released with some cool features. Now, whether you have an Ubuntu or a Debian based distribution, you need only one deb package installer. Why bother? Here is the thing… I love creating automations (bash scripts). So until now there were 3 ways… Continue reading "ucaresystem core 4.2.3 : One installer for Ubuntu and Debian based distributions"
on November 10, 2017 11:13 PM

November 09, 2017

Call for participation: an ubuntu default theme lead by the community?

As part of our Unity 7 to GNOME Shell transition in 17.10, last August we had a Fit and Finish Sprint at the London office to get the Shell feeling more like Ubuntu, and we added some tweaks to our default GTK theme.

The outcome can be seen in the following posts:

Some more refinements came in afterward and the final 17.10 has a slightly modified look, but the general feedback we got from the community is that the ubuntu GNOME Shell session really looks and feels like ubuntu.

So I guess: objective completed (next level; attribute your earned points to your character’s skills… :p).

Default ubuntu 17.10 desktop

All done?

However, as in any good RPG, this isn’t the real end of the story (there is the next boss!): we have also heard some people (on the community hub, in blog post comments) asking for a more drastic refresh of our theme, and we generally agree. It would be a good idea to rebase and refresh our desktop theme with the help of the community!

For any themes, there are multiple parts:

  • The Shell theme itself (css).
  • The GTK3 and GTK2 themes. The first uses css; the second is some C code.
  • An icon theme.

This is thus a call for participation if you are interested in joining that journey with us. The idea is to have a few people (I think 2-3 people, plus Alan Pope and me), who have already contributed to popular Shell or GTK themes, leading this project. That way, we can define which changes to the theme feel like “ubuntu” and which don’t. We will coordinate all the work on the community hub to ensure that every decision is public, with the reasons explained.

We will sync regularly with the Canonical design team (we have one meeting at the end of the month already) to check progress and get advice. Once the Shell and GTK themes are ready, we’ll switch the ubuntu default to them. That may or may not happen for the LTS, depending on progress. If it’s not ready to be switched on by default, we will give instructions for our advanced users to get a taste of what’s currently cooking :)

How do we get that going?

The idea is to restart from scratch, basing the work on upstream (GNOME). Indeed, the current ubuntu theme is pure css, while upstream uses sass. The Shell theme itself didn’t deviate much, but the general idea is:

  • For GTK3 apps, start from Adwaita (the default GNOME theme) and modify constants, plus slight behaviour modifications, in the sass files. That way we don’t deviate too much, and it will be easy to rebase onto the numerous theme changes every new GTK release gets.
  • For the Shell, use their sass files and tweak them, for the same reasons.
  • For the icon theme, we might start from our unity8 Suru icon set?

Also, even if we hope that contributors from the popular United and other themes will come along, starting back from upstream’s playground provides a nice, neutral and clean ground; it doesn’t prevent us from cherry-picking from what exists already (see the sketch below). :)
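As a purely hypothetical sketch of the “modify constants in the sass files” idea (the variable name comes from upstream’s sass sources, but the file name and value here are made up for illustration, not actual design decisions):

// _ubuntu.scss — hypothetical override of an upstream sass constant
$selected_bg_color: #E95420;  // Ubuntu orange instead of the upstream default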

Come with us!

Anyone can contribute (preferably via pull requests on the projects we will create); however, in design, it’s always easier to have ideas than to come with concrete technical help ;). This is why this call exists: to get some idea of the number of people willing to spend some time on this project. The needed skills are either CSS (we’ll use SASS, more on that later) or C GTK theming. Also, icon designers are more than welcome. :)

Excited? Join us! If you are interested (in either leading or just contributing), please post your intentions and ideas on this community hub topic. Also, please list your technical skills so that we can get an idea of the amount of awesome help we’ll get!

We’ll of course post more info in the same desktop section once we are ready to kick off the project! We hope you are as thrilled by this project as we all are. ;)

on November 09, 2017 02:36 PM

Hi, the X-SWAT updates PPA has actually shipped Mesa 17.2 for 16.04 for a few weeks now, but it got bumped to the latest stable release yesterday. It’s available for the latest Ubuntu LTS (16.04) plus most recent interim release (17.10) as usual.

This version has also been uploaded to respective proposed queues as it’s a part of the HWE backport stack for 16.04.4 which will be released early next year, and for artful it’s a normal bugfix SRU. Feel free to shout out success reports on the SRU bug or here and don’t forget to list the GPU(s) you run it on.


on November 09, 2017 08:54 AM

November 08, 2017

Hosting in Vancouver?

Randall Ross

Dear Lazyweb,

If you are a happy customer (or a shameless promoter) of a hosting provider that has its servers in Vancouver BC, I'd love to hear from you. The amazing Retrix, the lovely host of this blog, has announced that they are shutting down, so it looks like I'll be migrating soon.

My needs aren't too elaborate: I'll be setting up a couple of database driven websites, probably Drupal-based, hosted email is a plus.

Oh yeah... must be Ubuntu inside ;)

Thanks!
http://randall.executiv.es/contactme

on November 08, 2017 07:16 PM

November 07, 2017

I have to be honest with you folks: the last few weeks have been crazy with travel. Since the beginning of September I have been to Hong Kong, Orlando, Charleston, Prague, Los Angeles, Las Vegas, and I am about to head to New York. In this time I have keynoted two conferences, given two additional talks, run training and workshops for two clients, and much more.

As such, I have been a little remiss in sharing some content that might be useful. Today I have two videos to share.

Interview: The Business of Building Communities

This was a fun one. I was interviewed by Swapnil Bhartiya in which we got into how to build communities, the work I do as a consultant, my previous work at Canonical/XPRIZE/GitHub, bringing open source into organizations, and more.

Check it out here:

Can’t see the video? Click here to watch it.

Interview: theCUBE

I was interviewed by the fine folks at theCUBE where we touched on bringing open source into companies, how the open source and business world is changing, the formalization of the software development lifecycle, and more.

Can’t see the video? Click here to watch it.

As usual, feedback welcome in the comments!

The post Video: Organizational Community Strategy, Innersource, Consulting, and More appeared first on Jono Bacon.

on November 07, 2017 04:00 PM

Sysdig (.org) is an open-source container troubleshooting tool and it works by capturing system calls and events directly from the Linux kernel.

When you install Sysdig, it adds a new kernel module that it uses to collect all those system calls and events. That is, compared to other tools like strace, lsof and htop, it gets the data directly from the kernel and not from /proc. In terms of functionality, it is a single tool that can do what strace + tcpdump + htop + iftop + lsof + wireshark do together.

An added benefit of Sysdig is that it understands Linux Containers (since 2015). Therefore, it is quite useful when we want to figure out what is going on in our LXD containers.

Once we get used to Sysdig, we can venture to the companion tool called Falco, a tool for container security. Both are GPL v2 licensed, though you need to sign a CLA in order to contribute to the projects (hosted on Github).

Installing Sysdig

We are installing Sysdig on the host (where LXD is running) and not in any of our containers. In this way, we have full visibility of the whole system.

You can get the installation instructions at https://www.sysdig.org/install/ which essentially amount to a single curl | sh command:

curl -s https://s3.amazonaws.com/download.draios.com/stable/install-sysdig | sudo bash

Ubuntu and Debian already have a version of Sysdig; however, it is a bit older than the one you get from the command above. Currently, the version in the universe repository is 0.8, while from the command above you get 0.19.

Running Sysdig

If we run sysdig without any filters, it will show us all system calls and events. It’s a never-ending waterfall, and you need to Ctrl+C to stop it.
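For example, you can narrow the waterfall down with a filter; this one (a minimal illustration, using the same filter syntax as the container.name examples below) shows only the events of bash processes:

sudo sysdig proc.name=bash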

We can instead run the Curses version, called csysdig:
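That is (no options needed; it opens an interactive, top-like view):

sudo csysdig

csysdig also accepts the same filters as sysdig, so sudo csysdig container.name=guiapps (using this post’s container) would restrict the view to a single container.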

You can go through the examples at https://github.com/draios/sysdig/wiki/Sysdig-Examples. In this post, we look into the section that relates to containers.

Specifically,

View the list of containers running on the machine and their resource usage

sudo csysdig -vcontainers

There are six LXD containers, though we do not see their names. The LXD container names are shown under a column further to the right, therefore we would need to use the arrow keys to move to the right. Let’s select the container we are interested in, and press Enter.

We selected the container guiapps and here are the processes inside this container.

View the list of processes with container context

sudo csysdig -pc

This command shows all the container processes together.  That is, they have the container context.

View the CPU usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_cpu container.name=guiapps

Here we switch from csysdig to sysdig. An issue is that these two tools do not have the same parameters.

We have a container called guiapps and we asked sysdig to show the CPU usage of the processes, sorted. The container is idle, therefore all are 0%.

View the network bandwidth usage of the processes running inside the guiapps container

sudo sysdig -pc -c topprocs_net container.name=guiapps

Here it shows the current network traffic inside the container, sorted by traffic. If there is no traffic, the list is empty. Therefore, it is just good to give you an indication of what is happening.

 

View the top files in terms of I/O bytes inside the guiapps container

sudo sysdig -pc -c topfiles_bytes container.name=guiapps

View the top network connections inside the guiapps container

sudo sysdig -pc -c topconns container.name=guiapps

The output is similar to tcpdump, showing the IP addresses of source and destination.

Show all the interactive commands executed inside the guiapps container

sudo sysdig -pc -c spy_users container.name=guiapps

The output looks like this,

29756 17:10:57 root@guiapps) groups 
29756 17:10:57 root@guiapps) /bin/sh /usr/bin/lesspipe
29756 17:10:57 root@guiapps) basename /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dirname /usr/bin/lesspipe
29756 17:10:57 root@guiapps) dircolors -b
29756 17:11:07 root@guiapps) ls --color=auto
29756 17:11:24 root@guiapps) ping 8.8.8.8
29756 17:11:38 root@guiapps) ifconfig

The commands in italics are the commands that were recorded when running lxc exec. The rest are the commands I typed in the container (ls, ping 8.8.8.8, ifconfig and finally exit which does not get shown). Commands that come from the shell (like pwd, exit) are not visible since they do not execv some command.

Installing Falco

We have already installed Sysdig using the curl | sh method that added their repository. Therefore, to install Falco, we just need to

sudo apt-get install falco

Falco needs its own kernel module,

$ sudo dkms status
falco, 0.8.1, 4.10.0-38-generic, x86_64: installed
sysdig, 0.19.1, 4.10.0-38-generic, x86_64: installed

Upon installation, it adds some default rules in /etc/falco. These rules are about application behaviour that Falco will be inspecting and reporting to us. Therefore, Falco is ready to go. If we need something specific, we would need to add our rules in /etc/falco/falco_rules.local.yaml
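As an illustration of the rules format, here is a hypothetical rule for this post’s container (the rule name, message and priority are made up for this sketch; the fields follow the format of the shipped rules). It could go in /etc/falco/falco_rules.local.yaml:

- rule: ping_in_guiapps
  desc: Detect ping being run inside the guiapps container
  condition: container.name = guiapps and proc.name = ping
  output: "ping run in guiapps (user=%user.name command=%proc.cmdline)"
  priority: NOTICE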

Running Falco

Let’s run falco.

$ sudo falco container.name = guiapps
Tue Nov 7 17:41:41 2017: Falco initialized with configuration file /etc/falco/falco.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.yaml
Tue Nov 7 17:41:41 2017: Parsed rules from file /etc/falco/falco_rules.local.yaml
17:41:52.933145895: Notice Unexpected setuid call by non-sudo, non-root program (user=nobody parent=<NA> command=lxd forkexec guiapps /var/lib/lxd/containers /var/log/lxd/guiapps/lxc.conf -- env USER=root HOME=/root TERM=xterm-256color PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin LANG=C.UTF-8 -- cmd sudo --user ubuntu --login uid=root)
17:41:52.938956110: Notice A shell was spawned in a container with an attached terminal (user=user guiapps (id=guiapps) shell=bash parent=sudo cmdline=bash terminal=34842)
17:41:58.583366422: Notice A shell was spawned in a container with an attached terminal (user=root guiapps (id=guiapps) shell=bash parent=su cmdline=bash terminal=34842)

We specified that we want to focus only on the container with the name guiapps.

The lines in italics are the startup lines. The two lines on 17:41:52 are the result of the lxc exec guiapps -- sudo --user ubuntu --login. The next line is the result of sudo su.

Conclusion

Both Sysdig (troubleshooting) and Falco (monitoring) are useful tools that are aware of containers. Their default use is quite handy to troubleshoot and monitor containers. These tools have many more features, including the ability to add scripts to them (called chisels) to do even more advanced stuff.

For more resources, check their respective home page.

on November 07, 2017 03:52 PM

Familiarising with MAAS

Canonical Design Team

Recently a number of new designers and developers joined our team – welcome Caleb, Lyubomir, Michael, Thomas and Shivam!

As part of the introduction to Canonical and the Design team, each member of the team gives an overview of the products we design for. As the Lead UX designer for MAAS I did so by explaining the functionality of MAAS at a high level, which was inevitably followed by a lot of questions asking for more details. In order to provide a complete MAAS introduction, I put together a small list of resources that would help the newcomers, but also the veterans in our team, dig deeper into this metal world.

I am now sharing this list with you and hope that it will help you get started with MAAS.

Happy reading!

Introduction

There are various sources where you can get information about MAAS and the concepts it involves: the Ubuntu websites, Wikipedia, YouTube and blogs are all places where you can find bits and pieces that will help you understand more about MAAS.

Then there are also a lot of people working on MAAS; myself and the other designers and of course the MAAS engineering team would be happy to help with any questions you might have. You can reach MAAS-ters on the public IRC channel (Freenode #maas) and the Ask Ubuntu website.

You can also follow the development of MAAS and contact the team by registering to the MAAS mailing list at https://lists.ubuntu.com/mailman/listinfo/maas-devel (maas-devel@lists.ubuntu.com).

Here is a list that I think might be a good start to understand what MAAS does, its features and concepts, as well as some of the functionality. It is sorted from high-level to low-level information, and it allows you to go as deep as you want.

Chapter I – MAAS and server provisioning

If you are a server provisioning novice, you can start with some sources for understanding what server provisioning is, which is the main thing that MAAS is used for. If you already know about server provisioning you can move to the next section that explains what MAAS is.

  • A recent Webinar takes you through the steps of how to get cloud-ready servers in minutes with MAAS. By Dariush Marsh-Mossadeghi (Consulting Architect) and Chris Wilder (Cloud Content).
  • Canonical’s e-book on What you need to know about server provisioning is also quite insightful. It contains a lot of content from the maas.io homepage and the How it works page and some additional information.
  • Take a look at the tour page to get an overview of the functionality and pick up terms that you can search further to find out what they mean.

And here are a couple of videos explaining what MAAS is:

Metal As A Service – the model (you can jump to 2:13 where the model starts getting explained)

https://www.youtube.com/watch?v=I3nfiRKzNSw

MAAS

If you have more questions this factsheet answers the top 10 questions about MAAS.

Chapter II – Technical information that MAAS involves

Now, you can stop if you had enough or you can go deeper into the technical details.

Here are some videos and wiki entries explaining concepts and functionality that MAAS includes.

Servers & hardware

  • PXE booting

https://en.wikipedia.org/wiki/Preboot_Execution_Environment

  • Network Interfaces

https://en.wikipedia.org/wiki/Network_interface

  • BMC & IPMI

http://searchnetworking.techtarget.com/definition/baseboard-management-controller

https://en.wikipedia.org/wiki/Intelligent_Platform_Management_Interface

  • KVM hypervisor

https://en.wikipedia.org/wiki/Kernel-based_Virtual_Machine

Services

  • DNS  (video)

Intro to DNS

  • DHCP  (video)

https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol

Intro to DHCP

Networking

  • Introduction to networking (video – basic intro)

https://www.youtube.com/watch?v=rL8RSFQG8do&index=1&list=PLF360ED1082F6F2A5

  • VLANs and Subnet (video)

https://www.youtube.com/watch?v=twYeSRvdEtc

  • The OSI model (video – explains difference between layer 2 and 3 networking)

https://www.youtube.com/watch?v=HEEnLZV2wGI&list=PLF360ED1082F6F2A5&index=5

  • TCP IP / Subnet masking (video – explains IPv4)

https://www.youtube.com/watch?v=EkNq4TrHP_U

  • IPv4 vs IPV6 (video)

https://www.youtube.com/watch?v=aor29pGhlFE

  • Fabric

https://en.wikipedia.org/wiki/Switched_fabric

Last but not least, the MAAS docs would also be a useful source. You can search terms and functionality specific to MAAS:

https://docs.ubuntu.com/maas/2.2/en/

Now that you are more familiar with MAAS’s basics, how about seeing it in action? MAAS is free and open source and you can install it in 6 simple steps. The maas.io install page will guide you through them or if you prefer this video shows the installation process. Happy provisioning!

on November 07, 2017 02:08 PM

I’m considering a proposal to have 16.04 LTS be the last release of Ubuntu with 32 bit images to run on 32 bit only machines (on x86 aka Intel/AMD only – this has no bearing on ARM). You would still be able to run 32 bit applications on 64 bit Ubuntu.

Please answer my survey on how this would affect you or your organization.

Please only answer if you are running 32-bit (x86) Ubuntu! Thanks!

If you can’t see the form below click here.

Loading…

Comments

 

  • dragonbite says:

    Something like this is inevitable, but 32 bit is still helpful to have available.

    Myself, I have a number of old, single-core desktops running Ubuntu Server that cannot handle 64 bits, but are able to work as servers just fine.

    I also have a 64bit capable netbook I run 32bit Lubuntu on because of resources.

    Maybe make it so that a minimal disk or server disk is available 32 bit for a little bit longer, after it is dropped for desktop-orientated systems. Those that need a desktop and 32bit can install minimum and then add whatever is needed for the circumstance.

    1. Rob van der Linde says:

Do I have to fill in the form for each machine I run? It looks like it, as there is no field for how many machines.

      I run 6 PC’s at home, all on 64 bit Kubuntu. My work machine is also 64 bit Kubuntu, and my 2 VM’s are also 64 bit. In fact, every VM I use at work is 64 bit as well.

      That’s 9 machines on 64 bit and none on 32, I haven’t run 32 bit for years now.

      Then I also have a couple of Beagleboards also running Trusty, but that’s ARM.

      1. bob says:

        Did you miss this: “Please only answer if you are running 32-bit (x86) Ubuntu!” ???

        1. Bryan says:

          To be fair, I added it because I was getting a lot of 64 bit users responding. Still it’s in the title…

  • vasilisc says:

My organization uses Ubuntu Server LTS 32-bit in a virtual environment.

    1. Bryan says:

      Why? and what version of Ubuntu?

      1. vasilisc says:

1) A virtual machine with a 32-bit Ubuntu Server LTS guest OS consumes less RAM, right?
        2) Ubuntu Server 14.04 LTS

        1. Bryan says:

It is less RAM consumption, but usually only in the 64 MB to 128 MB range. Obviously it can be worse depending on the app you are running.

          Generally the performance trade off makes it not worth it.

  • Ali Linx says:

Hi, it is a good idea to run such surveys and it is a good point to bring to the table. Old machines should not go to the trash unless they are 100% dead. I realized that even Lubuntu or other distributions won’t be helpful in the coming years and for that, I have created this project: http://torios.org/ which is still Alpha at the moment but we are moving forward with solid steps and Beta is just around the corner. Now, ToriOS is based on Ubuntu 12.04 LTS, which is well-known to be better on old hardware than 14.04 LTS, and for that reason I insisted on basing ToriOS on 12.04 and the team agreed. Now, what could happen when 16.04 is out, and just in case Canonical decided to end the 32bit support by that cycle? Or perhaps even before that? 14.04 LTS could be the last one? Who knows? Not to worry, ToriOS will make sure that old machines stay in service as long as possible. Only time can tell and prove that 🙂

    Thanks!

  • Walter Lapchynski says:

    My workplace uses 32 bit machines almost exclusively, using Ubuntu Server and FreeBSD for servers and Kubuntu for desktop. We have a mission that includes a commitment to being good environmental stewards. Our machines come from the local electronics recycling store. I admit we are a strange case, but why should we abandon our commitment to older machines when we officially support something committed to them (Lubuntu)?

    That being said, reading Mark Shuttleworth’s wiki page recently helped me understand that Ubuntu is not the distro for every case. Limited scope is necessary to achieve intended goals.

    Still, I love Ubuntu and its community but don’t want to contribute to landfills by buying new stuff just to keep using it.

    On the other hand one of our staff has been looking for the excuse to go FreeBSD as a desktop (note we are a manufacturer of a mechanical product i.e. our staff is largely not computer savvy). Please don’t make those of us at the company that provide user support suffer that curse!

    I guess the possibility exists to do community supported releases like we do for ppc, no?

    Finally, do I really need to fill this out for every machine? There are 30-40 of them.

    1. Bryan says:

      >I guess the possibility exists to do community supported releases like we do for ppc, no?

      Members of the Lubuntu community have already expressed interest in making it community supported for Lubuntu. So the possibility definitely exists. Please do fill out the form as though that’s not going to happen though…

      >Finally, do I really need to fill this out for every machine? There are 30-40 of them.
      Generally no, just one entry and say there are 35 of them. If there are substantial differences between them, breaking them up could be useful.

      One thing I have noticed is that there is a high rate of people thinking they have a 32-bit machine when they have one capable of 64 bit. The only way I can confirm that myself is by seeing the processor. Then knowing the RAM is useful too, because 64 bit on 1 GB is not fun.

      Of the last, let’s say 5, machines you got from the recycling how many were 64 bit capable?

      1. Walter Lapchynski says:

        > One thing I have noticed is that there is a high rate of people thinking they have a 32-bit machine when they have one capable of 64 bit. The only way I can confirm that myself is by seeing the processor. Then knowing the RAM is useful too, because 64 bit on 1 GB is not fun. Of the last, let’s say 5, machines you got from the recycling how many were 64 bit capable?

        I understand your plight. I just wish there was a way to include info for multiple machines at a time. We actually have several machines that are of the same model and everything.

        Most of the machines we have are HP dc7800 SFF (SKU#GC760AV), using an Intel Core 2 Duo T5470 (type 0, family 6, model 15, stepping 13). So it is actually 64 bit capable, since /proc/cpuinfo does include the “lm” flag.

        We made the decision to go with 32 bit since we didn’t know what we’d end up with. I think that this may not be so relevant any more. That being said, how would we solve this? We would have to re-install every machine? You can’t “upgrade” to 64 bit can you?

        1. Bryan says:

          We made the decision to go with 32 bit since we didn’t know what we’d end up with. I think that this may not be so relevant any more. That being said, how would we solve this? We would have to re-install every machine? You can’t “upgrade” to 64 bit can you?

          You can reinstall in place (but backup first!) and choose the “Upgrade Ubuntu” option in the installer. I’ve moved machines from 32 to 64 bit using it, but it’s not heavily tested. One of the big things I’ve gotten from this survey is to make a supported 32->64 upgrade path (even if it can’t be a dist-upgrade path).

  • javier says:

    I am using Ubuntu 10.04.4 LTS 32 bit installed with Wubi! Main OS: Vista Business 32bit

    1. Bryan says:

      As a server? 10.04 for the desktop hasn’t been supported for 1.5+ years now.

  • Walter Lapchynski says:

    One other thought: are you dropping 32-bit support across all chips or is this only affecting Intel chips? Since Lubuntu is supporting PPC (primarily 32-bit), it would be a huge bummer if this affected PPC, too.

    1. Bryan says:

      This survey/proposal isn’t touching on PPC.

  • BGBgus says:

    But, what would happen to the rest of the Ubuntu flavours? Distros like Xubuntu depend on “small” computers and give us a way to keep using our old but still working PCs.

    I admit, I would never use Unity on my Pentium, but I still like to keep my repositories updated. Will Xubuntu just disappear? I could look for another GNU distribution, but it’s still a loss.

  • oldcomputerfan says:

    Hello,
    I would find it a shame if the 32-bit versions were dropped.
    Then many Ubuntu-based distributions for older computers would no longer exist.
    Regards

  • Aaron says:

    Hi…

    I run Lubuntu 14.04 32 bit on both my laptop and desktop systems. For myself personally, although my laptop is a 64 bit system, I’ve purposely chosen the 32 bit version of Lubuntu because I’ve found that a couple Linux games (that are available in the repositories) don’t crash with segmentation fault errors as they did when I was running Ubuntu 10.04 64 bit. I’m inclined to think there are still bugs to iron out with respect to running 32 bit programs on 64 bit Linux operating systems.

    As part of my work as a computer repair technician, I’ve also installed 32 bit versions of Linux for a couple of clients with older systems where (installing) Windows was not an option, including for financial reasons. This is where Linux fills an important role. The fact that there are 32 bit distributions available, free of charge, for older systems helps keep perfectly usable computers in the hands of those who cannot afford (or easily afford) to purchase a new(er) computer. And there are many people out there who are poor and in tight positions financially.

    For this reason especially, I request (for all distributions within the Ubuntu family) that this decision be delayed until enough 32 bit computers have been recycled/disposed of to where those that are left are in the extreme minority.

    Thank you for your time and consideration. 🙂

    1. Bryan says:

      That is the goal… determining how to judge when that minority is small enough is the hard part.

      Please do test those games on 16.04 64-bit when you get a chance – feel free to report bugs here.

        1. Bryan says:

          Yea, we generally don’t have enough resources to review every bug people report, sorry. If you can reproduce on Ubuntu 16.04, please report a new bug.

          Feel free to ping me here or on LP, if you find it still occurs.

          1. Aaron says:

            Thanks! 🙂

          2. Aaron says:

            Hi Bryan…

            If I have an occasion to try 16.04, I might do that. 🙂

  • witchyseattle says:

    I myself am on a 32 bit machine and it would be devastating if I could no longer download Ubuntu. Isn’t the whole point inclusion? I only have 2 gigs of RAM and cannot use 64 bit on this laptop. So much for Ubuntu; they should change their name to “sell out”, because they are leaving a whole bunch of people behind just because they are not running 64 bit, and that is not fair!

    1. Bryan says:

      First of all, no decision has been made – in fact 32 bit is fully supported for 16.04 LTS. Secondly, Ubuntu is available for free and it costs money for each architecture supported.

      Provide cat /proc/cpuinfo and we can double check if you definitely can’t run 64 bit.
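      One quick way to do that check yourself (a minimal sketch, based on the “lm” CPU flag mentioned in the comments above):

      grep -qw lm /proc/cpuinfo && echo "this CPU is 64-bit capable"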

  • Timothy D Lynch says:

    I still use 32 bit on about 5 units and would most likely switch to another distro with 32 bit support to keep things uniform. This is the same reason I waited so long to change from 10.04 Ubuntu, and I will most likely go from 12.04 and do 2 upgrades to 16.04 to use Mate. Old hardware. I guess I’m cheap, and if the hardware is still working I keep using it. Getting it on the cheap is why I quit using Windows, even though at one time I owned a business supporting it.

  • Weasel says:

    Well I’m late to the party I suppose. I find it absurd to drop an architecture like x86 (32) when you support PPC, but that doesn’t matter.

    For most VMs, 32-bit is much better as it uses less resources. Not just RAM, but disk as well. Especially if you want to run 32-bit apps within the VM, which would require multilib, making the disk space difference that much more than on a pure 32-bit VM. Anyway, running multiple “slim” VMs in parallel tends to make it that much more obvious.

    Look, if you don’t want to support 32-bit as in “test it on every ‘ancient’ machine” then that’s still not so bad as dropping it. The problem isn’t only lack of technical support here, but as you see, LACK OF DOWNLOAD. You can drop “technical support” without dropping the download ISO file for those interested, like to run it in a VM — why should we care of real hardware anyway? Still, we need an iso. To me it just sounds like an excuse to be honest.

    But you take it away from everyone by dropping the download image. That’s why it needs to be preserved as an OPTION, even if not on main download page. Some things need to be “preserved”. Either way bandwidth wouldn’t be a problem since if it’s rarely downloaded then it doesn’t matter.

    “Popularity” isn’t the issue. It’s just having it *available* for anyone wishing to use it (e.g. for a VM).

    As a sidenote, people on the internet keep throwing around the word “use VM for old stuff” but how to use VM when we’re not provided the OS anymore? wtf.

 

on November 07, 2017 04:28 AM

Packaging Notes

Bryan Quigley

I’ve done easy fixes (debdiffs) in Ubuntu and find I need to look up exactly how to do a debdiff every time. Last time I had to look at 5 different docs to get all the commands I needed. The bug I based this on was a Debian-only change (init script); I plan to update this next time I have an actual source change.

  1. Start a new VM / cloud instance
  2. sudo apt-get install packaging-dev
  3. apt-get source <package_name> ; sudo apt-get build-dep <package_name>
  4. cd into-directory-created
  5. Make the change (if it’s only a debian/ change)
  6. dch -i   (document it)
  7. debuild -S -us -uc   (build the source package)
  8. debdiff rrdtool_1.4.7-1.dsc rrdtool_1.4.7-1ubuntu1.dsc > rrdtool_1.4.7-1ubuntu1.debdiff   (make the debdiff – note to me, change the name later)
  9. cd into-directory; DEB_BUILD_OPTIONS='nostrip noopt debug' fakeroot debian/rules binary   (build the binary package)
  10. Test it

Docs used:

  1. http://packaging.ubuntu.com/html/traditional-packaging.html
  2. http://packaging.ubuntu.com/html/fixing-a-bug-example.html
  3. http://cheesehead-techblog.blogspot.com/2008/10/creating-patch-to-fix-ubuntu-bug.html
  4. https://wiki.debian.org/IntroDebianPackaging
  5. https://wiki.debian.org/BuildingTutorial

Comments:

  • toobuntu says:

My understanding is the current best practice is to use mk-build-deps, provided by the devscripts package, to install build dependencies. See https://wiki.debian.org/BuildingAPackage#Get_the_build_dependencies

With apt-get build-dep, the installed packages are marked as manually installed and so won’t be offered for autoremoval. mk-build-deps creates a dummy metapackage with the build deps as dependencies. When that generated package is later removed, so are the build deps.

Or, add “APT::Get::Build-Dep-Automatic true;” to your apt.conf to mark the build deps as automatically installed so they will be removed with the next “apt-get autoremove”
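As a minimal sketch of that suggestion, using the rrdtool package from the steps above (mk-build-deps comes with devscripts and uses equivs under the hood to build the dummy metapackage):

sudo apt-get install devscripts equivs
mk-build-deps --install --root-cmd sudo --remove rrdtool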

on November 07, 2017 04:09 AM

Recently I have been having trouble sleeping and looked into ways to help myself drift off to the land of nod a bit more easily.

One technique, which I have on my mobile phone, is to reduce blue colours with a shift to more subtle warmer red colours. This is called “redshift” and I decided to see if the feature is available in KDE Plasma. It turns out that there is a Plasmoid made for this task called “Redshift Control” and it is available via the Ubuntu archive.

clivejo@kubuntu.org:~ $ sudo apt install plasma-applet-redshift-control

Once installed you can “Add widgets” by right-clicking on your Desktop or Panel (make sure they are unlocked)

    

Search for “red” and the Redshift Control plasmoid should appear.  You can then drag this to the panel or somewhere on the desktop as you want.  For this example I have dragged it onto the bottom panel.

You can now toggle it on and off by clicking the bulb icon, and by right-clicking you can change the options.

 

Hovering over the bulb icon and scrolling your mouse wheel will reduce or increase the level of “red shift”, the following is the result while at the maximum on my system and certainly creates a lovely red glow in the room.

PS: Just remember to close your curtains while using this intensity level, otherwise you could have all kinds of nightly callers knocking on your door!

on November 07, 2017 12:51 AM

November 06, 2017

Update #1: Added working screenshot and some instructions.

Update #2: Added instructions on how to get the app to autostart through systemd.

Installing a Node.js app on your desktop Linux computer is a messy affair, since you need to add a new repository and install lots of additional packages.

The alternative to messing up your desktop Linux is to create a new LXD (LexDee) container and install into that. Once you are done with the app, you can simply delete the container and that’s it. No trace whatsoever, and you keep your clean Linux installation.

First, see how to setup LXD on your Ubuntu desktop. There are extra resources if you want to install LXD on other distributions.

Second, for this post we are installing chalktalk, a Node.js app that turns your browser into an interactive blackboard. Nothing particular about Chalktalk, it just appeared on HN and it looks interesting.

Here is what we are going to see today,

  1. Create a LXD container
  2. Install Node.js in the LXD container
  3. Install Chalktalk
  4. Testing Chalktalk

Creating a LXD container

Let’s create a new LXD container with Ubuntu 16.04 (Xenial, therefore ubuntu:x), called mynodejs. Feel free to use something more descriptive, like chalktalk.

$ lxc launch ubuntu:x mynodejs
Creating mynodejs
Starting mynodejs
$ lxc list -c ns4
+----------+---------+----------------------+
| NAME     | STATE   | IPV4                 |
+----------+---------+----------------------+
| mynodejs | RUNNING | 10.52.252.246 (eth0) |
+----------+---------+----------------------+

Note down the IP address of the container. We need it when we test Chalktalk at the end of this howto.

Then, we get  a shell into the LXD container.

$ lxc exec mynodejs -- sudo --login --user ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@mynodejs:~$

We executed, in the mynodejs container, the command sudo --login --user ubuntu. It gives us a login shell for the non-root default user ubuntu that is always found in Ubuntu container images.

Installing Node.js in the LXD container

Here are the instructions to install Node.js 8 on Ubuntu.

ubuntu@mynodejs:~$ curl -sL https://deb.nodesource.com/setup_8.x | sudo -E bash -
## Installing the NodeSource Node.js v8.x repo...
...
## Run `apt-get install nodejs` (as root) to install Node.js v8.x and npm

ubuntu@mynodejs:~$ sudo apt-get install -y nodejs
...
Setting up python (2.7.11-1) ...
Setting up nodejs (8.9.0-1nodesource1) ...
ubuntu@mynodejs:~$ sudo apt-get install -y build-essential
...
ubuntu@mynodejs:~$

The curl | sh command makes you want to install in a container rather than on your desktop. Just saying. We also install the build-essential meta-package because it is needed when you install packages on top of Node.js.

Installing Chalktalk

We follow the installation instructions for Chalktalk to clone the repository. We use depth=1 to get a shallow copy (18MB) instead of the full repository (100MB).

ubuntu@mynodejs:~$ git clone https://github.com/kenperlin/chalktalk.git --depth=1
Cloning into 'chalktalk'...
remote: Counting objects: 195, done.
remote: Compressing objects: 100% (190/190), done.
remote: Total 195 (delta 5), reused 51 (delta 2), pack-reused 0
Receiving objects: 100% (195/195), 8.46 MiB | 8.34 MiB/s, done.
Resolving deltas: 100% (5/5), done.
Checking connectivity... done.
ubuntu@mynodejs:~$ cd chalktalk/server/
ubuntu@mynodejs:~/chalktalk/server$ npm install
> bufferutil@1.2.1 install /home/ubuntu/chalktalk/server/node_modules/bufferutil
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'
 CXX(target) Release/obj.target/bufferutil/src/bufferutil.o
 SOLINK_MODULE(target) Release/obj.target/bufferutil.node
 COPY Release/bufferutil.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/bufferutil/build'

> utf-8-validate@1.2.2 install /home/ubuntu/chalktalk/server/node_modules/utf-8-validate
> node-gyp rebuild

make: Entering directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
 CXX(target) Release/obj.target/validation/src/validation.o
 SOLINK_MODULE(target) Release/obj.target/validation.node
 COPY Release/validation.node
make: Leaving directory '/home/ubuntu/chalktalk/server/node_modules/utf-8-validate/build'
npm WARN chalktalk@0.0.1 No description
npm WARN chalktalk@0.0.1 No repository field.
npm WARN chalktalk@0.0.1 No license field.

added 5 packages in 3.091s
ubuntu@mynodejs:~/chalktalk/server$ cd ..
ubuntu@mynodejs:~/chalktalk$

Trying out Chalktalk

Let’s run the app, using the following command (you need to be in the chalktalk directory):

ubuntu@mynodejs:~/chalktalk$ node server/main.js 
HTTP server listening on port 11235

Now, we are ready to try out Chalktalk! Use your favorite browser and visit http://10.52.252.246:11235 (replace with the IP address of your container).

You are presented with a blackboard! You use the mouse to sketch objects, then click on your sketch to get Chalktalk to try to identify it and create the actual responsive object.

It makes more sense if you watch the following video,

And this is how you can cleanly install Node.js into a LXD container. Once you are done testing, you can delete the container and it’s gone.

 

Update #1:

Here is an actual example. The pendulum responds to the mouse and we can nudge it.

The number can be incremented or decremented using the mouse; do an UP gesture to increment, and a DOWN gesture to decrement. You can also multiply/divide by 10 if you do a LEFT/RIGHT gesture.

Each type of object has a corresponding sketch in Chalktalk. In the source there is a directory with the sketches, with the ability to add new sketches.

 

Update #2:

Let’s see how to get this Node.js app to autostart when the LXD container is started. We are going to use systemd to control the autostart feature.

First, let’s create a script, called chalktalk-service.sh, that starts the Node.js app:

ubuntu@mynodejs:~$ pwd
/home/ubuntu
ubuntu@mynodejs:~$ cat chalktalk-service.sh 
#!/bin/sh
cd /home/ubuntu/chalktalk
/usr/bin/node /home/ubuntu/chalktalk/server/main.js
ubuntu@mynodejs:~$

We have created a script instead of running the command directly. The reason is that Chalktalk uses relative paths, and we need to chdir to the appropriate directory first so that it works. You may want to contact the author to attend to this.

Then, we create a service file for Chalktalk.

ubuntu@mynodejs:~$ cat /lib/systemd/system/chalktalk.service 
[Unit]
Description=Chalktalk - your live blackboard
Documentation=https://github.com/kenperlin/chalktalk/wiki
After=network.target
After=network-online.target

[Service]
Type=simple
User=ubuntu
Group=ubuntu
ExecStart=/home/ubuntu/chalktalk-service.sh
Restart=on-failure
StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=chalktalk

[Install]
WantedBy=multi-user.target

ubuntu@mynodejs:~$

We have configured the service so that Chalktalk autostarts once the network is up and online. The service runs as user ubuntu. Any output or error goes to syslog, using the chalktalk syslog identifier.

Let’s get Systemd to learn about this new service file.

ubuntu@mynodejs:~$ sudo systemctl daemon-reload
ubuntu@mynodejs:~$

Let’s enable the Chalktalk service, then start it.

ubuntu@mynodejs:~$ sudo systemctl enable chalktalk.service
ubuntu@mynodejs:~$ sudo systemctl start chalktalk.service
ubuntu@mynodejs:~$

Now we verify whether it works.
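One way to check (a minimal sketch; the first command should print active if all went well, and any HTTP client would do for the second) is to ask systemd for the service state and probe the port from inside the container:

ubuntu@mynodejs:~$ systemctl is-active chalktalk.service
ubuntu@mynodejs:~$ curl --head http://localhost:11235/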

Let’s restart the container and test whether the Chalktalk actually autostarted!

ubuntu@mynodejs:~$ logout

myusername@mycomputer /home/myusername:~$ lxc restart mynodejs
myusername@mycomputer /home/myusername:~$
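To confirm that Chalktalk autostarted, we can run the same check as before from the host through lxc exec (it should report active):

myusername@mycomputer /home/myusername:~$ lxc exec mynodejs -- systemctl is-active chalktalk.service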

That’s it!

on November 06, 2017 10:12 PM

2.3.0 RC1 Released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 RC1 has now been released and it is currently available in PPA and as a snap.
PPA Availability
For those running Ubuntu Xenial who would like to use RC1, please use the following PPA:
ppa:maas/next
Snap Availability
For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:
sudo snap install maas --devmode --beta
 

MAAS 2.3.0 (RC1)

Issues fixed in this release

For more information, visit: https://launchpad.net/maas/+milestone/2.3.0rc1

  • LP: #1727576    [2.3, HWTv2] When specific tests timesout there’s no log/output
  • LP: #1728300    [2.3, HWTv2] smartctl interval time checking is too short
  • LP: #1721887    [2.3, HWTv2] No way to override a machine that Failed Testing
  • LP: #1728302    [2.3, HWTv2, UI] Overall health status is redundant
  • LP: #1721827    [2.3, HWTv2] Logging when and why a machine failed testing (due to missing heartbeats/locked/hanged) not available in maas.log
  • LP: #1722665    [2.3, HWTv2] MAAS stores a limited amount of test results
  • LP: #1718779    [2.3] 00-maas-06-get-fruid-api-data fails to run on controller
  • LP: #1729857    [2.3, UI] Whitespace after checkbox on node listing page
  • LP: #1696122    [2.2] Failed to get virsh pod storage: cryptic message if no pools are defined
  • LP: #1716328    [2.2] VM creation with pod accepts the same hostname and push out the original VM
  • LP: #1718044    [2.2] Failed to process node status messages – twisted.internet.defer.QueueOverflow
  • LP: #1723944    [2.x, UI] Node auto-assigned address is not always shown while in rescue mode
  • LP: #1718776    [UI] Tooltips missing from the machines listing page
  • LP: #1724402    no output for failing test
  • LP: #1724627    00-maas-06-get-fruid-api-data fails relentlessly, causes commissioning to fail
  • LP: #1727962    Intermittent failure: TestDeviceHandler.test_list_num_queries_is_the_expected_number
  • LP: #1727360    Make partition size field optional in the API (CLI)
  • LP: #1418044    Avoid picking the wrong IP for MAAS_URL and DEFAULT_MAAS_URL
  • LP: #1729902    When commissioning don’t show message that user has overridden testing
on November 06, 2017 07:31 PM

November 05, 2017

Embracing Modern CMake

Stephen Kelly

I spoke at the ACCU conference in April 2017 on the topic of Embracing Modern CMake. The talk was very well attended and received, but was unfortunately not recorded at the event. In September I gave the talk again at the Dublin C++ User Group, so that it could be recorded for the internet.

The slides are available here. The intention of the talk was to present a ‘gathered opinion’ about what Modern CMake is and how it should be written. I got a lot of input from CMake users on reddit which informed some of the content of the talk.

Much of the information about how to write Modern CMake is available in the CMake documentation, and there are many presentations these days advocating the use of modern patterns and commands, discouraging use of older commands. Two other talks from this year that I’m aware of and which are popular are:

It’s very pleasing to see so many well-received and informative talks about something that I worked so hard on designing (together with Brad King) and implementing so many years ago.

One of the points I tried to labor a bit in my talk was just how old ‘Modern’ CMake is. I was recently asked in a private email about the origin and definition of the term, so I’ll try to reproduce that information here.

I coined the term “Modern CMake” while preparing for Meeting C++ 2013, where I presented on the topic and the developments in CMake in the preceding years. Unfortunately (this happens to me a lot with CMake), the talk was not recorded, but I wrote a blog post with the slides and content. The slides are no longer on the KDAB website, but can be found here. Then already in 2013, the simple example with Qt shows the essence of Modern CMake:


find_package(Qt5Widgets 5.2 REQUIRED)

add_executable(myapp main.cpp)
target_link_libraries(myapp Qt5::Widgets)

Indeed, the first terse attempt at a definition of “Modern CMake”, and the first public appearance of the term with its current meaning, was when I referred to it as approximately “CMake with usage requirements”. That’s when the term gained a capitalized ‘M’ and started to gain traction.
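In today’s terms, a minimal sketch of what “usage requirements” means looks like this (the mylib target and its properties are invented for illustration, not taken from the talk): a library declares what its consumers need, and target_link_libraries propagates it.

add_library(mylib STATIC mylib.cpp)
# PUBLIC requirements are used when building mylib itself
# and are propagated to every target that links against it
target_include_directories(mylib PUBLIC ${CMAKE_CURRENT_SOURCE_DIR}/include)
target_compile_definitions(mylib PUBLIC MYLIB_WITH_FOO)

add_executable(myapp main.cpp)
# myapp inherits mylib's PUBLIC include directories and definitions
target_link_libraries(myapp mylib)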

The first usage I found of “Modern CMake” in private correspondence was March 13 2012 in an email exchange with Alex Neundorf about presenting together on the topic at a KDE conference:

Hi Alex

Are you planning on going to Tallinn for Akademy this year? I was thinking about submitting a talk along the lines of Qt5, KF5, CMake (possibly along the lines of the discussion of ‘modern CMake’ we had before with Clinton, and what KDE CMake files could look like as a result).

I thought maybe we should coordinate so either we don’t submit overlapping proposals, or we can submit a joint talk.

Thanks,

Steve.

The “discussion with Clinton” was probably this thread and the corresponding thread on the cmake mailing list where I started to become involved in what would become Modern CMake over the following years.

The talk was unfortunately not accepted to the conference, but here’s the submission:

Speakers: Stephen Kelly, Alexander Neundorf
Title: CMake in 2012 – Modernizing CMake usage in Qt5 and KDE Frameworks 5
Duration: 45 minutes

KDE Frameworks 5 (KF5) will mark the start of a new chapter in the history of KDE and of the KDE platform. Starting from a desire to make our developments more easy to use by 3rd parties and ‘Qt-only’ developers, the effort to create KF5 is partly one of embracing and extending upstreams to satisfy the needs of the KDE Platform, to enable a broadening of the user base of our technology.

As it is one of our most important upstreams, and as the tool we use to build our software, KDE relies on CMake to provide a high standard of quality and features. Throughout KDE 4 times, KDE has added extensions to CMake which we consider useful to all developers using Qt and C++. To the extent possible, we are adding those features upstream to CMake. Together with those features, we are providing feedback from 6 years of experience with CMake to ensure it continues to deliver an even more awesome build experience for at least the next 6 years. Qt5 and KF5 will work together with CMake in ways that were not possible in KDE 4 times.

The presentation will discuss the various aspects of the KDE buildsystem planned for KF5, both hidden and visible to the developer. These aspects will include the CMake automoc feature, the role of CMake configuration files, and how a target orientated and consistency driven approach could change how CMake will be used in the future.

There is a lot to recognize there in what has since come to pass and become common in Modern CMake usage, in particular the “target orientated and consistency driven approach” which is the core characteristic of Modern CMake.


on November 05, 2017 05:40 PM
We’re kicking off season 2 of Ubuntu y otras hierbas with a very long episode!

Francisco Molinero, Francisco Javier Teruelo, Fernando Lanero and Marcos Costales chat about the new Ubuntu 17.10 release and about 100% free distros.

Ubuntu y otras hierbas S02E01

The podcast is available to listen to on:

on November 05, 2017 02:57 PM

November 04, 2017

This is a security and bugfix update; a minor feature has also been added to enhance usability. It will be integrated into Lubuntu very soon. Changes since the previous release, 0.3.0 (see the git log for details): fix CVE-2016-10369, a denial-of-service vulnerability; fix a bug, introduced in 0.3.0, that prevented changing the tab name. The keyboard shortcut can be changed […]
on November 04, 2017 01:02 PM

November 03, 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h, but I also had 1.5h left over from September. During this time, I finally finished my work on exiv2: I completed the triage of all CVEs, backported 3 patches to the version in wheezy, and released DLA-1147-1.

I also reviewed some of the oldest entries in dla-needed. I reclassified a bunch of CVEs on zoneminder and released DLA-1145-1 for the most problematic issue in that package. Many other packages had their CVEs reclassified as not worth an update: xbmc, check-mk, rbenv, phamm, yaml-cpp. For mosquitto, I released DLA-1146-1.

I filed #879001 (security issue) and #879002 (removal suggestion) on libpam4j. This library is no longer used by any other package in Debian, so it could be removed instead of costing us time in support.

Misc Debian work

After multiple months of wait, I was allowed to upload my schroot stable update (#864297).

After an ack from the d-i release manager, I pushed my pkgsel changes and uploaded version 0.46 of the package: this brings unattended-upgrades support to the installer. It’s now installed by default.

I nudged the upstream developer of gnome-shell-timer to get a new release for GNOME 3.26 compatibility and packaged it.

Finally, I was pleased to merge multiple patches from Ville Skyttä on Distro Tracker (the software powering tracker.debian.org). It looks like Ville will continue to contribute on a regular basis, yay. \o/ He already helped me to fix the remaining blockers for the switch to Python 3.

Not really Debian related, but I also filed a bug against Tryton that I discovered after upgrading to the latest version.

Thanks

See you next month for a new summary of my activities.


on November 03, 2017 10:53 AM