October 21, 2017

Every new Ubuntu cycle brings many changes, and the arrival of Ubuntu 17.10, the “Artful Aardvark” release, brings more changes than usual. The default desktop has changed to GNOME Shell, with some very thoughtful changes by the desktop team to make it more familiar. And of course, the community wallpapers included with this exciting new release have changed as well!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. For Ubuntu 17.10, 50 images were submitted to the Ubuntu 17.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found.

Amid the busy development work being done each cycle, a small group of community members votes on their favorites. These anonymous contributors work hard to make the community and software around Ubuntu even better. But this time around I would like to thank two additional contributors who were asked to look at the photo pool and vote. Their ballots held the same weight as each of the others.

The first is Barton George, who works at Dell and leads Project Sputnik, which produces some really nice Ubuntu-based laptops aimed specifically at developers. It was kind of him to take some time to vote.

The second is Jane Silber, the outgoing CEO of Canonical, who for many years has helped guide Ubuntu and been very generous to the community with her time and energy. She was the first respondent when requests to vote were sent out—no surprise to anyone who saw her dedication firsthand—and I am happy that she was able to take a little time, during her last Ubuntu release as CEO, to recommend a few images. :)

The results are in, the new release is out, and I’m proud to announce the winning images that are waiting for you right now in Ubuntu 17.10:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) at the links above, or in your desktop wallpaper list in Ubuntu 17.10.

on October 21, 2017 09:12 PM

At the Web Engines Hackfest in A Coruña at the beginning of October 2017, I was working on adding some proof-of-concept code to Servo to render HTML5 videos with GStreamer. For the impatient, the results can be seen in this video here.

And the code can be found here and here.

Details

Servo is Mozilla‘s experimental browser engine written in Rust, optimized for high-performance, parallelized rendering. Parts of Servo are being merged into Firefox as part of Project Quantum, where they already provide a lot of performance and stability improvements.

During the hackfest I actually spent most of the time trying to wrap my head around the huge Servo codebase. It seems very well structured and designed, exactly what you would expect when a company with decades of experience writing browser engines starts such a project from scratch. Having also worked on WebKit in the past, I would say you can see the difference between a legacy codebase from the end of the 90s and something written in a modern language with modern software engineering practices.

As for the actual implementation of HTML5 video rendering via GStreamer, I started on top of the initial implementation by Philippe Normand. That one rendered the video in a separate window, though, and no longer worked with the latest version of Servo. I cleaned it up and made it work again (probably the best task for learning a new codebase), and then added support for actually rendering the video inside the web view.

This required quite a few additions on the Servo side, some of which are probably more hacks than anything else, but on the GStreamer side it was extremely simple. Servo currently lacks all the infrastructure for media rendering, while GStreamer has had more than a decade of polishing to make integration into other software as easy as possible.

All the GStreamer code was written with the GStreamer Rust bindings, containing not a single line of unsafe code.

As you can see from the above video, the results work quite well already. Media controls and anything fancier are not working yet, though. Also, rendering is currently done completely in software, and an RGBA frame is then uploaded via OpenGL to the GPU for rendering. However, hardware codecs can already be used just fine, and basically every media format out there is supported.

Future

While this all might sound great, Mozilla’s plans for media support in Servo are unfortunately different: they’re planning to use the C++ Firefox/Gecko media backend instead of GStreamer. It’s best to ask them for the reasons; I would probably not repeat them correctly.

Nonetheless, I’ll try to keep the changes updated with the latest Servo, and once they add more media support themselves, add the corresponding GStreamer implementations in my branch. It still provides value in two ways: it shows that GStreamer is very well capable of handling web use cases (as it already did in WebKit), and it may be a better choice for people trying to use Servo on embedded systems or with hardware codecs in general. But as I’ll have to work based on what they do, I’m not going to add anything fundamentally new at this point, since I would have to rewrite it around whatever they decide for their implementation anyway.

Once that part is in place, I would also have GStreamer render directly to an OpenGL texture, which would allow direct rendering with hardware codecs to the screen without the CPU having to touch all the raw video data.

But for now, it’s a matter of waiting until they catch up with the Firefox/Gecko media backend.

on October 21, 2017 11:25 AM

October 20, 2017

As someone who helps organizations to build communities with prospective members and customers, I am always on the lookout for effective methods and techniques for building authentic, valuable engagement. Sadly, as part of this, I often see cases where people get it wrong too. I want to share one such example here.

Bark is a website that provides a service where people can find service providers such as gardeners, plumbers etc. They seem to have around 20 million users and a good TrustPilot rating. I have never used Bark before so I have no idea how good their service is, but it seems their engagement approach is broken.

Now, to be clear, my goal here is highlight a problem and propose a solution. While Bark are the company in question here, they themselves are not the focus of this article. I am less interested in them and more interested in the topic of unsolicited and automated engagement, irrespective of who it is. This is why I didn’t use Bark in the title of this post and I have not optimized my SEO around their name: they are merely a current example (and hopefully they will fix this).

The Problem

Recently I started getting a bunch of emails from Bark in a fairly short timeframe:

Bark Emails

Each one looks fairly similar. Here is an example:

To think I would be good with a lawnmower is ludicrous.

Now, a few key notes here:

  • I have never signed up for Bark, never used the service, never had my badge scanned by someone from Bark at a conference, and never given permission for them to email me.
  • I am not a lawncare professional (quite the opposite, I am a shitty gardener).
  • I don’t live in Solihull. I live on the other side of the planet in California.

After I got the third email I reached out to Bark via Twitter to ask why they are contacting me. They asked me to continue the conversation in a Direct Message. I am not sharing those messages here because I don’t believe in posting private conversations.

In a nutshell, I was informed a colleague got my details online and must have got it wrong about my expertise in lawncare. I asked where they got my details from and they said from my contact page.

When I informed them that spamming is illegal in England, they assured me that the precautions they take in the emails they send (e.g. including an opt-out link) mean their emails operate within the law.

Issues and Solutions

As mentioned above, I have no animosity to Bark themselves, and I am sure they are good people trying to do good work (in fact, their people responding on social media were lovely), but there are flaws in this current model. Let’s cover these and some proposed solutions.

Ensure your emails are accurate

As shown above, the emails I got from Bark were simply broken in the two most critical areas: the service sought and the location. As someone who isn’t a gardener living in California, I am of literally no use in this correspondence.

This means that Bark is wasting my time (opening and reading the email) and wasting their resources (e.g. sending out the emails, trying to connect customers and providers etc).

One would assume this simply provides no value, but it is worse than that: it now cements Bark in my head as an incompetent organization for getting this so wrong.

Solution: always ensure your emails in any context are (a) accurate, (b) personal, and (c) provide value. There is an uncanny valley in emails: people can often spot if they are automated. If you do automate (which is totally fine in many scenarios), it should be personal and offer relevant value.

Don’t send valueless unsolicited email

Now, it is easy to be snippy about unsolicited email, but it is not bad in all scenarios. Importantly, people judge unsolicited email in three areas:

  1. Who sent it
  2. The value of the content
  3. The relevance of the content to the reader

Some unsolicited emails are helpful. For example, when someone out of the blue emails me about hiring me it is of value. As per above, (1) the person themselves sent it, (2) it relates to my area of expertise and business, (3) I can probably serve those needs.

In this case, (1) some random company sent this to me, not the person themselves looking for business, (2) the content as discussed above is entirely mismatched to me or my location, and (3) see #2.

Solution: firstly, you should never send emails to people who have not indicated in some form that they are happy to get them (e.g. having a badge scanned at a conference or agreeing to receive email). Secondly, always ensure the content is highly tuned to the reader: make it personal, make it demonstrate value specific to them, and include the integrity of the sender in it. Unsolicited email can be used for good, but as a general rule it is broadly abused, and then it is filed in the spam folder where it never gets looked at.

Don’t scrape contact details

In this instance, it seems my contact details were pulled from my public contact form. Now, to be clear, I put my email address there (it is not hidden).

The issue here is twofold.

Firstly, when I created my contact page, I intended for people to contact me directly with questions, queries, and potential collaborations. I don’t put it there to get unsolicited email.

Secondly, I am convinced that the reason why the original email to me was so inaccurate (lawn services and me living in Solihull) is that they tried to find corresponding information in some scraped way (or with a minimal level of human effort). Quite how they got to lawn services I don’t know. I did use to live near Solihull, at least…

Solution: don’t scrape contact details from the Internet. It is unwanted, it doesn’t work well, and it offers little value for everyone involved.

Don’t use the law as a defence for poor engagement

When I queried Bark via DM about where they got my contact details and informed them that spamming is illegal in the UK, their response was that they are operating within the laws of the UK. I believe this: I am not suggesting at all that Bark are breaking the law, I don’t think they are.

The problem with this response, though, is that it wriggles out of the problem rather than addressing it. Sure, they are working within the parameters of the law, but are they working within the parameters of how people like to be treated online?

I don’t think so.

Solution: don’t send unsolicited email, as outlined above.

Brand Harm

The core of my philosophy with how companies should build communities and engage with their customers/users is that it should be authentic. In a nutshell, treat your customers as you want to be treated yourself.

When we automate away the personality of a service, when we forfeit due diligence in the interests of growth, and when we deliver an experience that puts the other person in a position of being bombarded with content they didn’t ask for, it erodes brand confidence.

I think this is happening here. Just a few small examples:

These are just a few small examples; their Twitter feed has more.

Firstly, it is clear this behavior is irritating a lot of people. If I were running Bark, I would immediately change this course of action.

Secondly, the common responses from Bark to these frustrations are (1) we were just trying to help a client find someone, and (2) it was in error, it won’t happen again. For the former, this is a weak answer as their methods of finding a service provider are clearly broken, mismatched, and not adding value. It is one thing to get an uninteresting request on behalf of one of their clients, but another to get totally irrelevant unsolicited email. For the latter, this error appears to be happening so much that I frankly question whether they are trying to resolve the errors in a systematic way (to fix the broader problem) as opposed to at an individual level (simply unsubscribing people).

What’s Next?

I believe sunlight is the best disinfectant, and I always admire companies that are open about both their successes and failures. It reminds me of when GitLab had their downtime incident: instead of battening down the hatches, they spun up a Google Doc and a live YouTube stream and brought their customers in to help rectify the issue. They earned a lot of goodwill from their community.

If you work for an organization where this article hits a little close to home, I would be open about it, identify where there are failings, and bring your customers in where they can help you understand the primary value they are seeking and how you can craft it. People respect humility in cases of failure.

The reason I am writing this is because I suspect the folks at Bark are good people making some mistakes, and I suspect other companies are making similar mistakes, so I figured this might be a useful article to mull on.

The post The Risks of Unsolicited and Automated Engagement appeared first on Jono Bacon.

on October 20, 2017 10:12 PM

This week we’ve been protecting our privacy with LineageOS and playing Rust. Telegram get fined, your cloud is being used to mine Bitcoin, Google announces a new privacy-focused product tier, North Korea hacks a UK TV studio, a new fully branded attack vector is unveiled and Purism reach their funding goal for the Librem 5.

It’s Season Ten Episode Thirty-Three of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 20, 2017 07:35 PM

Earlier this year I worked a bit with our logo and proposed a small change to it – the first change to the logo in 5 years. The team approved, but for various reasons the new logo did not make it into 17.10. Now we’re ready to push it out to the world.

For the last five years, we’ve served two versions of the logo – one for small rendered sizes and one for larger sizes – because the whiskers needed to look good in all sizes. This has been slightly confusing for people who want to use the Xubuntu logo outside the material curated by the Xubuntu team. To be honest, the team itself has been a bit confused at times too and sometimes special arrangements have been made.

The new logo solves this problem – there is now one version for all sizes. While fixing this small annoyance (by essentially making the weight of the whiskers something in between the old versions), I improved their shape and distance to the head a little bit. Finally, I cleaned up some path nodes to make the head vector slightly less complex – without changing the looks of it too much.

Xubuntu logo icon (from left): Version for large sizes (2012), updated version from 2017, version for small sizes (2012)

I will be working to get the logo spread out everywhere as soon as possible starting from now. If you are using the Xubuntu logo on your website (or any place really), take action now to update it.

The new logo is already available from the Brand Resources page on the Xubuntu website.

on October 20, 2017 06:41 PM

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In August, about 170 work hours were dispatched among 13 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours is the same as last month, but we have a new sponsor in the pipeline.

The security tracker currently lists 52 packages with a known CVE, and the dla-needed.txt file lists 49. The number of packages with open issues decreased slightly compared to last month, but we’re not yet back to the usual situation.

Thanks to our sponsors

New sponsors are in bold.


on October 20, 2017 01:03 PM

GStreamer now has support for I-frame-only (aka keyframe) trick mode playback of DASH streams. It works only on DASH streams with ISOBMFF (aka MP4) fragments, and only if these contain all the required information. This is something I have wanted to blog about for many months already, and it’s even included in the GStreamer 1.10 release.

When trying to play back a DASH stream at rates much higher than real-time (say 32x), or playing the stream in reverse, you can easily run into various problems. This was already supported by GStreamer in older versions, for DASH streams as well as local files and HLS streams, but it was far from ideal. What usually happens is that you run out of available network bandwidth (you need to be able to download the stream 32x faster than usual) or out of CPU/GPU resources (it needs to be decoded 32x faster than usual), and even if all that works, there’s no point in displaying 960 (30fps at 32x) frames per second.

To get around that, GStreamer 1.10 can now (if explicitly requested with GST_SEEK_FLAG_TRICKMODE_KEY_UNITS) only download and decode I-frames. Depending on the distance of I-frames in the stream and the selected playback speed, this looks more or less smooth. Also depending on that, this might still yield too many frames to be downloaded or decoded in real time, so GStreamer also measures the distance between I-frames and how fast data can be downloaded, and checks whether decoders and sinks can catch up, to decide whether to skip over a couple of I-frames and maybe only download every third I-frame.
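The skip decision described above can be illustrated with a simplified model. This is an illustration only, not the actual GStreamer heuristic; the function name, parameters, and the fixed display budget are all assumptions made for the sketch:

```python
import math

def keyframe_step(iframe_interval_s: float, rate: float,
                  max_decode_fps: float, display_fps: float = 25.0) -> int:
    """Return N so that only every Nth I-frame is downloaded and decoded.

    At playback rate `rate`, I-frames that are `iframe_interval_s` apart in
    media time are due on screen every `iframe_interval_s / rate` seconds of
    wall-clock time. If that demands more frames per second than the decoder
    or the display can sustain, skip ahead to every Nth I-frame instead.
    """
    wall_interval = iframe_interval_s / abs(rate)  # reverse rates behave alike
    needed_fps = 1.0 / wall_interval               # I-frames due per second
    budget_fps = min(max_decode_fps, display_fps)  # what we can actually show
    return max(1, math.ceil(needed_fps / budget_fps))

# With 2-second keyframe spacing, 32x playback only needs 16 I-frames per
# second, so nothing is skipped; at 128x we would show every third I-frame.
print(keyframe_step(2.0, 32.0, max_decode_fps=60.0))   # → 1
print(keyframe_step(2.0, 128.0, max_decode_fps=60.0))  # → 3
```

The real implementation additionally accounts for measured download throughput, which this sketch leaves out.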

If you want to test this, grab the playback-test from GStreamer, select the trickmode key-units mode, and seek in a DASH stream while providing a higher positive or negative (reverse) playback rate.

Let us know if you run into any problems with any specific streams!

Short Implementation Overview

From an implementation point of view, this works by having the DASH element in GStreamer (dashdemux) not only download the ISOBMFF fragments but also parse the headers of each to get the positions and distances of the I-frames in the fragment. Based on that, it then decides which ones to download and whether to skip ahead one or more fragments. The ISOBMFF headers are then passed to the MP4 demuxer (qtdemux), followed by discontinuous buffers that only contain the actual I-frames and nothing else. While this sounds rather simple from a high-level point of view, getting all the details right took Edward Hervey and myself a couple of months of work.

Currently the heuristics for deciding which I-frames to download and how much to skip ahead are rather minimal, but it’s working fine in many situations already. A lot of tuning can still be done though, and some streams are working less well than others which can also be improved.

on October 20, 2017 10:53 AM

October 19, 2017

Since Ubuntu 17.10 has just been released, I have added a new feature to ucaresystem Core that lets the user upgrade their distribution to the next stable version or, optionally, to the next development version of Ubuntu. For those who are not familiar with the ucaresystem app, it is an automation […]
on October 19, 2017 11:11 PM

Kubuntu 17.10 has been released, featuring the beautiful Plasma 5.10 desktop from KDE.

Codenamed “Artful Aardvark”, Kubuntu 17.10 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.13-based kernel, KDE Frameworks 5.38, Plasma 5.10.5 and KDE Applications 17.04.3.


Kubuntu has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdenlive, Firefox and LibreOffice, and stability improvements to KDE Plasma.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

Download 17.10 or read about how to upgrade from 17.04.

on October 19, 2017 07:06 PM
Thanks to all the hard work from our contributors, Lubuntu 17.10 has been released! With the codename Artful Aardvark, Lubuntu 17.10 is the 13th release of Lubuntu, with support until July of 2018. What is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to […]
on October 19, 2017 06:55 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 17.10.

Xubuntu 17.10 is a regular release and will be supported for 9 months, until July 2018. If you need a stable environment with longer support time, we recommend that you use Xubuntu 16.04 LTS instead, or wait for 18.04, the next LTS version to be released in April 2018.

The final release images are available as torrents and direct downloads from
xubuntu.org/getxubuntu/

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

Highlights and Known Issues

Highlights

  • The GNOME Font Viewer is now included by default. This application simplifies viewing and installing fonts.
  • Client side decorations (CSD) now consume much less space with the Greybird GTK+ theme.
  • New device, mimetype, and monochrome panel icons have been included with the elementary-xfce icon theme.

We usually link directly to the Ubuntu release notes, but there are several significant improvements that affect all flavors and our users:

  • Accelerated video playback with Intel hardware should now work more reliably out of the box. The changes might also bring some performance improvements for Parole and Chromium users. More information here.
  • Bluetooth and USB audio devices should now work better by default due to changes in BlueZ and PulseAudio.
  • Driverless printing has been added to Ubuntu. This provides support for most modern printers: IPP Everywhere, Apple AirPrint, Mopria, PCLm, and Wi-Fi Direct are supported. Other printers can still be added from the Printers dialog.

Known Issues

The system encryption password is set before the keyboard locale (1047384). Workaround: start the installation with the correct keymap. Use F3 to set your keymap before choosing Try or Install Xubuntu from the boot menu.

At times the panel can show two network icons. This appears to be a race condition which we have not been able to rectify in time for the release. As far as we know this is an appearance issue only; if you wish, you can restart networking, the affected plugin, or the panel. This fixes the issue in your running session but does not prevent it from reappearing.

For more information on affecting bugs, bug fixes and a list of new package versions, please refer to the Release Notes.


on October 19, 2017 04:54 PM

It’s another great Ubuntu release day, with fresh versions of Ubuntu, Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu Kylin, Ubuntu MATE, Ubuntu Studio, and my personal favorite: Xubuntu

This has been a comparatively quiet development cycle for Xubuntu. With increased development on Xfce as we prepare for Xfce 4.14, fewer Xubuntu-specific changes took place this cycle. Thankfully, there are still plenty of goodies to get excited about.

  • Appearance Updates: Greybird‘s client side decorations (CSD) have been refreshed and now consume much less space. elementary-xfce, our preferred icon theme, has been updated and includes new device, mimetype, and panel icons. And we have a fancy new wallpaper.
  • Application Updates: This is the first release of Xubuntu to feature GNOME Font Viewer, a handy tool for font management. LibreOffice, Firefox, and Thunderbird have been updated to their latest versions (5.4, 56, and 52.4 respectively). On the Xfce side, Dictionary, Genmon Plugin, Mount Plugin, Exo, and Tumbler have been updated to take advantage of the latest GTK+ version and continue the march toward Xfce 4.14.
  • Technical Updates: GTK+ 3.26, Python 3.6, and Linux 4.13 are all included. Thanks to the Ubuntu Desktop team, hardware accelerated video, improved bluetooth audio, and driverless printing round out a solid development cycle.

Screenshots

Download

Download Xubuntu 17.10 from Xubuntu.org.  It’s available in both 32-bit and 64-bit varieties.

What’s Next?

After the release festivities calm down, work will begin on Xubuntu 18.04, our next LTS release. These are always our most active cycles as we polish the work that we’ve been doing the past 18 months and prepare for a 3-year support window. A few things we already have planned…

  • Replacing the Sound Indicator with the Xfce PulseAudio Plugin, a very capable replacement with more features landing soon.
  • Replacing the Xfce Indicator Plugin with the Xfce StatusNotifier Plugin, a fully compatible and better maintained plugin with a few new tricks.
  • Another wallpaper contest to showcase the community’s artful taste.
  • And plenty more as we begin the blueprint process!

In Case You Missed It

on October 19, 2017 04:06 PM
We are happy to announce the release of our latest version, Ubuntu Studio 17.10 Artful Aardvark! As a regular version, it will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes for a complete list […]
on October 19, 2017 03:56 PM

19th October, London, UK: Canonical today announced the release of Ubuntu 17.10 featuring a new GNOME desktop on Wayland, and new versions of KDE, MATE and Budgie to suit a wide range of tastes. On the cloud, 17.10 brings Kubernetes 1.8 for hyper-elastic container operations, and minimal base images for containers. This is the 27th release of Ubuntu, the world’s most widely used Linux, and forms the baseline for features in the upcoming Long Term Support enterprise-class release in April 2018.

“Ubuntu 17.10 is a milestone in our mission to enable developers across the cloud and the Internet of Things” said Mark Shuttleworth, CEO and Founder of Canonical. “With the latest capabilities in Linux, it provides a preview of the next major LTS and a new generation of operations for AI, container-based applications and edge computing.”

Enhanced security and productivity for developers

The Atom editor and Microsoft Visual Studio Code are emerging as the new wave of popular development tools, and both are available across all supported releases of Ubuntu including 16.04 LTS and 17.10.

The new default desktop features the latest version of GNOME with extensions developed in collaboration with the GNOME Shell team to provide a familiar experience to long-standing Ubuntu users. 17.10 will run Wayland as the default display server on compatible hardware, with the option of Xorg where required.

Connecting to WiFi in public areas is simplified with support for captive portals. Firefox 56 and Thunderbird 52 both come as standard together with the latest LibreOffice 5.4.1 suite. Ubuntu 17.10 supports driverless printing with IPP Everywhere, Apple AirPrint, Mopria, and WiFi Direct. This release enables simple switching between built-in audio devices and Bluetooth.

Secure app distribution with snaps

In the 6 months since April 2017, the number of snaps has doubled with over 2000 now available for Ubuntu, Debian, Solus and other Linux distributions. Snaps are a single delivery and update mechanism for an application across multiple Linux releases, and improve security by confining the app to its own set of data. Hiri, Wavebox, and the Heroku CLI are notable snaps published during this cycle.
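For readers unfamiliar with how snaps are built: each snap is described by a declarative snapcraft.yaml. Below is a minimal, hypothetical example of the format; the snap name, part name, and command path are all invented for illustration:

```yaml
name: hello-world          # hypothetical snap name
version: '1.0'
summary: A minimal example snap
description: |
  Sketch of the declarative format snapcraft uses to build a snap.
confinement: strict        # confine the app to its own set of data
grade: stable

parts:
  hello:
    plugin: dump           # copy files from `source` into the snap
    source: .

apps:
  hello-world:
    command: bin/hello     # assumes the part provides bin/hello
```

Running `snapcraft` in a directory containing this file produces a single .snap package that installs identically across the supported distributions.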

Ubuntu 17.10 features platform snaps for GNOME and KDE which enable developers to build and distribute smaller snaps with shared common libraries. Delta updates already ensure that snap updates are generally faster, use less bandwidth, and are more reliable than updates to traditional deb packages in Ubuntu.

The catkin Snapcraft plugin enables Robot Operating System (ROS) snaps for secure, easily updated robots and drones. There are many new mediated secure interfaces available to snap developers, including the ability to use Amazon Greengrass and Password Manager.

The latest hardware support and container capabilities

Ubuntu 17.10 ships with the 4.13 based Linux kernel, enabling the latest hardware and peripherals from ARM, IBM, Dell, Intel, and others. The 17.10 kernel adds support for OPAL disk drives and numerous improvements to disk I/O. Namespaced file capabilities and Linux Security Module stacking reinforce Ubuntu’s leadership in container capabilities for cloud and bare-metal Kubernetes, Docker and LXD operations.

Canonical’s Distribution of Kubernetes, CDK, supports the latest 1.8 series of Kubernetes. In addition to supporting the new features of Kubernetes 1.8, CDK also enables native cloud integration with AWS, native deployment and operations on VMWare, Canal as an additional networking choice, and support for the IBM Z and LinuxONE.

Netplan by default

Network configuration has over the years become fragmented between NetworkManager, ifupdown and other tools. 17.10 introduces netplan as the standard declarative YAML syntax for configuring interfaces in Ubuntu. Netplan is backwards compatible, enabling interfaces to continue to be managed by tools like NetworkManager, while providing a simple overview of the entire system in a single place. New installations of Ubuntu 17.10 will use Netplan to drive systemd-networkd and NetworkManager. Desktop users will see their system fully managed by NetworkManager as in previous releases. On Ubuntu server and in the cloud, users now have their network devices assigned to systemd-networkd in netplan. Ifupdown remains supported; upgrades will continue to use ifupdown and it can be installed for new machines as needed.
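As a sketch of the new declarative syntax (the interface name here is illustrative), a minimal netplan file that hands a single wired interface to systemd-networkd with DHCP could look like this:

```yaml
# /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd       # desktop installs use NetworkManager instead
  ethernets:
    enp3s0:                # illustrative interface name
      dhcp4: true
```

After editing, `sudo netplan apply` regenerates the backend configuration and applies it to the running system.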

-Ends-

About Canonical
Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.

For further information please click here.

on October 19, 2017 03:19 PM

FOSDEM is one of the world's premier meetings of free software developers, with over five thousand people attending each year. FOSDEM 2018 takes place 3-4 February 2018 in Brussels, Belgium.

This email contains information about:

  • Real-Time communications dev-room and lounge,
  • speaking opportunities,
  • volunteering in the dev-room and lounge,
  • related events around FOSDEM, including the XMPP summit,
  • social events (the legendary FOSDEM Beer Night and Saturday night dinners provide endless networking opportunities),
  • the Planet aggregation sites for RTC blogs

Call for participation - Real Time Communications (RTC)

The Real-Time dev-room and Real-Time lounge are about all things involving real-time communication, including: XMPP, SIP, WebRTC, telephony, mobile VoIP, codecs, peer-to-peer, privacy and encryption. The dev-room is a successor to the previous XMPP and telephony dev-rooms. We are looking for speakers for the dev-room, and volunteers and participants for the tables in the Real-Time lounge.

The dev-room is only on Sunday, 4 February 2018. The lounge will be present for both days.

To discuss the dev-room and lounge, please join the FSFE-sponsored Free RTC mailing list.

To be kept aware of major developments in Free RTC, without being on the discussion list, please join the Free-RTC Announce list.

Speaking opportunities

Note: if you used FOSDEM Pentabarf before, please use the same account/username.

Real-Time Communications dev-room: deadline 23:59 UTC on 30 November. Please use the Pentabarf system to submit a talk proposal for the dev-room. On the "General" tab, please look for the "Track" option and choose "Real Time Communications devroom". Link to talk submission.

Other dev-rooms and lightning talks: some speakers may find their topic is in the scope of more than one dev-room. You are encouraged to apply to more than one dev-room and also to consider proposing a lightning talk, but please be kind enough to tell us if you do this by filling out the notes in the form.

You can find the full list of dev-rooms on this page and apply for a lightning talk at https://fosdem.org/submit

Main track: the deadline for main track presentations is 23:59 UTC 3 November. Leading developers in the Real-Time Communications field are encouraged to consider submitting a presentation to the main track.

First-time speaking?

FOSDEM dev-rooms are a welcoming environment for people who have never given a talk before. Please feel free to contact the dev-room administrators personally if you would like to ask any questions about it.

Submission guidelines

The Pentabarf system will ask for many of the essential details. Please remember to re-use your account from previous years if you have one.

In the "Submission notes", please tell us about:

  • the purpose of your talk
  • any other talk applications (dev-rooms, lightning talks, main track)
  • availability constraints and special needs

You can use HTML and links in your bio, abstract and description.

If you maintain a blog, please consider providing us with the URL of a feed with posts tagged for your RTC-related work.

We will be looking for relevance to the conference and dev-room themes: presentations aimed at developers of free and open source software, about RTC-related topics.

Please feel free to suggest a duration between 20 minutes and 55 minutes but note that the final decision on talk durations will be made by the dev-room administrators based on the received proposals. As the two previous dev-rooms have been combined into one, we may decide to give shorter slots than in previous years so that more speakers can participate.

Please note FOSDEM aims to record and live-stream all talks. The CC-BY license is used.

Volunteers needed

To make the dev-room and lounge run successfully, we are looking for volunteers:

  • assisting with video recording and live streaming (FOSDEM provides the equipment),
  • organizing one or more restaurant bookings (depending upon the number of participants) for the evening of Saturday, 3 February,
  • participating in the Real-Time lounge,
  • helping attract sponsorship funds for the dev-room to pay for the Saturday night dinner and any other expenses,
  • circulating this Call for Participation (text version) to other mailing lists

Related events - XMPP and RTC summits

The XMPP Standards Foundation (XSF) has traditionally held a summit in the days before FOSDEM. There is discussion about a similar summit taking place on 2 February 2018. XMPP Summit web site - please join the mailing list for details.

Social events and dinners

The traditional FOSDEM beer night occurs on Friday, 2 February.

On Saturday night, there are usually dinners associated with each of the dev-rooms. Most restaurants in Brussels are not so large, so these dinners have space constraints and reservations are essential. Please subscribe to the Free-RTC mailing list for further details about the Saturday night dinner options and how you can register for a seat.

Spread the word and discuss

If you know of any mailing lists where this CfP would be relevant, please forward this email (text version). If this dev-room excites you, please blog or microblog about it, especially if you are submitting a talk.

If you regularly blog about RTC topics, please send details about your blog to the planet site administrators:

Planet sites and admin contacts:

  • All projects: Free-RTC Planet (http://planet.freertc.org), contact planet@freertc.org
  • XMPP: Planet Jabber (http://planet.jabber.org), contact ralphm@ik.nu
  • SIP: Planet SIP (http://planet.sip5060.net), contact planet@sip5060.net
  • SIP (Español): Planet SIP-es (http://planet.sip5060.net/es/), contact planet@sip5060.net

Please also link to the Planet sites from your own blog or web site as this helps everybody in the free real-time communications community.

Contact

For any private queries, contact us directly using the address fosdem-rtc-admin@freertc.org and for any other queries please ask on the Free-RTC mailing list.

The dev-room administration team:

on October 19, 2017 08:33 AM

CNI for LXC

Serge Hallyn

It’s now possible to use CNI (the container networking interface) with lxc. Here is an example. This requires some recent upstream patches, so for simplicity let’s use the lxc packages for zesty in ppa:serge-hallyn/atom. Set up a zesty host with that ppa, i.e.

sudo add-apt-repository ppa:serge-hallyn/atom
sudo add-apt-repository ppa:projectatomic/ppa
sudo apt update
sudo apt -y install lxc1 skopeo skopeo-containers jq

(To run the oci template below, you’ll also need to install umoci from git://github.com/openSUSE/umoci. Alternatively, you can use any standard container; the oci template is not strictly needed, it’s just a nice point to make.)

Next, set up the CNI configuration, i.e.

cat <<EOF | sudo tee /etc/lxc/simplebridge.cni
{
  "cniVersion": "0.3.1",
  "name": "simplenet",
  "type": "bridge",
  "bridge": "cnibr0",
  "isDefaultGateway": true,
  "forceAddress": false,
  "ipMasq": true,
  "hairpinMode": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.10.0.0/16"
  }
}
EOF

The way lxc will use CNI is to call out to it using a start-host hook, that is, a program (hook) which is called in the host namespaces right before the container starts. We create the hook using:

cat <<'EOF' | sudo tee /usr/share/lxc/hooks/cni
#!/bin/sh

CNIPATH=/usr/share/cni

CNI_COMMAND=ADD CNI_CONTAINERID=${LXC_NAME} CNI_NETNS=/proc/${LXC_PID}/ns/net CNI_IFNAME=eth0 CNI_PATH=${CNIPATH} ${CNIPATH}/bridge < /etc/lxc/simplebridge.cni
EOF
sudo chmod +x /usr/share/lxc/hooks/cni

This tells the ‘bridge’ CNI program our container name and the network namespace in which the container is running, and sends it the contents of the configuration file which we wrote above.

Now create a container,

sudo lxc-create -t oci -n a1 -- -u docker://alpine

We need to edit the container configuration file, telling it to use our new hook,

sudo sed -i '/^lxc.net/d' /var/lib/lxc/a1/config
cat <<EOF | sudo tee -a /var/lib/lxc/a1/config
lxc.net.0.type = empty
lxc.hook.start-host = /usr/share/lxc/hooks/cni
EOF

Now we’re ready! Just start the container with

sudo lxc-execute -n a1

and you’ll get a shell in the alpine container with networking configured.
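One thing the example above does not cover is cleanup: CNI’s ADD command has a matching DEL command that releases the container’s IP allocation when it stops. As a sketch (not from the original post; the hook name lxc.hook.post-stop and the empty CNI_NETNS for DEL are assumptions to verify against your lxc and CNI versions), a cleanup hook could be wired up the same way:

```shell
# Sketch of a cleanup counterpart to the start-host hook above.
# CNI_COMMAND=DEL asks the bridge plugin to release the container's IP
# allocation; by post-stop time the netns is gone, so CNI_NETNS is left
# empty. Written to /tmp here for illustration only.
cat <<'EOF' | tee /tmp/cni-down
#!/bin/sh

CNIPATH=/usr/share/cni

CNI_COMMAND=DEL CNI_CONTAINERID=${LXC_NAME} CNI_NETNS= CNI_IFNAME=eth0 CNI_PATH=${CNIPATH} ${CNIPATH}/bridge < /etc/lxc/simplebridge.cni
EOF
chmod +x /tmp/cni-down
```

A real setup would install this next to the ADD hook (e.g. under /usr/share/lxc/hooks) and reference it from the container config with `lxc.hook.post-stop`.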

Disclaimer

The opinions expressed in this blog are my own views and not those of Cisco.


on October 19, 2017 03:08 AM

October 18, 2017

MAAS KVM Pods

Ubuntu Insights

This is a guest post by Michael Iatrou, cloud solutions architect at Canonical

OpenStack is the dominant solution in the IaaS space, fueled by the need for reliable, scalable and interoperable private cloud infrastructure to accommodate cloud native applications. Through OpenStack’s open APIs, tenants can easily deploy elaborate virtual (overlay) networks, integrate with a variety of storage backends, even leverage modern hypervisor-like machine containers (LXD) for bare metal performance. Although the tooling allows a full-fledged OpenStack deployment on just a single machine, the intrinsic efficiencies that OpenStack’s design promises materialize at a certain scale — typically at least 12 servers.

There are many environments that would benefit from the merits of virtualization, including but not limited to hydrocarbon exploration sites, diagnostic imaging for health centers, back-office for restaurants, and retail store IT operations. But their per-location server footprint is very small, usually no more than a handful of machines. We will explore here how to use the same tooling that delivers operational efficiency for large OpenStack and Kubernetes deployments in order to revamp small scale environments.

MAAS stands for Metal As A Service and enables you to treat physical servers like an elastic, cloud-like resource. MAAS 2.2 introduces the “pod” as an operational concept. A MAAS pod effectively describes the availability of resources and enables the creation (or composition) of machines with a set of those resources. A user can allocate the needed CPU, RAM, and (local or remote) storage resources manually (using the MAAS UI or CLI) or dynamically (using Juju or the MAAS API). That is, machines can be allocated “just in time”, based on the CPU, RAM, and storage constraints of a specific workload. MAAS 2.2 supports two types of pods: (1) physical systems with Intel RSD and (2) virtual machines with KVM (using the virsh interface).

We have published a detailed tutorial with step by step instructions on how to install and configure a testbed environment for MAAS KVM pods. You can quickly transform a physical server into a lightweight, reliable virtual machine management node. You can even test it on your laptop.

MAAS has been designed to be a modern, agile machine provisioning and infrastructure modelling solution, enabling both physical and virtual infrastructure. Beyond MAAS’ extensive adoption in cloud native environments, it can optimize the utilisation of existing in-house, small scale IT infrastructure using VM pods. The pod abstraction is very powerful and flexible, and it has quickly gained attention, with many new features coming soon! Try it out!

on October 18, 2017 06:40 PM

All good things must come to an end; in this particular case, however, it’s rather a beginning! We are almost done on our road to Artful, which means that 17.10 is just around the corner: the official Ubuntu 17.10 release is due tomorrow. Of course, that doesn’t mean we stop working on it right away: you will have bug fixes and security updates for 9 months of support! It’s thus time to close this series on Artful, and for this, we are going to tackle one topic we didn’t get to yet, which is quite important approaching the release: upgrading from a previous Ubuntu release! For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 16: Flavors, upgrades and sessions!

Different kind of sessions

Any new Ubuntu installation will have at most two sessions available, whose base name is “Ubuntu”:

  • The “Ubuntu” session corresponds to GNOME Shell experience with our modifications (Ubuntu Dock, appindicator support, our theme, small behavior changes…). You have probably seen those and followed their development on previous blog posts. This is the default session running under Wayland.
  • The “Ubuntu on Xorg” session, similar to the previous one, but running on Xorg as the name indicates :) Users who can’t run Wayland (using the nvidia proprietary driver or unsupported hardware) should automatically fall back and only be presented with that session.

Ubuntu default installation sessions

Those two sessions are available when you install the ubuntu-session package.

However, more sessions built around GNOME technologies are available in the Ubuntu archives: the Unity and vanilla GNOME ones. The first is available as soon as you install the unity-session binary package. The vanilla GNOME sessions simply appear once gnome-session is installed. After a reboot, GDM presents all of them for selection when logging in.

All available sessions

Let’s see how that goes on upgrades.

Upgrading from Ubuntu 17.04 or Ubuntu 16.04 (LTS)

People running Ubuntu 17.04 or our last LTS, Ubuntu 16.04, today are generally using Unity. As with every release, when people upgrade from one default, we move them to the next default. It means that on upgrade, those users will reboot into our default Ubuntu GNOME Shell experience, with the “Ubuntu” and “Ubuntu on Xorg” sessions available. The “Ubuntu” session is the default and will lead you to our default and fully supported desktop:

Ubuntu GNOME Shell on 17.10

However, we don’t remove packages that are still available in the distribution on upgrades. Those users will thus have an additional “Unity” session option, which they can use to continue running Unity 7 (and thus, on Xorg only). Indeed, Unity is still present, in universe (meaning we don’t commit to strong maintenance or security updates), but we will continue to look after it on a best-effort basis (at least until our next LTS). Some features are slightly modified, either to avoid colliding with the GNOME Shell experience or to follow the upstream philosophy more closely, like the headerbars I mentioned in my previous blog post. In a nutshell, don’t expect the exact same experience you used to have, but you will find similar familiarity in the main concepts and components.

Unity on 17.10

Upgrading from Ubuntu GNOME 17.04 or Ubuntu GNOME 16.04

Those people were experiencing a more vanilla upstream GNOME experience than our Ubuntu session. It was a bit of a 50/50 call what to do for those users on upgrades, as they were used to something different. In the end, Ubuntu GNOME users will keep the two upstream vanilla GNOME sessions (“GNOME” and “GNOME on Xorg”), which will remain the default after upgrades.

Vanilla GNOME Shell session on 17.10

In addition to those sessions, we still want to give our users an easy option to try our new default experience, and thus the two “Ubuntu” sessions (Wayland & Xorg) are automatically installed as well on upgrade. Those sessions are just around for the user’s convenience. :)

Fallback

I want to quickly mention and give kudos to Olivier, who fixed a pet bug of mine, ensuring that automatic Wayland-to-Xorg fallback will always select the correct session (Ubuntu falls back to Ubuntu on Xorg, and GNOME to GNOME on Xorg). His patches were discussed upstream and are now committed in the gdm tree. This will quickly be available as a stable release update, conveniently, as it only impacts upgrades.

In a nutshell

To sum all that up:

  • New installs will have the “Ubuntu” and “Ubuntu on Xorg” options, Ubuntu under Wayland being the default.
  • Upgrades from Ubuntu 16.04 and 17.04 will get Ubuntu (default), Ubuntu on Xorg and Unity sessions installed.
  • Upgrades from Ubuntu GNOME 16.04 and 17.04 will get GNOME (default), GNOME on Xorg, Ubuntu and Ubuntu on Xorg sessions installed.
  • When Wayland is not supported, the session will fall back to the corresponding Xorg one.

And this is it for our long “road to Artful” blog post series! I hope you had as much fun reading it as I had writing it and detailing the work done by the Ubuntu desktop team to make this transition, we hope, a success. It was really great as well to be able to interact with and answer the many comments that you posted in the dedicated section. Thanks to everyone participating there.

You can comment on the community HUB and participate and contribute from there! We will likely repeat the same experiment and keep you posted on our technical progress for the Ubuntu 18.04 LTS release. You should expect fewer posts as, of course, the changes shouldn’t be as drastic as they were this cycle. We will mostly focus on stabilization, bug fixes and general polish!

Until then, enjoy the upcoming Ubuntu 17.10 release, watch the ubuntu.com website for the release announcement on desktop, servers, flavors, iot and clouds, join our community HUB… and see you soon around! :)

Didier

on October 18, 2017 04:15 PM
Mathy Vanhoef discovered that wpa_supplicant and hostapd incorrectly handled WPA2. A remote attacker could use this issue with key reinstallation attacks to obtain sensitive information. (CVE-2017-13077, CVE-2017-13078, CVE-2017-13079, CVE-2017-13080, CVE-2017-13081, CVE-2017-13082, CVE-2017-13086, CVE-2017-13087, CVE-2017-13088) Imre Rad discovered that wpa_supplicant and hostapd incorrectly handled invalid characters in passphrase parameters. A remote attacker could use this issue to cause a denial of service. (CVE-2016-4476). Imre Rad […]
on October 18, 2017 11:40 AM

October 17, 2017

I’ve started writing for the Heptio Blog, check out my new article on Upgrading to Kubernetes 1.8 with Kubeadm.

Also if you’re looking for more interactive help with Kubernetes, make sure you check out our brand new Kubernetes Office Hours, where we livestream developers answering user questions about Kubernetes. Starting tomorrow (18 October) at 1pm and 8pm UTC, hope to see you there!

 

on October 17, 2017 04:09 PM

October 16, 2017

Since the Ubuntu Rally in New York, the Ubuntu desktop team has been full speed ahead on the latest improvements we can make to our 17.10 Ubuntu release, Artful Aardvark. Last Thursday was our Final Freeze, and I think it’s a good time to reflect on some of the changes and fixes that happened during the rally and the following weeks. This list isn’t exhaustive at all, of course, and only partially covers changes in our default desktop session, featuring GNOME Shell by default. For more background on our current transition to GNOME Shell in artful, you can refer back to our decisions regarding our default session experience as discussed in my blog post.

Day 15: Final desktop polish before 17.10 is out

GNOME 3.26.1

Most of you will have noticed already, but most GNOME modules have been updated to their 3.26.1 release. This means that Ubuntu 17.10 users will be able to enjoy the latest and greatest from the GNOME project. It’s been fun to follow the latest development release again, report bugs, catch regressions and follow new features.

GNOME 3.26.1 introduces, in addition to many bug fixes, improvements, and documentation and translation updates, resizeable tiling support, which is a great feature that many people will surely take advantage of! Here is the video that Georges made and blogged about while developing the feature, for those who haven’t had a look yet:

A quick Ubuntu Dock fix rampage

I’ve already praised the excellent Dash to Dock upstream here many times for their responsiveness and friendliness. A nice illustration of this occurred during the Rally. Nathan grabbed me in the Desktop room and asked if a particular dock behavior was desired (scrolling on the edge switching between workspaces). It was the first time I had heard of that feature, and finding the behavior possibly confusing, I pointed him to the upstream bug tracker, where he filed a bug report. Even before I pinged upstream about it, they noticed the report and engaged in the discussion. We came to the conclusion that the behavior is unpredictable for most users, and the fix was quickly in, which we backported to our own Ubuntu Dock along with some other glitch fixes.

The funny part is that Chris witnessed this, and reported on that particularly awesome cooperation effort in a recent Linux Unplugged show.

Theme fixes and suggested actions

With our transition to GNOME Shell, we are thus following the GNOME upstream philosophy more closely, and dropped our headerbar patches. Indeed, in line with Unity’s vertical-space-optimization paradigm of stripping the title bar and menus from maximized applications, we previously distro-patched a lot of GNOME apps to revert the large headerbar. This isn’t the case anymore. However, it created a different class of issues: action buttons are generally now at the top and not noticeable with our Ambiance/Radiance themes.

Enabled suggested action button (can't really notice it)

We thus introduced some styling for the suggested action, which consequently makes such buttons noticeable in the top bar (this is how the upstream Adwaita theme implements it as well). After a lot of discussion about what color to use (we tried, of course, different shades of orange, aubergine…), and working with Daniel from elementary (proof!), Matthew suggested using the green color from the retired Ubuntu Touch color palette, which is a better fit than anything we could ever have come up with ourselves. Then came some gradient work to make it match our theme, and some post-upload fixes for various states (thanks to Amr for reporting some bugs on them so quickly, which forced me to fix them during my flight back home :p). We hope this change will help users get into the habit of looking for actions in the GNOME headerbars.

Enabled suggested action button

Disabled suggested action button

But that’s not all on the theme front! A lot of people were complaining about the double gradient between the shell and the title bar. For the final freeze, we just uploaded some small changes by Marco making titlebars, headerbars and gtk2 applications look a little bit better, whether focused or unfocused, with one or no menus. Another change was made in the GNOME Shell CSS to make our Ubuntu font appear a little less blurry than it was under Wayland. A long-term fix is under investigation by Daniel.

Headerbar on focused application before theme change

Headerbar on focused application with new theme fix

Title bar on focused application before theme change

Title bar on focused application with new theme fix

Title bar on unfocused application with new theme fix

Settings fixes

The Dock settings panel has evolved quite a lot since its first inception.

First shot at Dock settings panel

Bastien, who has worked a lot on GNOME Control Center upstream, was kind enough to give us a bunch of feedback. While some of it was too intrusive so late in the cycle, we implemented most of his suggestions. Of course, even though we live less than 3 km away from each other, we collaborated as proper geeks over IRC ;)

Here is the result:

Dock settings after suggestions

One of the best pieces of advice was to change the background for lists to white (we worked on that with Sébastien), making them much more readable:

Settings universal access panel before changes

Settings universal access panel after changes

Settings search Shell provider before changes

Settings search Shell provider after changes

i18n fixes in the Dock and GNOME Shell

Some (but not all!) items accessible by right-clicking on applications in the Ubuntu Dock, or even in the upstream Dash in the vanilla session, weren’t translated.

Untranslated desktop actions

After a little bit of poking, it appeared that only Desktop Actions were impacted (what we called “static quicklists” in the Unity world). Those were standardized, some years after we introduced them in Unity, in Freedesktop spec revision 1.1.
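For context, a Desktop Action as standardized in Desktop Entry Specification 1.1 is a named group inside a .desktop file, referenced from the Actions= key. A minimal illustrative example (the application name and action here are made up, not from any shipped application) looks like this:

```ini
[Desktop Entry]
Type=Application
Name=Example App
Exec=example-app
Actions=new-window;

[Desktop Action new-window]
Name=New Window
Exec=example-app --new-window
```

The Name= keys are the translatable strings, in the main entry and in each action group alike; the bug described below concerns the latter.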

Debian, like Ubuntu, extracts translations from desktop files to include them in langpacks. Glib is thus distro-patched to load those translations correctly. However, the patch was never updated to ensure action names return localized strings, as few people use those actions. After a little bit of debugging, I fixed the patch in Ubuntu and proposed it back in the Debian Bug Tracking System. It is now merged there for the next glib release (as the bug impacts Ubuntu, Debian and all their derivatives).

We weren’t impacted by this bug previously because, when we introduced this in Unity, the actions weren’t standardized yet and glib didn’t support them; Unity thus loaded the actions directly itself. It’s nice to have fixed that bug now so that other people can benefit from it, using Debian, vanilla GNOME Shell on Ubuntu, or any other combination!

Translated desktop actions

Community HUB

Alan recently announced the Ubuntu community hub, where developers, users and new contributors can exchange.

When looking at this at the sprint, I decided that it could be a nice place for the community to comment on these blog posts rather than creating another silo here. Indeed, the current series of blog posts has more than 600 comments; I tried to be responsive on most of those requiring an answer, but I obviously can’t scale. Thanks to those in the community who already took the time to reply to already-answered questions there! However, I think our community hub is a better place for those kinds of interactions, and you should see below an automatically created topic on the Desktop section of the hub corresponding to this blog post (if all goes well. Of course… it worked when we tested it ;)). This is a read-only, embedded version, and clicking on it should direct you to the corresponding topic on the discourse instance, where you can contribute and exchange. I really hope this can foster even more participation inside the community and motivate new contributors!

(edit: seems like there is still some random issues on topic creation, for the time being, I created a topic manually and you can comment on here)

Other highlights

We got some multi-monitor fixes, HiDPI enhancements, indicator extension improvements and many others… Part of the team worked with Jonas from Red Hat on mutter and Wayland scaling factors. It was a real pleasure to meet him and have him tag along during the evenings and our numerous walks throughout Manhattan as well! It was an excellent sprint followed by nice follow-up weeks.

If you want to get a little taste of what happened during the Ubuntu Rally, Chris from Jupiter Broadcasting recorded some vlogs from his trip there, one of them being on the event itself:

As usual, if you are eager to experiment with these changes before they migrate to the artful release pocket, you can head over to our official Ubuntu desktop team transitions ppa to get a taste of what’s cooking!

Now, it’s almost time to release 17.10 (just a few days ahead!), but I will probably blog about the upgrade experience in my next, and last for this cycle, report on our Ubuntu GNOME Shell transition!

Edit: As told before, feel free to comment on our community HUB as the integration below doesn’t work for now.

on October 16, 2017 04:15 PM

Following on from yesterday’s first spin of the 17.10 RC images by the Ubuntu release team, today the RC images (marked Artful Final on the QA tracker) have been re-spun and updated.

Please update your ISOs if you downloaded previous images, and test as before.

Please help us by testing as much as you have time for. Remember, in particular we need i386 testers, on “bare metal” rather than VMs if possible.

Builds are available from:

http://iso.qa.ubuntu.com/qatracker/milestones/383/builds

The CD image icon to the left of the ISO names is a link taking you to download URLs/options.

Take note of the Ubuntu Community ISO testing party on Monday 16th at 15:00 UTC:

https://community.ubuntu.com/t/ubuntu-17-10-community-iso-testing/458

Please attend and participate if you are able. The #ubuntu-on-air IRC channel on irc.freenode.net can be joined via a web client found beneath the live stream on ubuntuonair.com, or of course you can join with a normal IRC client.

Happy testing,

Rik Mills

Kubuntu Developer
Kubuntu Release team

on October 16, 2017 01:06 PM

October 15, 2017

I got to spend a few days with Andy and his wife Gaby and their exciting new dog, Iwa. I don’t get to see them as often as I should, but since they’ve now moved rather closer to Castle Langridge we’re going to correct that. And since they’re in the Cotswolds I got to peer at a whole bunch of things. Mostly things built of yellow stone, admittedly. It is a source of never-ending pleasure that despite twenty-three years of conversation we still never run out of things to talk about. There is almost nothing more delightful than spending an afternoon over a pint arguing about what technological innovation you’d take back to Elizabethan England. (This is a harder question than you’d think. Sure, you can take your iPhone back and a solar charger, and it’d be an incredibly powerful computer, but what would they use it for? They can do all the maths that they need; it’s just slower. Maybe you’d build a dynamo and gift them electricity, but where would you get the magnets from? Imagine this interspersed with excellent beer from the Volunteer and you have a flavour of it.)

There were also some Rollright Stones, as guided by Julian Cope’s finest-guidebook-ever The Modern Antiquarian. But that’s not the thing.

The thing is Snowshill Manor. There was a bloke and his name was Charles Paget Wade. Did some painting (at which he was not half bad), did some architecting (also not bad), wrote some poetry. And also inherited a dumper truck full of money by virtue of his family’s sugar plantations in the West Indies. This money he used to assemble an exceedingly diverse collection of Stuff, which you can now go and see by looking around Snowshill. What’s fascinating about this is that he didn’t just amass the Stuff into a big pile and then donate the house to the National Trust as a museum to hold it. Every room in the house was individually curated by him; this room for these objects, that room for those, what he called “an attractive set of rooms pictorially”. There’s some rhyme and some reason — one of the upstairs rooms is full of clanking, rigid, iron bicycles, and another full of suits of samurai armour — but mostly they’re things he just felt fitted together somehow. He’s like Auri from the Kingkiller Chronicles; this room cries out for this thing to be in it. (If you’ve read the first two Kingkiller books but haven’t read The Slow Regard of Silent Things, go and read it and know more of Auri than you currently do.) There’s a room with a few swords, and a clock that doesn’t work, and a folding table, and a box with an enormously ornate lock and a set of lawn bowls, and a cabinet containing a set of spectacles and a picture of his grandmother and a ball carved from ivory inside which is a second ball carved from the same piece of ivory inside which is yet another ball. The rhyme and the reason were all in his head, I think. I like to imagine that sometimes he’d wake up in his strange bedroom with its huge carved crucifix at four in the morning and scurry into the house to carefully carry a blue Japanese vase from the Meridian Room into Zenity and then sit back, quietly satisfied that the cosmic balance was somehow improved. 
Or to study a lacquered cabinet for an hour and a half and then tentatively shift it an inch to the left, so it sits there just so. So it’s right. I don’t know if the order, the placing, the detail of the collection actually speaks as loudly to anyone as it spoke to him, and it doesn’t matter. You could spend the rest of your life hearing the stories about everything there and never get off the ground floor.

Take that room of samurai armour, for example. One of the remarkable things about the collection (there are so many remarkable things about the collection) is that rather a lot of it is Oriental — Japanese or Chinese, mainly — but Wade never went to China or Japan. A good proportion of the objects came from other stately homes, selling off items after the First World War — whether because none of the family were left, or for financial reasons, or maybe just that the occupants came home and didn’t want it all any more. The armour is a case in point; Wade needed some plumbing done on the house and went off to chat to a plumber’s merchant about it, where he found a box of scrap metal. Since the bloke was the Lord High Emperor of looking for objects that caught his fancy, he had a look through this discarded pile and found in it… about fifteen suits of samurai armour. (A large box, to be sure.) So he asked the merchant what the score was, and was told: oh, those, yeah, take them if you want them.

This sort of thing doesn’t happen to me all that much.

Outside that room, just hanging on the wall, is the door from a carriage; one of the ones with the large wheels, all pulled by horses. Like the cabs that Sherlock Holmes rode in, or that the Queen takes to coronations. It was monogrammed ECC, and had one of those coats of arms where you just know that the family have been around for a while because two different shields have been quartered in it and then it’s been quartered again. After some entirely baseless speculation we discovered that it was owned by Countess Cowper. She married Lord Palmerston; her brother was William Lamb, Lord Melbourne, who was another Prime Minister and had the Australian city named after him; his wife was Lady Caroline Lamb, who infamously described Byron as “mad, bad, and dangerous to know”. History is all intertwined around itself.

None of the clocks in the house work. Apparently at one point Wade had a guest over who glanced at a clock and assumed she had plenty of time to catch her train. Of course, she missed it, and on hearing from him that of course the clocks don’t tell the right time, she was not best pleased. Not sure who it was. Virginia Woolf, or someone like that.

There is too much stuff. He can’t possibly have kept it all in his head. You can’t possibly keep it all in, walking around. Visitors ought to be banned from going into more than three or four rooms; by the time you’ve got halfway through it’s just impossible to give each place the attention it deserves. There are hardly any paintings; Wade liked actual things, not drawings or representations. It’s not an art gallery. It’s a craftsmanship gallery; Wade sought out things that were made, that showed beauty or artistry or ingenuity in their construction. Objects, not drawings; stuff that demonstrates human creation at work. The house is like walking around inside his head, I think. (“Sometimes I think the asylum is a head. We’re inside a huge head that dreams us all into being. Perhaps it’s your head, Batman.”)

Next time you’re near Evesham, go visit.

on October 15, 2017 10:49 PM

Adam Conrad, on behalf of the Ubuntu Release Team, has spun up a set of images for everyone with serial 20171015.

Those images are *not* final images (ISO volid and base-files are still not set to their final values), intentionally, as we had some hiccups with langpack uploads that are landing just now.

That said, we need as much testing as possible, bugs reported (and, if you can, fixed), so we can turn around and have slightly more final images produced on Monday morning. If we get no testing, we get no fixing, so no time like the present to go bug-hunting.

https://lists.ubuntu.com/archives/ubuntu-release/2017-October/004224.html

Originally posted to the ubuntu-release mailing list on Sun Oct 15 05:40:12 UTC 2017  by Adam Conrad on behalf of the Ubuntu Release Team

on October 15, 2017 07:37 AM

October 13, 2017

Adam Conrad, on behalf of the Ubuntu Release Team, is pleased to announce that artful has entered the Final Freeze period in preparation for the final release of Ubuntu 17.10 next week.

The current uploads in the queue will be reviewed and either accepted or rejected as appropriate by pre-freeze standards, but anything from here on should fall into one of two broad categories:

1) Release critical bugs that affect ISOs, installers, or otherwise can’t be fixed easily post-release.

2) Bug fixes that would be suitable for post-release SRUs, which we may choose to accept, reject, or shunt to -updates for 0-day SRUs on a case-by-case basis.

For unseeded packages that aren’t on any media or in any supported sets, it’s still more or less a free-for-all, but do take care not to upload changes that you can’t readily validate before release.  That is, ask yourself if the current state is “good enough”, compared to the burden of trying to fix all the bugs you might accidentally be introducing with your shiny new upload.

We will shut down cronjobs and spin some RC images late Friday or early Saturday once the archive and proposed-migration have settled a bit, and we expect everyone with a vested interest in a flavour (or two) and a few spare hours here and there to get to testing to make sure we have another uneventful release next week.  Last minute panic is never fun.

https://lists.ubuntu.com/archives/ubuntu-release/2017-October/004221.html

Originally posted to the ubuntu-release mailing list on Fri Oct 13 08:42 UTC 2017 by Adam Conrad on behalf of the Ubuntu Release Team

on October 13, 2017 12:46 AM

October 12, 2017

S10E32 – Possessive Open Chicken - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we’ve been playing Wifiwars, discuss what happened at the Ubuntu Rally in New York, serve up some command line lurve and go over your feedback.

It’s Season Ten Episode Thirty-Two of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

sudo snap install pulsemixer
pulsemixer
  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • This week’s cover image is taken from Wikimedia.

Ubuntu Rally

Trouble comes to NYC

Inside the Ubuntu Rally

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 12, 2017 02:00 PM

October 11, 2017

MAAS 2.3.0 beta 2 released!

Andres Rodriguez

Hello MAASters!

I’m happy to announce that MAAS 2.3.0 beta 2 has now been released, and it is currently available in a PPA and as a snap.

PPA Availability

For those running Ubuntu Xenial who would like to use beta 2, please use the following PPA:

ppa:maas/next

Snap Availability

For those running from the snap, or who would like to test the snap, please use the Beta channel on the default track:

sudo snap install maas --devmode --beta

MAAS 2.3.0 (beta 2)

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta2

  • LP: #1711760    [2.3] resolv.conf is not set (during commissioning or testing)

  • LP: #1721108    [2.3, UI, HWTv2] Machine details cards – Don’t show “see results” when no tests have been run on a machine

  • LP: #1721111    [2.3, UI, HWTv2] Machine details cards – Storage card doesn’t match CPU/Memory one

  • LP: #1721548    [2.3] Failure on controller refresh seem to be causing version to not get updated

  • LP: #1710092    [2.3, HWTv2] Hardware Tests have a short timeout

  • LP: #1721113    [2.3, UI, HWTv2] Machine details cards – Storage – If multiple disks, condense the card instead of showing all disks

  • LP: #1721524    [2.3, UI, HWTv2] When upgrading from older MAAS, Storage HW tests are not mapped to the disks

  • LP: #1721587    [2.3, UI, HWTv2] Commissioning logs (and those of v2 HW Tests) are not being shown

  • LP: #1719015    $TTL in zone definition is not updated

  • LP: #1721276    [2.3, UI, HWTv2] Hardware Test tab – Table alignment for the results doesn’t align with titles

  • LP: #1721525    [2.3, UI, HWTv2] Storage card on machine details page missing red bar on top if there are failed tests

  • LP: #1722589    syslog full of “topology hint” logs

  • LP: #1719353    [2.3a3, Machine listing] Improve the information presentation of the exact tasks MAAS is running when running hardware testing

  • LP: #1719361    [2.3 alpha 3, HWTv2] On machine listing page, remove success icons for components that passed the tests

  • LP: #1721105    [2.3, UI, HWTv2] Remove green success icon from Machine listing page

  • LP: #1721273    [2.3, UI, HWTv2] Storage section on Hardware Test tab does not describe each disk to match the design

on October 11, 2017 08:57 PM

October 10, 2017

During our user testing sessions on ubuntu.com, we often receive feedback from users about content on the site (“I can’t find this”, “I’d like more of that” or “I want to know this”). Accumulated feedback like this contributed to our decision here on the Web team to find a more standardised way of designing our product landing pages. We have two main motivations for doing this work:

1) To make our users’ lives easier

The www.ubuntu.com site has a long legacy of bespoke page design, which has resulted in an inconsistent content strategy across some of our pages. In order to evaluate and compare our products effectively, our users need consistent information delivered in a consistent way.

2) To make our lives easier

Here at Canonical, we don’t have huge teams to write copy, make videos or create content for our websites. Because of this, our product pages need to be quick and easy to design, build and maintain – which they will be if they all follow a standardised set of guidelines.

After a process of auditing the current site content, researching competitors, and refining a few different design routes, we reached a template that we all agreed was better than what we currently had in most cases. Here are some annotated photos of the process.

Web pages printed out with post-it notes

First we completed a thorough content audit of existing ubuntu.com product pages. Here the coloured post-it notes denote different types of content.

Flip-chart of hand-written list of components for a product page

Our audit of the site resulted in this unprioritised ‘short-list’ of possible types of content to be included on a product page.

Early wireframe sketches 1, 2 and 3

Some examples of early wireframe sketches.

Here is an illustrated wireframe of the new template. I use this illustrated wireframe as a guideline for our stakeholders, designers and developers to follow when considering creating new product pages or enhancing existing ones.

Diagram of a product page template for ubuntu.com

We have begun rolling out this new template across our product pages – e.g. our server-provisioning page. Our plan is to continue to test, watch and measure the pages using this template, and then to iterate on the design accordingly. In the meantime, it’s already making our lives here on the Web Team easier!

on October 10, 2017 04:50 PM

Librem 5 Plasma Mobile
In the past few days, the campaign to crowd-fund a privacy-focused smartphone, built on top of Free software and in collaboration with its community, reached its funding goal of 1.5 million US dollars. While many people doubted that the crowdfunding campaign would succeed, it is actually hardly surprising if we look at what the librem 5 promises to bring to the table.

1. Unique Privacy Features: Kill-switches and auditable code

Neither Apple nor Android has a convincing story when it comes to privacy. Ultimately, they’re both under the thumb of a restrictive government, which, to put it mildly, doesn’t give a shit about privacy and has created the most intrusive global spying system in the history of mankind. Thanks to the U.S., we now live in the dystopian future of Orwell’s 1984. It’s time to put an end to this with hardware kill switches that cut off power to the radio, microphone and camera, so phones can’t be hacked into any more to listen in on your conversations, take photos you never knew were taken, and send them to people you would definitely never voluntarily share them with. All that comes with auditable code, which is something that we as citizens should demand from our government. With a product on the market supplying these features, it becomes very hard for your government to argue that its staff really need to use iPhones or Android devices. We can and we should demand this level of privacy from those who govern us and handle our data. It’s a matter of trust.
Companies will find this out first, since they’re driven by the same challenges but are usually much quicker to adopt technology.

2. Hackable software means choice

The librem 5 will run a mostly standard Debian system with a kernel that you can actually upgrade. The system will be fully hackable, so it will be easy for others to create modified phone systems based on the librem. This is so far unparalleled, and it brings the freedom the Free software world has long waited for; it will enable friendly competition and collaboration. All this leads to choice for the users.

3. Support promise

Can a small company such as Purism actually guarantee support for a whole mobile software stack for years into the future? Perhaps. The point is, even in case they fail (and I don’t see why they would!), the device isn’t unsupported. With the librem, you’re not locked into a single vendor’s ecosystem; you buy into support from the whole Free software community. This means that there is a very credible support story, as support doesn’t have to come from a single vendor, and the workload is relatively limited in the first place. Debian (which is the base for PureOS) will be maintained anyway, and so will Plasma, as tens of millions of users already rely on it. The relatively small part of the code that is unique to Plasma Mobile (and thus isn’t used on the desktop) is not that hard to maintain, so support is manageable, even for a small team of developers. (And if you’re not happy with it, and think it can be done better, you can even take part.)

4. It builds and enables a new ecosystem

The Free software community has long waited for this hackable device. Many developers just love to see a platform they can build software for that follows their goals, that allows development with a proven stack. Moreover, convergence allows users to blur the lines between their devices, and advancing that goal hasn’t been on the agenda with the current duopoly.
The librem 5 will put Matrix on the map as a serious contender for communication. Matrix has rallied quite a bit of momentum to bring more modern mobile-friendly communication, chat and voice to the Free software eco-system.
Overall, I expect the librem 5 to make Free software (not just open-source-licensed, but openly developed Free software) a serious player also on mobile devices. The Free software world needs such a device, and now is the time to create it. With this huge success comes the next big challenge, actually creating the device and software.

The unique selling points of the librem 5 definitely strike a chord with a number of target groups. If you’re doubtful that its first version can fully replace your current smartphone, that may be justified, but don’t forget that there’s a large number of people and organisations that can live with a more limited feature set just fine, given the huge advantages that private communication and knowing what’s going on in your device bring with them.
The librem 5 really brings something very compelling to the table, and those are the reasons why it got funded. It is going to be a viable alternative to Android and iOS devices that allows users to enjoy their digital life privately: to switch off tracking, and to sleep comfortably.
Are you convinced this is a good idea? Don’t hesitate to support the campaign and help us reach its stretch goals!

on October 10, 2017 02:21 PM

October 09, 2017

Welcome to the seventh Ubuntu OpenStack development summary!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

OpenStack Distribution

Stable Releases

Current in-flight SRUs for OpenStack related packages:

Ceph 10.2.9 point release

Ocata Stable Point Releases

Pike Stable Point Releases

Horizon Newton->Ocata upgrade fixes

Recently released SRUs for OpenStack related packages:

Newton Stable Point Releases

Development Release

OpenStack Pike was released in August and is installable on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing of the gnocchi snap, which is currently installable from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services. Due to some incompatibilities between gnocchi/cradox/python-rados, the snap is currently based on the 3.1.11 release; hopefully we should work through the issues with the 4.0.x release in the next week or so, as well as having multiple tracks set up for this snap so you can consume a version known to be compatible with a specific OpenStack release.

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features – specifically support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you’re interested in plans for the Queens release of the charms next year, this is a great place to get a preview and provide the team feedback on the features that are planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of follow-up work to complete to ensure it’s published alongside other deployment project guides, but hopefully that should wrap up in the next few days. Please give the guide a spin and log any bugs that you might find!

Bugs

Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms; from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we’ve closed out 70 bugs and brought the triage queue down to a much more manageable level. The recently introduced bug triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.

Releases

In the run-up to the August charm release, a number of test scenarios which previously required manual execution were automated as part of the release testing activity; this automation work reduces the effort needed to produce a release, and means that the majority of test scenarios can be run on a regular basis. As a result, we’re going to move back to a three-month release cycle; the next charm release will be towards the end of November, after the OpenStack Summit in Sydney.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details.
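For what it’s worth, the alternating schedule above can be computed mechanically; here is a small sketch (assuming, purely as an illustration, that “odd/even weeks” refers to ISO week numbers):

```python
from datetime import date

def meeting_hour_utc(d: date) -> int:
    """Return the meeting start hour (UTC) for the week containing d,
    assuming 'odd/even weeks' means odd/even ISO week numbers."""
    week = d.isocalendar()[1]
    return 10 if week % 2 == 1 else 17

# 9 October 2017 falls in ISO week 41 (odd), so the meeting is at 1000 UTC.
print(meeting_hour_utc(date(2017, 10, 9)))
```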

EOM

 


on October 09, 2017 10:44 AM

October 08, 2017

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events in foreign destinations related to your interest areas to coincide with other travel intentions?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites, like Agenda du Libre and GriCal, which aggregate events from multiple communities so that people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?
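As a rough illustration of the harvesting idea, here is a minimal sketch of an events table and a browse query. It uses Python's built-in sqlite3 as a stand-in for PostgreSQL, and the schema, event names and URLs are all invented for illustration:

```python
import sqlite3

# A tiny events table: one row per harvested event.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        uid TEXT PRIMARY KEY,   -- iCalendar UID from the source feed
        summary TEXT,
        location TEXT,
        start_date TEXT,        -- ISO 8601 dates keep sorting simple
        source_url TEXT
    )
""")

# Events harvested from different community aggregators (made-up data).
harvested = [
    ("ev-1", "MiniDebConf Prishtina", "Prishtina, Kosovo",
     "2017-10-07", "https://example.org/feed1.ics"),
    ("ev-2", "OpenStack Summit", "Sydney, Australia",
     "2017-11-06", "https://example.org/feed2.ics"),
]
conn.executemany("INSERT OR REPLACE INTO events VALUES (?, ?, ?, ?, ?)",
                 harvested)

# Browse upcoming events, soonest first.
upcoming = conn.execute(
    "SELECT summary, start_date FROM events ORDER BY start_date"
).fetchall()
for summary, start in upcoming:
    print(start, summary)
```

The `INSERT OR REPLACE` keyed on the iCalendar UID means re-harvesting the same feed simply refreshes existing rows rather than duplicating them.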

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
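A deadline reminder feed like that could be as simple as emitting one iCalendar VTODO component per deadline (per RFC 5545); the deadlines and names below are made up for illustration:

```python
from datetime import date

def vtodo(uid: str, summary: str, due: date) -> str:
    """Render one iCalendar VTODO component for a deadline."""
    return "\r\n".join([
        "BEGIN:VTODO",
        f"UID:{uid}",
        f"SUMMARY:{summary}",
        f"DUE;VALUE=DATE:{due.strftime('%Y%m%d')}",
        "END:VTODO",
    ])

# Hypothetical deadlines harvested for events the user follows.
deadlines = [
    ("cfp-1", "Submit talk proposal", date(2017, 11, 1)),
    ("travel-1", "Apply for travel sponsorship", date(2017, 10, 20)),
]

calendar = "\r\n".join(
    ["BEGIN:VCALENDAR", "VERSION:2.0", "PRODID:-//example//events//EN"]
    + [vtodo(u, s, d) for u, s, d in deadlines]
    + ["END:VCALENDAR"]
)
print(calendar)
```

Any calendar client that can subscribe to an iCalendar URL could then surface these to-dos alongside the user's own appointments.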

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina, the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

on October 08, 2017 05:36 PM

Diving Langedijk

Sebastian Kügler

There’s hardly a better way to spend a Sunday than diving, even in early fall when the weather gets a little colder and rainier. We went to Zeeland, on the Dutch coast, to a dive spot named Langedijk for two shallow shore dives. The water was a somewhat brisk 14°C, but our drysuits kept us toasty even through the long dives.

Steurgarnaal
Fluwelen zwemkrab
Weduweroos
Pitvis
Zakpijp
botervis
Kreeft
on October 08, 2017 05:11 PM

October 06, 2017

#UbuntuRally New York

KDE at #UbuntuRally New York

I was happy to attend Ubuntu Rally last week in New York with Aleix Pol to represent KDE.
We were able to accomplish many things during this week, and that is a result of having direct contact with Snap developers. So a big thank you to Canonical for sponsoring me. I now have all of KDE’s core applications, and many KDE extragear applications, in the edge channel looking for testers. I have also made a huge dent in the massive KDE PIM snap! I hope to have it done by the weekend. Most of our issue list made it onto TO-DO lists 🙂 So from a KDE perspective, this sprint was a huge success!

on October 06, 2017 08:50 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10.5h. During this time, I continued my work on exiv2. I finished reproducing all the issues and then went on to do code reviews to confirm that vulnerabilities were not present when the issue was not reproducible. I found two CVEs where the vulnerability was present in the wheezy version, and I posted patches in the upstream bug tracker: #57 and #55.

Then another batch of 10 CVEs appeared and I started the process over… I’m currently trying to reproduce the issues.

While doing all this work on exiv2, I also uncovered a build failure in the package in experimental (reported here).

Misc Debian/Kali work

Debian Live. I merged 3 live-build patches prepared by Matthijs Kooijman and added an armel fix to cope with the rename of the orion5x image into the marvell one. I also uploaded a new live-config to fix a bug with the keyboard configuration. Finally, I also released a new live-installer udeb to cope with a recent live-build change that broke the locale selection during the installation process.

Debian Installer. I prepared a few patches on pkgsel to merge a few features that had been added to Ubuntu, most notably the possibility to enable unattended-upgrades by default.

More bug reports. I investigated much further my problem with non-booting qemu images when they are built by vmdebootstrap in a chroot managed by schroot (cf #872999) and while we have much more data, it’s not yet clear why it doesn’t work. But we have a working work-around…

While investigating issues seen in Kali, I opened a bunch of reports on the Debian side:

  • #874657: pcmanfm: should have explicit recommends on lxpolkit | polkit-1-auth-agent
  • #874626: bin-nmu request to complete two transitions and bring back some packages in testing
  • #875423: openssl: Please re-enable TLS 1.0 and TLS 1.1 (at least in testing)

Packaging. I sponsored two uploads (dirb and python-elasticsearch).

Debian Handbook. My work on updating the book mostly stalled. The only thing I did was to review the patch about wireless configuration in #863496. I must really get back to work on the book!

Thanks

See you next month for a new summary of my activities.


on October 06, 2017 08:30 AM

October 05, 2017

I am writing this from my hotel room in Bologna, Italy before going out for a pizza. After a successful Factory Acceptance Test today, I might also allow myself to celebrate with a beer. But anyway, here is what I have been up to in the FLOSS world for the last month and a bit.

Debian

  • Uploaded gramps (4.2.6) to stretch-backports & jessie-backports-sloppy.
  • Started working on the latest release of node-tmp. It needs further work due to new documentation being included etc.
  • Started working on packaging the latest goocanvas-2.0 package. Everything is ready except for producing some autopkgtests.
  • Moved node-coffeeify experimental to unstable.
  • Updated the Multimedia Blends Tasks with all the latest ITPs etc.
  • Reviewed doris for Antonio Valentino, and sponsored it for him.
  • Reviewed pyresample for Antonio Valentino, and sponsored it for him.
  • Reviewed a new parlatype package for Gabor Karsay, and sponsored it for him.

Ubuntu

  • Successfully did my first merge using git-ubuntu for the Qjackctl package. Thanks to Nish for patiently answering my questions, reviewing my work, and sponsoring the upload.
  • Refreshed the gramps backport request to 4.2.6. Still no willing sponsor.
  • Tested Len’s rewrite of ubuntustudio-controls, adding a CPU governor option in particular. There are a couple of minor things to tidy up, but we have probably missed the chance to get it finalised for Artful.
  • Tested the First Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes. Also drafted my first release announcement on the Ubuntu Studio website, which Eylul reviewed and published.
  • Refreshed the ubuntustudio-meta package and requested sponsorship. This was done by Steve Langasek. Thanks Steve.
  • Tested the Final Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes.
  • Started working on a new Carla package, starting from where Víctor Cuadrado Juan left it (ITP in Debian).

on October 05, 2017 07:35 PM

The first steering committee election for Kubernetes is now over. Congratulations to Aaron Crickenberger, Derek Carr, Michelle Noorali, Phillip Wittrock, Quinton Hoole and Timothy St. Clair, who will be joining the newly formed Kubernetes Steering Committee.

If you’re unfamiliar with what the SC does, you can check out their charter and backlog. I was fortunate to work alongside Paris Pittman on executing this election, hopefully the first of many “PB&J Productions”.

To give you some backstory on this: the Kubernetes community has been bootstrapping its governance over the past few years, and executing a proper election as stated in the charter was an important first step. Therefore it was critical for us to run an open election correctly.

Thankfully we can stand on the shoulders of giants. OpenStack and Debian are just two examples of projects with well formed processes that have stood the test of time. We then produced a voter’s guide to give people a place where they could find all the information they needed and the candidates a spot to fill in their platform statements.

This morning I submitted a pull request with our election notes and steps so that we can start building our institutional knowledge on the process, and of course, to share with whomever is interested.

Also, a big shout out to Cornell University for providing CIVS as a public service.

on October 05, 2017 03:41 PM

Hello All,

As you may know, LoCo Council members serve two-year terms, so we now face the difficult task of replacing existing members and restaffing the council as a whole. A special thanks to all the existing members for all of the great contributions they have made while serving on the LoCo Council.

So with that in mind, we are writing this to ask for volunteers to
step forward and nominate themselves or another contributor for the
five open positions. The LoCo Council is defined on our wiki page.

Wiki: https://wiki.ubuntu.com/LoCoCouncil

Team Agenda: https://wiki.ubuntu.com/LoCoCouncilAgenda

Typically, we meet once a month in IRC to go through items on the team agenda; we have also started holding Google Hangouts (the time for these may vary depending on members’ availability). This involves approving new LoCo Teams, re-approving existing LoCo Teams, resolving issues within teams, approving LoCo Team mailing list requests, and anything else that comes along.

We have the following requirements for Nominees:

  • Be an Ubuntu member
  • Be available during typical meeting times of the council
  • Insight into the culture(s) and typical activities within teams is a plus

Here is a description of the current LoCo Council:

They are current Ubuntu Members with a proven track record of activity
in the community. They have shown themselves over time to be able to
work well with others, and display the positive aspects of the Ubuntu
Code of Conduct. They should be people who can judge contribution
quality without emotion while engaging in an interview/discussion that
communicates interest, a welcoming atmosphere, and which is marked by
humanity, gentleness, and kindness.

If this sounds like you, or a person you know, please e-mail the LoCo
Council with your nomination(s) using the following e-mail address:
loco-council<at>lists.ubuntu.com.

Please include a few lines about yourself, or whom you’re nominating,
so we can get a good idea of why you/they’d like to join the council,
and why you feel that you/they should be considered. If you plan on
nominating another person, please let them know, so they are aware.

We welcome nominations from anywhere in the world, and from any LoCo
team. Nominees do not need to be a LoCo Team Contact to be nominated
for this post. We are however looking for people who are active in
their LoCo Team.

The time frame for this process is as follows:

Nominations initially opened: Friday 1st September, 2017

Nominations will close: Wednesday 25th October 2017

We will then forward the nominations to the Community Council, requesting that they make their selections at their next meeting.

on October 05, 2017 12:25 PM

October 04, 2017

MAAS 2.3.0 beta 1 released

Andres Rodriguez

MAAS 2.3.0 (beta1)

New Features & Improvements

Hardware Testing

MAAS 2.3 beta overhauls and improves the visibility of hardware test results and information. This includes various changes across MAAS:

  • Machine Listing page
    • Surface progress and failures of hardware tests, actively showing when a test is pending, running, successful or failed.
  • Machine Details page
    • Summary tab – Provides hardware testing information about the different components (CPU, memory, storage).
    • Hardware Tests tab – A complete redesign of the Hardware Tests tab, which now shows a list of test results per component and adds the ability to view more details about each test.
    • Storage tab – Adds the ability to view test results per storage component.

UI Improvements

Machines, Devices, Controllers

MAAS 2.3 beta 1 introduces a new design for the node summary pages:

  • The “Summary” tab now shows only information about the machine, in a completely new design.
  • A “Settings” tab has been introduced, which provides the ability to edit the node.
  • The “Logs” tab now consolidates the commissioning output and the installation log output.

Other UI improvements

Other UI improvements that have been made for MAAS 2.3 beta 1 include:

  • Add a DHCP status column on the ‘Subnets’ tab.
  • Add architecture filters
  • Update VLAN and Space details page to no longer allow inline editing.
  • Update VLAN page to include the IP ranges tables.
  • Convert the Zones page into AngularJS (away from YUI).
  • Add warnings when changing a Subnet’s mode (Unmanaged or Managed).

Rack Controller Deployment

MAAS 2.3 beta 1 now adds the ability to deploy any machine with the rack controller installed. This is currently available only via the API.

API Improvements

MAAS 2.3 beta 1 introduces the volume_groups, raids, cache_sets, and bcaches fields in the API output of the machines endpoint.

Known issues:

The following is a list of known UI issues affecting hardware testing:

Issues fixed in this release

https://launchpad.net/maas/+milestone/2.3.0beta1

  • #1711320    [2.3, UI] Can’t ‘Save changes’ and ‘Cancel’ on machine/device details page
  • #1696270    [2.3] Toggling Subnet from Managed to Unmanaged doesn’t warn the user that behavior changes
  • #1717287    maas-enlist doesn’t work when provided with serverurl with IPv6 address
  • #1718209    PXE configuration for dhcpv6 is wrong
  • #1718270    [2.3] MAAS improperly determines the version of some installs
  • #1718686    [2.3, master] Machine lists shows green checks on components even when no tests have been run
  • #1507712    cli: maas logout causes KeyError for other profiles
  • #1684085    [2.x, Accessibility] Inconsistent save states for fabric/subnet/vlan/space editing
  • #1718294    [packaging] dpkg-reconfigure for region controller refers to an incorrect network topology assumption
on October 04, 2017 02:05 PM

October 03, 2017

I’ve recently been looking at issues regarding separate /boot partitions and people running out of free space. Being a long-time user of Ubuntu and LVM, I have a /boot partition that was appropriately sized for the kernels of the release I originally installed (Ubuntu 13.04!), but now that I’m running Artful Aardvark, which will become Ubuntu 17.10, my /boot partition is a bit small.

I don’t have any extraneous files in my /boot partition, like .old-dkms files (being worked on in bug 1515513) or old kernels. So I need some solution to make the files that are there take up less space. Fortunately, it is possible to choose the compression method used by update-initramfs when it makes initrd images. The default, “COMPRESS=gzip”, can be found in /etc/initramfs-tools/initramfs.conf.

Because configuration options are also read from /etc/initramfs-tools/conf.d/, after the initial initramfs.conf file, I decided to create a file named “compressed” in that directory with the following contents:

COMPRESS=xz
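As a one-liner, this drop-in can be written like so (same path and contents as above):

```shell
# Create the drop-in that switches initramfs compression to xz
echo 'COMPRESS=xz' | sudo tee /etc/initramfs-tools/conf.d/compressed
```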

After which I needed to update the initrd.img files in my /boot partition. I can do this with:

sudo update-initramfs -u -k all

It’s easy to confirm the compression method used:

$ file /boot/initrd.img-4.12.0-13-generic
/boot/initrd.img-4.12.0-13-generic: XZ compressed data

Now I’ve managed to free up some space in my /boot partition, though update-initramfs may take a bit longer to run and boot time may increase slightly. That sounds better than reinstalling though!
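For the curious, the reclaimed space can be checked directly; a quick sketch (output will of course vary per system):

```shell
# Free space on the /boot partition after regenerating the images
df -h /boot
# Per-file sizes of the regenerated initrd images
ls -lh /boot/initrd.img-*
```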

on October 03, 2017 07:43 PM
Not so long ago I went to effectively recompile NetworkManager and fix up a minor bug in it. It built fine across all architectures and was considered installable, so I was expecting it to just migrate across. At the time, glibc was at 2.26 in artful-proposed and NetworkManager was built against it; the release pocket, however, was at glibc 2.24. In Ubuntu we have a ProposedMigration process in place which ensures that newly built packages do not regress in the number of architectures built for or installable on, and do not regress themselves or any reverse dependencies at runtime.

Thus before my build of NetworkManager was considered for migration, it was tested in the release pocket against packages in the release pocket. Specifically, since the package metadata only requires glibc >= 2.17, NetworkManager was tested against the glibc currently in the release pocket, which should just work fine...
autopkgtest [21:47:38]: test nm: [-----------------------
test_auto_ip4 (__main__.ColdplugEthernet)
ethernet: auto-connection, IPv4 ... FAIL ----- NetworkManager.log -----
NetworkManager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by NetworkManager)
At first I only saw failing tests, which I assumed were transient failures, so they were retried a few times. Then I looked at the autopkgtest log and saw the error messages above. Perplexed, I started a lxd container with Ubuntu artful, enabled proposed, and installed just network-manager from artful-proposed; indeed, a simple `NetworkManager --help` failed with the above error from the linker.

I am too young to know what dependency hell means: ever since I started using Linux (Ubuntu 7.04), all glibc symbols have been versioned, and dpkg-shlibdeps would generate correct minimum dependencies for a package. Alas, in this case readelf confirmed that /usr/sbin/NetworkManager does indeed require GLIBC_2.25, while the dpkg dependency is only >= 2.17.

Reading the readelf output further, I checked that all of the glibc symbols actually used are 2.17 or lower, and only the "Version needs section '.gnu.version_r'" referenced a GLIBC_2.25 symbol. Inspecting the dpkg-shlibdeps code, I noticed that it does not parse that section; it only searches through the dynamic symbols used to establish the minimum required version.
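The mismatch between the two views can be seen directly with readelf; a sketch of that comparison, run here against /bin/ls rather than the NetworkManager binary:

```shell
# The "Version needs" (.gnu.version_r) section: version references the
# linker recorded, whether or not a symbol at that version is still used
readelf -V /bin/ls | grep -A 2 'Version needs'

# The dynamic symbols: the per-symbol versions that dpkg-shlibdeps
# actually walks to compute the minimum glibc dependency
readelf --dyn-syms /bin/ls | grep GLIBC | head
```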

Things started to smell fishy. On one hand, I trust dpkg-shlibdeps to generate the right dependencies. On the other hand, I also trust the linker not to tell lies. Hence I opened a Debian BTS bug report about this issue.

At this point, I really wanted to figure out where the reference to 2.25 comes from. Clearly it was not from any private symbols, as then the reference would be on 2.26. Checking the glibc ABI lists, I found there were only a handful of symbols marked as 2.25:
$ grep 2.25 ./sysdeps/unix/sysv/linux/x86_64/64/libc.abilist
GLIBC_2.25 GLIBC_2.25 A
GLIBC_2.25 __explicit_bzero_chk F
GLIBC_2.25 explicit_bzero F
GLIBC_2.25 getentropy F
GLIBC_2.25 getrandom F
GLIBC_2.25 strfromd F
GLIBC_2.25 strfromf F
GLIBC_2.25 strfroml F
Blindly grepping for these in the network-manager source tree, I found the following:
$ grep explicit_bzero -r configure.ac src/
configure.ac: explicit_bzero],
src/systemd/src/basic/string-util.h:void explicit_bzero(void *p, size_t l);
src/systemd/src/basic/string-util.c:void explicit_bzero(void *p, size_t l) {
src/systemd/src/basic/string-util.c:        explicit_bzero(x, strlen(x));
First of all, it seems that network-manager includes a partial embedded copy of systemd. Secondly, that code is compiled into a temporary library and has autoconf detection logic to use explicit_bzero. It also has an embedded implementation of explicit_bzero for when it is not available in libc; however, it does not have a FORTIFY_SOURCE implementation of said function (__explicit_bzero_chk), as was later pointed out to me. And whilst this function is compiled into an intermediary noinst library, no functions that use explicit_bzero end up used by the NetworkManager binary. To prove this, I dropped all code that uses explicit_bzero and rebuilt the package against glibc 2.26, and voila: it only had a version reference on glibc 2.17, as expected from the end result's usage of shared symbols.

At this point a toolchain bug was the suspect. It seems that whilst the explicit_bzero shared symbol got optimised out, the version reference on 2.25 persisted into the linked binaries. At that time a snapshot version of binutils was in use in the archive, and in fact forcefully downgrading binutils resulted in a correct compilation, with the versions table referencing only glibc 2.17.

Matthias then took over a tarball of object files and filed an upstream bug report against binutils: "[2.29 Regression] ld.bfd keeps a version reference in .gnu.version_r for symbols which are optimized out". The discussion in that bug report is a bit beyond me, as binutils is black magic to me. All I understood there was "we moved sweep and pass to another place due to some bugs"; doing that introduced this bug, so multiple sweeps and passes are needed to keep the old bugs fixed without regressing this one. Or something like that. Comments and a better description of the binutils fix are welcome.

Binutils got fixed by the upstream developers, the fix was cherry-picked into Debian and Ubuntu, network-manager got rebuilt, and everything is wonderful now. However, it does look like unused, dead-end code paths tripped up optimisations in the toolchain, which managed to slip past distribution package dependency generation and needlessly require a higher version of glibc. I guess the lesson here is: do not embed and compile unused code. Also, I'm not sure why network-manager uses networkd internals like this; maybe systemd should expose more APIs or serialise more state into /run, as most other things query over D-Bus, a private socket, or by establishing watches on /run/systemd/netif. I'll look into that another day.

Thanks a lot to Guillem Jover, Matthias Klose, Alan Modra, H.J. Lu, and others for getting involved. I would not have been able to raise, debug, or fix this issue all by myself.
on October 03, 2017 01:27 PM