February 20, 2018

One of the many excellent suggestions from last year's HackerNews thread, Ask HN: What do you want to see in Ubuntu 17.10?, was to refresh the Ubuntu server's command line installer:


We're pleased to introduce this new installer, which will be the default Server installer for 18.04 LTS, and solicit your feedback.

Follow the instructions below to download the current daily image and install it into a KVM.  Alternatively, you could write it to a flash drive and install a physical machine, or try it in the virtual machine of your choice (VMware, VirtualBox, etc.).

$ # Download the current daily live-server image
$ wget http://cdimage.ubuntu.com/ubuntu-server/daily-live/current/bionic-live-server-amd64.iso
$ # Create a 10GB raw disk image to install onto
$ qemu-img create -f raw target.img 10G
$ # Boot the installer from the ISO, with the disk image attached
$ kvm -m 1024 -boot d -cdrom bionic-live-server-amd64.iso -hda target.img
...
$ # After the installation completes, boot the installed system
$ kvm -m 1024 target.img
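If you'd rather install a physical machine, a common (and destructive!) way to write the ISO to a flash drive is with dd; note that /dev/sdX below is a placeholder for your actual USB device, so double-check it before running:

$ # WARNING: this overwrites everything on /dev/sdX
$ sudo dd if=bionic-live-server-amd64.iso of=/dev/sdX bs=4M status=progress
$ sync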

For those too busy to try it themselves at the moment, I've taken a series of screenshots below, for your review.














Finally, you can provide feedback, bugs, patches, and feature requests against the Subiquity project in Launchpad:



Cheers,
Dustin
on February 20, 2018 08:32 PM

Lookalikes

Benjamin Mako Hill

Hippy/mako lookalikes

Did I forget a period of my life when I grew a horseshoe mustache and dreadlocks, walked around topless, and illustrated this 2009 article in the Economist on the economic boon that hippy festivals represent to rural American communities?


Previous lookalikes are here.

on February 20, 2018 07:45 PM

Debian 7 Wheezy LTS period ends on May 31st and some companies asked Freexian if they could get security support past this date. Since about half of the current team of paid LTS contributors is willing to continue to provide security updates for Wheezy, I have started to work on making this possible.

I just initiated a discussion on debian-devel with multiple Debian teams to see whether it is possible to continue to use debian.org infrastructure to host the wheezy security updates that would be prepared in this extended LTS period.

From the sponsor side, this extended LTS will not work like the regular LTS. It is unrealistic to continue to support all packages and all architectures, so only the packages/architectures requested by sponsors will be supported. The amount invoiced to each sponsor will be directly related to the package list that they ask us to support. We made an estimate (based on history) of how much it costs to support each package, and we split that cost between all the sponsors requesting support for that package. That cost is re-evaluated quarterly and will likely increase over time as sponsors drop out (when they have finished migrating all their machines, for example).

This extended LTS will also have some restrictions in terms of packages that we can support. For instance, we will no longer support the Linux kernel from wheezy: you will have to switch to the kernel used in jessie (or maybe we will maintain a backport ourselves in wheezy). It is also not yet clear whether we can support OpenJDK, since upstream support of version 7 stops at the end of June, and switching to OpenJDK 8 is likely non-trivial. There are likely other unsupportable packages too.

Anyway, if your company needs wheezy security support past the end of May, now is the time to worry about it. Please send us a mail with the list of source packages that you would like to see supported. The more companies get involved, the less it will cost each of them. Our plan is to gather the required data from interested companies in the next few weeks and make a first estimate of the price they will have to pay for the first quarter by mid-March. Once they confirm that they are OK with the offer, we will issue invoices in April so that they can be paid before the end of May.

Note however that we decided that it would not be possible to get extended wheezy support if you are not among the regular LTS sponsors (at bronze level at least). Extended LTS would not be possible without the regular LTS so if you need the former, you have to support the latter too.


on February 20, 2018 04:57 PM

LXD Weekly Status #35

Ubuntu Insights

Introduction

This past week we’ve been focusing on a number of open pull requests, getting closer to merging improvements to our storage volume handling, unix char/block devices handling and the massive clustering branch that’s been cooking for a while.

We’re hoping to see most of those land at some point this coming week.

On the LXC side of things, the focus was on bugfixes and cleanups as well as preparing for the removal of the python3 and lua bindings from the main repository. We’re also making good progress on distrobuilder and hope to start moving some of our images to using it as the build tool very soon.

On the snap front, we’ve now added automatic testing for Ubuntu 17.10, Fedora 27 and CentOS 7, all of which are now passing for all tracks and channels.

We’ve also been doing some work on our CI infrastructure, automating a large portion of it to limit the amount of manual interactions we have with Jenkins.

Upcoming conferences and events

Ongoing projects

The list below is feature or refactoring work which will span several weeks/months and can’t be tied directly to a single Github issue or pull request.

Upstream changes

The items listed below are highlights of the work which happened upstream over the past week and which will be included in the next release.

LXD

LXC

LXCFS

  • Nothing to report

Distribution work

This section is used to track the work done in downstream Linux distributions to ship the latest LXC, LXD and LXCFS as well as work to get various software to work properly inside containers.

Ubuntu

  • Nothing to report this week

Snap

  • Fixed ZFS version detection with built-in kernel modules
  • Tweaked lxd.migrate to better handle btrfs submodules
  • Added a mntns symlink under /var/snap/lxd/common
  • Cherry-picked a large number of bugfixes
on February 20, 2018 02:02 PM

It’s been a few weeks now since FOSDEM, and if you didn’t have a chance to attend or watch the livestream of the FOSDEM 2018 Community DevRoom, my co-chair Leslie and I are doing a round-up of posts on each of the talks to bring you the video and the highlights of each presentation. You can read Rich Sands and Simon Phipps’s pre-FOSDEM preview post here.

You’ve Got Some Explaining to Do! So Use An FAQ!

 

I found this talk very informative, kicking off with a brief history lesson – it was truly amazing hearing the history of the open source release of Java from Simon Phipps and Rich Sands. Releasing Java under the GPL would not have been possible without writing an FAQ, which did have some ramifications, such as “Chief Open Source Officers hated us”!

It’s really worth finding the time to watch the video of the talk; that said, some highlights stood out to us:

  • Write FAQs for developers. If you make them happy, you will cover everyone’s needs. Transparency comes from the no-spin answers developers want to see.
  • Write FAQs to learn the answers yourself, test and refine your strategy, negotiate consensus and see how you craft your message and strategy.
  • The truth, the whole truth and nothing but the truth is what devs want. “Yes it’s hard. That’s why it works.”
  • When open-sourcing code, a corporation may want legal to drive strategy. Radical transparency is legally risky but engenders trust. If you want to build a dev community, it’s needed.
  • Leadership needs legal to be not just a risk minimiser but a trusted partner in a successful launch.
  • Use change tracking in your FAQ. Publish all comments, both good and bad, to build trust.

You can follow Simon and Rich on twitter and follow up with them on the topic!

 

You can read the follow-up posts Leslie has written on Brian Proffitt and Jeremy Garcia, and her first post on Deb Nicholson & Mike McQuaid.

on February 20, 2018 01:30 PM

As mentioned in my earlier blog, I’m giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

on February 20, 2018 11:39 AM

How to run TeamViewer in LXD

Simos Xenitellis

TeamViewer is a popular remote desktop tool. It is the typical tool to use to remotely help colleagues who keep using Windows. You can install TeamViewer on Linux using these instructions on installing TeamViewer on Linux. In this post, though, you will install TeamViewer in a LXD (pronounced LexDee) container on your …

Continue reading

on February 20, 2018 12:15 AM

February 19, 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE, and the dla-needed.txt file lists 23 as well. The number of open issues seems to be stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


on February 19, 2018 05:18 PM

Article on Opensource.com

Matthew Helmke

An article I wrote has just been posted on Opensource.com: How the Grateful Dead were a precursor to Creative Commons licensing.

on February 19, 2018 12:17 PM

I’m joining Weaveworks

Daniel Holbach

Weaveworks

My sabbatical is over and today is my first day working at Weaveworks where I’m joining the Developer Experience team. I’m incredibly excited about this.

I got to know quite a few of my colleagues in the past weeks and they were, without exception, incredibly likeable and smart people. The company believes in open source, is quite diverse and has an office in Berlin – plus, I’ll get to work with Cezzaine, Jonathan and Steve again.

Right from the start the technology really impressed me. Weave Cloud solves key problems many organisations and companies face today: being able to deploy services seamlessly, securely and easily and making monitoring and snapshots obvious and simple-to-use have an immediate impact on what you can do and what you spend your time on.

Here’s what it looks like in action:

If you’re on Google Cloud Platform, you can even use it for free or you can check out the tutorials to play around with it without having to install anything.

It’s going to be great to immerse myself and learn more about the underlying technologies and connect with the Cloud Native communities. It’s a big landscape with lots of activity and overlap, strong roots in the open source world and the passion to make modern workloads a more manageable problem.

At Weaveworks, I’ll be able to work on what I like best: talk to and work with devs, figure out what people need, look at docs and tools, connect people and make people’s lives easier.

One thing makes this experience even sweeter: I’ll get to reconnect with a lot of you folks! If you’re working in the space and I haven’t talked to you in a while, hit me up and let’s catch up soon again!

Alright, I need to start packing and off to the office for today… 🚲

on February 19, 2018 07:26 AM

Stop changing the clocks

Bryan Quigley

Florida, Tennessee, the EU and more are considering one timezone for the entire year - no more changing the clocks. Massachusetts had a group study the issue and recommend making the switch, but only if a majority of Northeast states decide to join them. I would like to see the NJ legislature vote to join them.

Interaction between countries would be helped by having one less factor that can impact collaboration. Below are two examples of ways this will help.

Meeting Times

Let's consider a meeting scheduled in EST with participants from NJ, the EU, and Arizona.

  • NJ - the normal disruption of changing times, but the clock time for the meeting stays the same.
  • Arizona - no clock changes locally, so the clock time for the meeting changes twice a year.
  • EU - they also change their clocks, but at different points in the year than the US, so they see 4 clock time changes for this meeting each year.

This gets more complicated as we add participants from more countries. UTC can help, but any location that changes its clocks has to be considered for both of its UTC offsets.
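To make the EU case concrete, here is a quick illustration using GNU date (assuming the Europe/Berlin zone data is installed); a recurring 15:00 New York meeting lands at different Berlin clock times depending on which side of each clock change you are on:

$ TZ=Europe/Berlin date -d '2018-02-01 15:00 EST'   # both on standard time: 21:00 in Berlin
$ TZ=Europe/Berlin date -d '2018-03-15 15:00 EDT'   # US changed, EU not yet: 20:00 in Berlin
$ TZ=Europe/Berlin date -d '2018-04-15 15:00 EDT'   # both changed: back to 21:00 in Berlin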

Global shift work or On-call

Generally, these are scheduled in UTC, but the shifts people actually work are in their local time. That can be disruptive in other ways, like finding child care.

In conclusion, while these issues may be minor compared to other concerns (like the potential health effects associated with changing the clocks), the impact on global collaboration should also be considered.

on February 19, 2018 12:00 AM

February 18, 2018

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users' email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people, and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing to any expert who takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard so it doesn't need to have your mobile phone number, have permissions to all the data in your phone and be limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people in Switzerland are apparently using Gmail addresses, and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

on February 18, 2018 10:17 PM

February 17, 2018

My Kuro5hin Diary Entries

Benjamin Mako Hill

Kuro5hin logo

Kuro5hin (pronounced “corrosion” and abbreviated K5) was a website created in 1999 that was popular in the early 2000s. K5 users could post stories to be voted upon as well as entries to their personal diaries.

I posted a couple dozen diary entries between 2002 and 2003 during my final year of college and the months immediately after.

K5 was taken off-line in 2016 and the Internet Archive doesn’t seem to have snagged comments or full texts of most diary entries. Luckily, someone managed to scrape most of them before they went offline.

Thanks to this archive, you can now once again hear from 21-year-old-me in the form of my old K5 diary entries which I’ve imported to my blog Copyrighteous. I fixed the obvious spelling errors but otherwise restrained myself and left them intact.

If you’re interested in preserving your own K5 diaries, I wrote some Python code to parse the K5 HTML files for diary pages and import them into WordPress using its XML-RPC API. You’ll need to tweak the code to use it, but it’s pretty straightforward.

on February 17, 2018 03:23 AM

February 16, 2018

February 2008, Canonical's office in Lexington, MA
10 years ago today, I joined Canonical, on the very earliest version of the Ubuntu Server Team!

And in the decade since, I've had the tremendous privilege to work with so many amazing people, and the opportunity to contribute so much open source software to the Ubuntu ecosystem.

Marking the occasion, I've reflected on much of my work over that time period and thought I'd put down in writing a few of the things I'm most proud of (in chronological order)...  Maybe one day, my daughters will read this and think their daddy was a real geek :-)

1. update-motd / motd.ubuntu.com (September 2008)

Throughout the history of UNIX, the "message of the day" was always manually edited and updated by the local system administrator.  Until Ubuntu's message-of-the-day.  In fact, I received an email from Dennis Ritchie and Jon "maddog" Hall, confirming this, in April 2010.  This started as a feature request for the Landscape team, but has turned out to be tremendously useful and informative to all Ubuntu users.  Just last year, we launched motd.ubuntu.com, which provides even more dynamic information about important security vulnerabilities and general news from the Ubuntu ecosystem.  Mathias Gug helped me with the design and publication.
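For the curious, the mechanism behind it is simple: pam_motd runs the executable scripts in /etc/update-motd.d/ at login and concatenates their output. A minimal, purely hypothetical local addition might look like this:

$ cat /etc/update-motd.d/99-local
#!/bin/sh
# Print a site-specific reminder at every login
echo "Reminder: maintenance window Saturday 02:00 UTC"
$ sudo chmod +x /etc/update-motd.d/99-local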

2. manpages.ubuntu.com (September 2008)

This was the first public open source project I worked on, in my spare time at Canonical.  I had a local copy of the Ubuntu archive and I was thinking about what sorts of automated jobs I could run on it.  So I wrote some scripts that extracted the manpages out of each one, formatted them as HTML, and published them into a structured set of web directories.  10 years later, it's still up and running, serving thousands of hits per day.  In fact, this was one of the ways we were able to shrink the Ubuntu minimal image, by removing the manpages, since they're readable online.  Colin Watson and Kees Cook helped me with the initial implementation, and Matthew Nuzum helped with the CSS and Ubuntu theme in the HTML.

3. Byobu (December 2008)

If you know me at all, you know my passion for the command line UI/UX that is "Byobu".  Byobu was born as the "screen-profiles" project, over lunch at Google in Mountain View, in December of 2008, at the Ubuntu Developer Summit.  Around the lunch table, several of us (including Nick Barcet, Dave Walker, Michael Halcrow, and others), shared our tips and tricks from our own ~/.screenrc configuration files.  In Cape Town, February 2010, at the suggestion of Gustavo Niemeyer, I ported Byobu from Screen to Tmux.  Since Ubuntu Servers don't generally have GUIs, Byobu is designed to be a really nice interface to the Ubuntu command line environment.
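If you haven't tried it, getting started takes under a minute (package name as in the Ubuntu archive):

$ sudo apt install byobu
$ byobu          # start a session (tmux backend by default)
$ byobu-enable   # optionally launch Byobu automatically at every login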

4. eCryptfs / Ubuntu Encrypted Home Directories (October 2009)

I was familiar with eCryptfs from its inception in 2005, in the IBM Linux Technology Center's Security Team, sitting next to Michael Halcrow who was the original author.  When I moved to Canonical, I helped Michael maintain the userspace portion of eCryptfs (ecryptfs-utils) and I shepherded it into Ubuntu.  eCryptfs was super powerful, with hundreds of options and supported configurations, but all of that proved far too difficult for users at large.  So I set out to simplify it drastically, with an opinionated set of basic defaults.  I started with a simple command to mount a "Private" directory inside of your home directory, where you could stash your secrets.  A few months later, on a long flight to Paris, I managed to hack a new PAM module, pam_ecryptfs.c, that actually encrypted your entire home directory!  This was pretty revolutionary at the time -- predating Apple's FileVault or Microsoft's Bitlocker, even.  Today, tens of millions of Ubuntu users have used eCryptfs to secure their personal data.  I worked closely with Tyler Hicks, Kees Cook, Jamie Strandboge, Michael Halcrow, Colin Watson, and Martin Pitt on this project over the years.

5. ssh-import-id (March 2010)

With the explosion of virtual machines and cloud instances in 2009 / 2010, I found myself constantly copying public SSH keys around.  Moreover, given Canonical's globally distributed nature, I also regularly found myself asking someone for their public SSH keys, so that I could give them access to an instance, perhaps for some pair programming or assistance debugging.  As it turns out, everyone I worked with had a Launchpad.net account, and had their public SSH keys available there.  So I created (at first) a simple shell script to securely fetch and install those keys.  Scott Moser helped clean up that earliest implementation.  Eventually, I met Casey Marshall, who helped rewrite it entirely in Python.  Moreover, we contacted the maintainers of Github, and asked them to expose user public SSH keys via the API -- which they did!  Now, ssh-import-id is integrated directly into Ubuntu's new subiquity installer and used by many other tools, such as cloud-init and MAAS.
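Typical usage is a one-liner; the usernames below are placeholders for real Launchpad or GitHub accounts:

$ ssh-import-id lp:your-launchpad-id   # fetch and install keys from Launchpad
$ ssh-import-id gh:your-github-id      # fetch and install keys from GitHub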

6. Orchestra / MAAS (August 2011)

In 2009, Canonical purchased 5 Dell laptops, which was the Ubuntu Server team's first "cloud".  These laptops were our very first lab for deploying and testing Eucalyptus clouds.  I was responsible for those machines at my house for a while, and I automated their installation with PXE, TFTP, DHCP, DNS, and a ton of nasty debian-installer preseed data.  That said -- it worked!  As it turned out, Scott Moser and Mathias Gug had both created similar setups at their houses for the same reason.  I was mentoring a new hire at Canonical, named Andres Rodriguez at the time, and he took over our part-time hacks and we worked together to create the Orchestra project.  Orchestra itself was short lived.  It was severely limited by Cobbler as a foundation technology.  So the Orchestra project was killed by Canonical.  But, six months later, a new project was created, based on the same general concept -- physical machine provisioning at scale -- with an entire squad of engineers led by...Andres Rodriguez :-)  MAAS today is easily one of the most important projects in the Ubuntu ecosystem and one of the most successful products in Canonical's portfolio.

7. pollinate / pollen / entropy.ubuntu.com (February 2014)

In 2013, I set out to secure Ubuntu at large from a set of attacks stemming from insufficient entropy at first boot.  This was especially problematic in virtual machine instances, in public clouds, where every instance is, by design, exactly identical to many others.  Moreover, the first thing that instance does, is usually ... generate SSH keys.  This isn't hypothetical -- it's quite real.  Raspberry Pi's running Debian were deemed susceptible to this exact problem in November 2015.  So I designed and implemented a client (a shell script that runs at boot, and fetches some entropy from one to many sources), as well as a high-performance server (golang).  The client is the 'pollinate' script, which runs on the first boot of every Ubuntu server, and the server is the cluster of physical machines processing hundreds of requests per minute at entropy.ubuntu.com.  Many people helped review the design and implementation, including Kees Cook, Jamie Strandboge, Seth Arnold, Tyler Hicks, James Troup, Scott Moser, Steve Langasek, Gustavo Niemeyer, and others.

8. The Orange Box (May 2014)

In December of 2011, in my regular 1:1 with my manager, Mark Shuttleworth, I told him about these new "Intel NUCs", which I had bought and placed around my house.  I had 3, each of which was running Ubuntu, and attached to a TV around the house, as a media player (music, videos, pictures, etc).  In their spare time, though, they were OpenStack Nova nodes, capable of running a couple of virtual machines.  Mark immediately asked, "How many of those could you fit into a suitcase?"  Within 24 hours, Mark had reached out to the good folks at TranquilPC and introduced me to my new mission -- designing the Orange Box.  I worked with the Tranquil folks through Christmas, and we took our first delivery of 5 of these boxes in January of 2014.  Each chassis held 10 little Intel NUC servers, and a switch, as well as a few peripherals.  Effectively, it's a small data center that travels.  We spent the next 4 months working on the hardware under wraps and then unveiled them at the OpenStack Summit in Atlanta in May 2014.  We've gone through a couple of iterations on the hardware and software over the last 4 years, and these machines continue to deliver tremendous value, from live demos on the booth, to customer workshops on premises, or simply accelerating our own developer productivity by "shipping them a lab in a suitcase".  I worked extensively with Dan Poler on this project, over the course of a couple of years.

9. Hollywood (December 2014)

Perhaps the highlight of my professional career came in October of 2016.  Watching Saturday Night Live with my wife Kim, we were laughing at a skit that poked fun at another of my favorite shows, Mr. Robot.  On the computer screen behind the main character, I clearly spotted Hollywood!  Hollywood is just a silly, fun little project I created on a plane one day, mostly to amuse Kim.  But now, it's been used in Saturday Night Live, NBC Dateline News, and an Experian TV commercial!  Even Jess Frazelle created a Docker container for it.

10. petname / golang-petname / python-petname (January 2015)

From "warty warthog" to "bionic beaver", we've always had a focus on fun, and user experience here in Ubuntu.  How hard is it to talk to your colleague about your Amazon EC2 instance, "i-83ab39f93e"?  Or your container "adfxkenw"?  We set out to make something a little more user-friendly with our "petnames".  Petnames are randomly generated "adjective-animal" names, which are easy to pronounce, spell, and remember.  I curated and created libraries that are easily usable in Shell, Golang, and Python.  With the help of colleagues like Stephane Graber and Andres Rodriguez, we now use these in many places in the Ubuntu ecosystem, such as LXD and MAAS.

If you've read this post, thank you for indulging me in a nostalgic little trip down memory lane!  I've had an amazing time designing, implementing, creating, and innovating with some of the most amazing people in the entire technology industry.  And here's to a productive, fun future!

Cheers,
:-Dustin
on February 16, 2018 05:12 PM

We’re on our way to the 18.04 LTS release and it’s time for another community wallpaper contest!

How to participate?

For a chance to win, submit your entry at contest.xubuntu.org.

Important dates

  • Start of submissions: Immediately
  • Submission deadline: March 15th, 2018
  • Announcement of selections: Late March

All dates are in UTC.

Contest terms

All submissions must adhere to the Terms and Guidelines, including specifics about subject matter, image resolution and attribution.

After the submission deadline, the Xubuntu team will pick 6 winners from all submissions for inclusion on the Xubuntu 18.04 ISO; the winning wallpapers will also be available to users of other Xubuntu versions via the xubuntu-community-wallpaper package. The winners will also receive some Xubuntu stickers.

Any questions?

Please join #xubuntu-devel on Freenode for assistance or email the Xubuntu developer mailing list if you have any problems with your submission.

on February 16, 2018 04:17 PM

If you’ve spent any time in the Snapcraft forum, it’s quite likely you’ve come across Dan Llewellyn, a keen community advocate and self-proclaimed Snapcrafter. Dan has always had a passion for computing and is completely self-taught. Outside of the community, Dan is a freelance WordPress developer. After getting into the open source world around 1998, he switched between various Linux distros including SuSE, Red Hat and Gentoo before settling on Ubuntu from the 5.04 release onwards. A longtime participant in the UK Ubuntu chatroom, where he met Canonical’s Alan Pope, Dan admits he was never that active before Snapcraft came along.

It was spending time in the UK chatroom around 2016 that he discovered snaps, which piqued his interest. “I saw the movement of changing Clicks to snaps and thought it was an interesting idea. It’s more widely focused than a mobile app delivery system and I’ve always liked things that also worked on the server, IoT and elsewhere,” Dan comments. With a previous desire to get into mobile app development, and seeing the move away from Ubuntu Touch, Dan was eager to see Snapcraft succeed and felt it was something he could contribute to.

Being such a regular presence in the Snapcraft forum, Dan is close to the most commonly covered topics and the trends that are forming. “Aside from the basics like ‘how do I build a snap?’, there are lots of discussions around documentation. Games are always in popular demand, as entertainment is often one of the first things adopted,” adds Dan. Other observations include a natural weighting towards desktop snaps, although he notes Nextcloud is a nice example of a server snap. Audacity and a non-beta version of the picture editing tool Gimp are among the most demanded, as well as a Git snap which Dan is working on himself. Outside of Ubuntu users, Arch is the most vocal alternative Linux distro present in the forum.

Dan is also in the perfect position to identify, and gather opinion on, what’s good and what could be good additions to the snap world. Initially, the idea of a transactional update system outside the standard distro release process was what appealed to him personally. Allowing snaps to define their own interfaces on a per-snap basis, along the lines of the existing non-auto-connected interfaces where store admins vote on whether a connection is acceptable, would be an interesting addition to consider, according to Dan. Arbitrary interfaces would let snap developers move faster than the current fixed interfaces allow, by defining their own confinement rules, while the store admin vote requirement would retain security by denying inappropriate land-grab interfaces.

Dan believes people can come to the format with pre-conceived ideas which tend not to be accurate. “We need to show it is easy, although there are always steps to make it easier. The Canonical team are aware that documentation improvements could be made,” says Dan. The onboarding process has seen improvement in Dan’s eyes, highlighted by the Google Summer of Code and Google Code-in programmes where dozens of participants quickly got snaps out the door.

With that in mind, what is the main piece of advice he would give to someone investigating snaps for the first time? “If you’re running Ubuntu, then running ‘snap install chromium’ is an easy start and gets you a full web browser that works out of the box,” Dan states.
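For anyone following that advice, a first session might look like this (the commands below are the standard ones shipped with snapd):

$ sudo snap install chromium
$ snap list          # show installed snaps, their versions and revisions
$ sudo snap refresh  # pull updates for all installed snaps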

The last few weeks have seen some significant snap additions, with Spotify, Skype and Slack all joining the store, which sets the snaps ecosystem up nicely for the year ahead. Dan highlights the importance of these three: “Attracting these big corporations will light the way for others. They’ve clearly seen snaps can enable them to target as much of the Linux eco-system in one go as possible. As well as showcasing that a commercial product is viable as a snap, it could also encourage some that haven’t bothered with Linux in the past to reconsider.”

The store presence is key to the adoption and discoverability of all snaps. Dan has seen the Spotify snap go from strength to strength following the promotion capabilities in the GNOME software centre – while noting it’s just as important for smaller developers who may not have the same resources. Another subtle benefit is the simple fact that people like stores, as they provide a sense of gatekeeping that prevents ‘nasty stuff’ appearing.

Dan further embedded himself within the Snapcraft world by joining the recent Snapcraft Summit in Seattle. Despite being one of the most regular contributors in the Snapcraft community, this week-long event helped Dan understand other use cases for snaps that he hadn’t anticipated. Summing up the event, Dan concludes, “It was great to get to know everyone in one room face to face. I’d love to see more of them occur. To be invited, I feel valued, I really do.”

on February 16, 2018 10:00 AM

Damage Control Report

Stephen Michael Kellat

In no particular order:

  • There was another "partial government shutdown" of the federal government of the United States of America last Thursday. As a federal civil servant, I still rated an "essential-excepted" designation which required working without pay until the end of the crisis. President Trump could have solved the matter if anybody could have rousted him from bed at 0940Z on February 9th. That didn't happen. We had a "technical" shutdown that lasted two hours at the start of the working day with resolution at roughly 1300Z on February 9th. A good chunk of staff "technically" did not bother to show up for duty when it was required and escaped any consequences.
  • Except for the Department of Defense, the remainder of the federal government of the United States of America remains without full-year appropriations for Fiscal Year 2018 which started on October 1, 2017. Appropriations are set to lapse once again on March 22, 2018. I've been given provisional approval for a vacation day on March 23rd but if we have another government shutdown that would be revoked and I would have to report to duty as "essential-excepted" personnel. Under current command guidance that designation lapses as of 0400Z on April 18, 2018. Chances remain pretty high this will happen again.
  • Donations are always accepted via PayPal although they are totally not tax-deductible. I've been trying to broaden the scope of the Domestic Mission Field Activity at West Avenue Church of Christ a bit. One area of interest is moving beyond just the outreach to one of the local nursing homes where we've been the main spiritual link for some of the residents for the past several months regardless of the denomination they're normally part of. Fortunately I'm not alone in conducting the Activity's functions.
  • I'm open to considering proposed transitions from the federal civil service and the data on LinkedIn is probably a good starting point if anybody wants to talk. My current job puts me at the forefront of seeing broken and shattered lives while I try to both protect the federal government's financial interests and also help meet the needs of callers. A change is needed. There is a limit to how much misery and suffering you end up seeing that you cannot help alleviate.
  • The house is still standing. We haven't lost anything due to wintry weather. With luck we'll be able to move the VHF/UHF aerial, currently mounted inside the garage on the underside of the roof, up onto the top of the garage roof.
  • Being away for 12 hours per day for work plus commute time leaves little time for Xubuntu let alone Ubuntu MATE unless I give up sleeping. This long of a commute is a problem.
  • I am looking at edX MicroMasters as ways to jumpstart picking up the second graduate degree to be able to teach at the community college level. Beyond that, there is a program from Bowling Green State University as well as one at Thomas Edison State University in New Jersey and something at the Holden University Center if I am not feeling daring. I have one earned master's so an organized program leading to an accredited award from a US institution bearing at least 18 semester hours of postgraduate-level credit is the minimum sought.

Things are looking up. This year has gotten off to a rocky start.

on February 16, 2018 03:39 AM

February 14, 2018

I would like to thank Brian Mullan for sharing his notes on getting X2Go to work on LXD containers. Prior to this, I had never used X2Go. You would typically use LXD (pronounced LexDee) containers to run server software. However, you can also run GUI apps. If the LXD containers are located on your desktop …

Continue reading

on February 14, 2018 09:19 PM

After the initial release of Plasma 5.12 was made available for Artful 17.10 via our backports PPA last week, we are pleased to say that the PPA has now been updated to the first bugfix release, 5.12.1.

The full changelog for 5.12.1 can be found here; it includes fixes and polish for Discover and the desktop.

Also included is an update to the latest KDE Frameworks 5.43.

Upgrade instructions and caveats are as per last week’s blog post, which can be found here.
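For convenience, the usual outline (assuming the standard Kubuntu backports PPA; see last week’s post for the full caveats) is:

$ sudo add-apt-repository ppa:kubuntu-ppa/backports
$ sudo apt update
$ sudo apt full-upgrade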

The Kubuntu team wishes users a happy experience with the excellent 5.12 LTS desktop, and thanks the KDE/Plasma team for such a wonderful desktop to package.

on February 14, 2018 08:35 PM

Hello MAASters!

I’m happy to announce that MAAS 2.4.0 alpha 1 and python-libmaas 0.6.0 have now been released and are available for Ubuntu Bionic.
MAAS Availability
MAAS 2.4.0 alpha 1 is available in the Bionic -proposed archive or in the following PPA:
ppa:maas/next
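For example, installing the alpha from the PPA on Bionic follows the standard steps:

$ sudo add-apt-repository ppa:maas/next
$ sudo apt update
$ sudo apt install maas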
 
Python-libmaas Availability
Libmaas is available in the Ubuntu Bionic archive or you can download the source from:

MAAS 2.4.0 (alpha1)

Important announcements

Dependency on tgt (iSCSI) has now been dropped

Starting from MAAS 2.3, the way MAAS runs ephemeral environments and performs deployments was changed away from using iSCSI. Instead, we introduced the ability to do the same using a squashfs image. With that, we completely removed the requirement for tgt, but we didn’t drop the dependency in 2.3. As of 2.4, however, tgt has now been completely removed.

Dependency on apache2 has now been dropped in the debian packages

Starting from MAAS 2.0, MAAS made the UI available on port 5240 and deprecated the use of port 80. However, as a mechanism to avoid breaking users upgrading from the previous LTS, MAAS continued to have apache2 as a dependency to provide a reverse proxy allowing users to connect via port 80.

However, the MAAS snap changed that behavior, no longer providing access to MAAS via port 80. In order to keep MAAS consistent with the snap, starting from MAAS 2.4 the debian package no longer depends on apache2 to provide a reverse proxy capability on port 80.

Python libmaas (0.6.0) now available in the Ubuntu Archive

I’m happy to announce that the new MAAS Client Library is now available in the Ubuntu Archives for Bionic. Libmaas is an asyncio based client library that provides a nice interface to interact with MAAS. More details below.

New Features & Improvements

Machine Locking

MAAS now adds the ability to lock machines, which prevents users from performing actions that could change a machine’s state. This gives MAAS a mechanism to prevent potentially catastrophic actions. For example, it will prevent mistakenly powering off machines or mistakenly releasing machines that could bring workloads down.

Audit logging

MAAS 2.4 now allows administrators to audit users’ actions, with the introduction of audit logging. The audit logs are available to administrators via the MAAS CLI/API, giving administrators a centralized location to access these logs.

Documentation is in the process of being published. For raw access please refer to the following link:

https://github.com/CanonicalLtd/maas-docs/pull/766/commits/eb05fb5efa42ba850446a21ca0d55cf34ced2f5d

Commissioning Harness – Supporting firmware upgrade and hardware specific scripts

The commissioning harness has been expanded with various improvements to help administrators write their own firmware upgrade and hardware-specific scripts. These improvements address various challenges administrators face when performing such tasks at scale. The improvements include:

  • Ability to auto-select all the firmware upgrade/storage hardware changes (API only, UI will be available soon)

  • Ability to run scripts only for the hardware they are intended to run on.

  • Ability to reboot the machine while in the commissioning environment without disrupting the commissioning process.

This allows administrators to:

  • Create a hardware-specific script by declaring which machines it needs to run on, specifying the hardware-specific PCI ID, modalias, vendor or model of the machine or device.

  • Create firmware upgrade scripts that require a reboot before the machine finishes the commissioning process, by allowing to describe this in the script’s metadata.

  • Define where the script can obtain proprietary firmware and/or proprietary tools to perform any of the operations required.

Minor improvements – Gather information about BIOS & firmware

MAAS now gathers more information about the underlying system, such as the Model, Serial, BIOS and firmware information of a machine (where available). It also gathers the information for storage devices as well as network interfaces.

MAAS Client Library (python-libmaas)

New upstream release – 0.6.0

A new upstream release is now available in the Ubuntu Archive for Bionic. The new release includes the following changes:

  • Add/read/update/delete storage devices attached to machines.

  • Configure partitions and mount points

  • Configure Bcache

  • Configure RAID

  • Configure LVM

Known issues & work arounds

LP: #1748712  – 2.4.0a1 upgrade failed with old node event data

It has been reported that an upgrade to MAAS 2.4.0a1 failed due to having old data from a non-existent node stored in the database. This could have been due to an older devel version of MAAS, which would have left an entry in the node event table. A workaround is provided in the bug report.

If you hit this issue, please update the bug report immediately so the MAAS developers can investigate.

Bug fixes

Please refer to the following for all bug fixes in this release.

https://launchpad.net/maas/+milestone/2.4.0alpha1

on February 14, 2018 07:44 PM

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers, the more the software released adheres to their perfect model – no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because they’re so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSbuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

on February 14, 2018 11:21 AM

With full GTK+ 2 and 3 support and numerous enhancements, Exo 0.12.0 provides a solid development base for new and refreshed Xfce applications.

What’s New?

Since this is the first stable release in nearly 2.5 years, I am going to provide a quick summary of the changes since version 0.10.7, released September 13, 2015.

New Features

GTK Extensions
Helpers
  • WebBrowser: Added Brave, Google Chrome, and Vivaldi
  • MailReader: Added Geary, dropped Opera Mail (no longer available for Linux)
Utilities
  • exo-csource: Added a new --output flag to write the generated output to a file
  • exo-helper: Added a new --query flag to determine the preferred application

Icons

  • Replaced non-standard gnome-* icons
  • Replaced non-existent “missing-image” icon

Build Changes

  • Build requirements were updated. Exo now requires GTK+ 2.24, GTK+ 3.22, GLib 2.42, libxfce4ui 4.12, and libxfce4util 4.12. Building GTK+ 3 libraries is not optional.
  • Default debug setting is now “yes” instead of “full”.

Documentation Updates

  • Added missing per-release API indices
  • Resolved undocumented symbols (100% symbol coverage)
  • Updated project documentation (HACKING, README, THANKS)

Release Notes

  • The full release notes can be found here.
  • The full change log can be found here.

Downloads

The latest version of Exo can always be downloaded from the Xfce archives. Grab version 0.12.0 from the below link.

https://archive.xfce.org/src/xfce/exo/0.12/exo-0.12.0.tar.bz2

  • SHA-256: 64b88271a37d0ec7dca062c7bc61ca323116f7855092ac39698c421a2f30a18f
  • SHA-1: 364a9aaa1724b99fe33f46b93969d98e990e9a1f
  • MD5: 724afcca224f5fb22b510926d2740e52
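As a quick sanity check after downloading, the tarball can be verified against the published SHA-256 (sha256sum -c expects two spaces between the hash and the filename):

$ wget https://archive.xfce.org/src/xfce/exo/0.12/exo-0.12.0.tar.bz2
$ echo "64b88271a37d0ec7dca062c7bc61ca323116f7855092ac39698c421a2f30a18f  exo-0.12.0.tar.bz2" | sha256sum -c
exo-0.12.0.tar.bz2: OK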
on February 14, 2018 11:05 AM
The Penetration Testing with Kali Linux (PWK) course is one of the most popular information security courses, culminating in a hands-on exam for the Offensive Security Certified Professional certification. It provides a hands-on learning experience for those looking to get into penetration testing or other areas of offensive security. These are some of the things you might want to know before attempting the PWK class or the OSCP exam.

Read more...

on February 14, 2018 08:00 AM

February 13, 2018

with an SMS text warning two minutes before interruption, using CloudWatch Events Rules And SNS

The EC2 Spot instance marketplace has had a number of enhancements in the last couple months that have made it more attractive for more use cases. Improvements include:

  • You can run an instance like you normally do for on-demand instances and add one option to make it a Spot instance! The instance starts up immediately if your bid price is sufficient given spot market conditions, and will generally cost much less than on-demand.

  • Spot price volatility has been significantly reduced. Spot prices are now based on long-term trends in supply and demand instead of hour-to-hour bidding wars. This means that instances are much less likely to be interrupted because of short-term spikes in Spot prices, leading to much longer running instances on average.

  • You no longer have to specify a bid price. The Spot Request will default to the instance type’s on-demand price in that region. This saves looking up pricing information and is a reasonable default if you are using Spot to save money over on-demand.

  • CloudWatch Events can now send a two-minute warning before a Spot instance is interrupted, through email, text, AWS Lambda, and more.

Putting these all together makes it easy to take instances you formerly ran on-demand and add an option to turn them into new Spot instances. They are much less likely to be interrupted than with the old spot market, and you can save a little to a lot in hourly costs, depending on the instance type, region, and availability zone.

Plus, you can get a warning a couple minutes before the instance is interrupted, giving you a chance to save work or launch an alternative. This warning could be handled by code (e.g., AWS Lambda) but this article is going to show how to get the warning by email and by SMS text message to your phone.

WARNING!

You should not run a Spot instance unless you can withstand having the instance stopped for a while from time to time.

Make sure you can easily start a replacement instance if the Spot instance is stopped or terminated. This probably includes regularly storing important data outside of the Spot instance (e.g., S3).

You cannot currently re-start a stopped or hibernated Spot instance manually, though the Spot market may re-start it automatically if you configured it with interruption behavior “stop” (or “hibernate”) and if the Spot price comes back down below your max bid.

If you can live with these conditions and risks, then perhaps give this approach a try.

Start An EC2 Instance With A Spot Request

An aws-cli command to launch an EC2 instance can be turned into a Spot Request by adding a single parameter: --instance-market-options ...

The option parameters we will use do not specify a max bid, so it defaults to the on-demand price for the instance type in the region. We specify “stop” and “persistent” so that the instance will be restarted automatically if it is interrupted temporarily by a rising Spot market price that then comes back down.

Adjust the following options to suit. The important part for this example is the instance market options.

ami_id=ami-c62eaabe # Ubuntu 16.04 LTS Xenial HVM EBS us-west-2 (as of post date)
region=us-west-2
instance_type=t2.small
instance_market_options="MarketType='spot',SpotOptions={InstanceInterruptionBehavior='stop',SpotInstanceType='persistent'}"
instance_name="Temporary Demo $(date +'%Y-%m-%d %H:%M')"

instance_id=$(aws ec2 run-instances \
  --region "$region" \
  --instance-type "$instance_type" \
  --image-id "$ami_id" \
  --instance-market-options "$instance_market_options" \
  --tag-specifications \
    'ResourceType=instance,Tags=[{Key="Name",Value="'"$instance_name"'"}]' \
  --output text \
  --query 'Instances[*].InstanceId')
echo instance_id=$instance_id

Other options can be added as desired. For example, specify an ssh key for the instance with an option like:

  --key-name "$USER"

and a user-data script with:

  --user-data file:///path/to/user-data-script.sh

If there is capacity, the instance will launch immediately and be available quickly. It can be used like any other instance that is launched outside of the Spot market. However, this instance has the risk of being stopped, so make sure you are prepared for this.

The next section presents a way to get the early warning before the instance is interrupted.

CloudWatch Events Two-Minute Warning For Spot Interruption

As mentioned above, Amazon recently released a feature where CloudWatch Events will send a two-minute warning before a Spot instance is interrupted. This section shows how to get that warning sent to an email address and/or SMS text to a phone number.

Create an SNS topic to receive Spot instance activity notices:

sns_topic_name=spot-activity

sns_topic_arn=$(aws sns create-topic \
  --region "$region" \
  --name "$sns_topic_name" \
  --output text \
  --query 'TopicArn'
)
echo sns_topic_arn=$sns_topic_arn

Subscribe an email address to the SNS topic:

email_address="YOUR@EMAIL.ADDRESS"

aws sns subscribe \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --protocol email \
  --notification-endpoint "$email_address"

IMPORTANT! Go to your email inbox now and click the link to confirm that you want to subscribe that email address to the SNS topic.
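You can check whether the subscription has been confirmed; a subscription ARN of “PendingConfirmation” means the link has not been clicked yet:

aws sns list-subscriptions-by-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --output text \
  --query 'Subscriptions[].[Protocol,Endpoint,SubscriptionArn]'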

Subscribe an SMS phone number to the SNS topic:

phone_number="+1-999-555-1234" # Your phone number

aws sns subscribe \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --protocol sms \
  --notification-endpoint "$phone_number"

Grant CloudWatch Events permission to post to the SNS topic:

aws sns set-topic-attributes \
  --region "$region" \
  --topic-arn "$sns_topic_arn" \
  --attribute-name Policy \
  --attribute-value '{
    "Version": "2008-10-17",
    "Id": "cloudwatch-events-publish-to-sns-'"$sns_topic_name"'",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {
        "Service": "events.amazonaws.com"
      },
      "Action": [ "SNS:Publish" ],
      "Resource": "'"$sns_topic_arn"'"
    }]
  }'

Create a CloudWatch Events Rule that filters for Spot instance interruption warnings for this specific instance:

rule_name_interrupted="ec2-spot-interruption-$instance_id"
rule_description_interrupted="EC2 Spot instance $instance_id interrupted"

event_pattern_interrupted='{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EC2 Spot Instance Interruption Warning"
  ],
  "detail": {
    "instance-id": [ "'"$instance_id"'" ]
  }
}'

aws events put-rule \
  --region "$region" \
  --name "$rule_name_interrupted" \
  --description "$rule_description_interrupted" \
  --event-pattern "$event_pattern_interrupted" \
  --state "ENABLED"

Set the target of the CloudWatch Events rule to the SNS topic, using an input transformer to make sensible text for an English reader:

sns_target_interrupted='[{
  "Id": "target-sns-'"$sns_topic_name"'",
  "Arn": "'"$sns_topic_arn"'",
  "InputTransformer": {
    "InputPathsMap": {
      "title": "$.detail-type",
      "source": "$.source",
      "account": "$.account",
      "time": "$.time",
      "region": "$.region",
      "instance": "$.detail.instance-id",
      "action": "$.detail.instance-action"
    },
    "InputTemplate":
      "\"<title>: <source> will <action> <instance> ('"$instance_name"') in <region> of <account> at <time>\""
  }
}]'

aws events put-targets \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --targets "$sns_target_interrupted"

Here’s a sample message for the two-minute interruption warning:

“EC2 Spot Instance Interruption Warning: aws.ec2 will stop i-0f47ef25380f78480 (Temporary Demo) in us-west-2 of 121287063412 at 2018-02-11T08:56:26Z”

Bonus: CloudWatch Events Alerts For State Changes

In addition to the two-minute interruption alert, we can send ourselves messages when the instance is actually stopped, when it is started again, and when it is running. This is done with a slightly different CloudWatch Events event pattern and input transformer, but follows basically the same approach.

Create a CloudWatch Events Rule that filters for state-change notifications for this specific instance:

rule_name_state="ec2-instance-state-change-$instance_id"
rule_description_state="EC2 instance $instance_id state change"

event_pattern_state='{
  "source": [
    "aws.ec2"
  ],
  "detail-type": [
    "EC2 Instance State-change Notification"
  ],
  "detail": {
    "instance-id": [ "'"$instance_id"'" ]
  }
}'

aws events put-rule \
  --region "$region" \
  --name "$rule_name_state" \
  --description "$rule_description_state" \
  --event-pattern "$event_pattern_state" \
  --state "ENABLED"

And again, set the target of the new CloudWatch Events rule to the same SNS topic using another input transformer:

sns_target_state='[{
  "Id": "target-sns-'"$sns_topic_name"'",
  "Arn": "'"$sns_topic_arn"'",
  "InputTransformer": {
    "InputPathsMap": {
      "title": "$.detail-type",
      "source": "$.source",
      "account": "$.account",
      "time": "$.time",
      "region": "$.region",
      "instance": "$.detail.instance-id",
      "state": "$.detail.state"
    },
    "InputTemplate":
      "\"<title>: <source> reports <instance> ('"$instance_name"') is now <state> in <region> of <account> as of <time>\""
  }
}]'

aws events put-targets \
  --region "$region" \
  --rule "$rule_name_state" \
  --targets "$sns_target_state"

Here are a couple of sample messages for the instance state change notification:

“EC2 Instance State-change Notification: aws.ec2 reports i-0f47ef25380f78480 (Temporary Demo) is now stopping in us-west-2 of 121287063412 as of 2018-02-11T08:58:29Z”

“EC2 Instance State-change Notification: aws.ec2 reports i-0f47ef25380f78480 (Temporary Demo) is now stopped in us-west-2 of 121287063412 as of 2018-02-11T08:58:47Z”

Cleanup

If we terminate the EC2 Spot instance, the persistent Spot Request will restart a replacement instance. To terminate it permanently, we need to first cancel the Spot Request:

spot_request_id=$(aws ec2 describe-instances \
  --region "$region" \
  --instance-id "$instance_id" \
  --output text \
  --query 'Reservations[].Instances[].[SpotInstanceRequestId]')
echo spot_request_id=$spot_request_id

aws ec2 cancel-spot-instance-requests \
  --region "$region" \
  --spot-instance-request-ids "$spot_request_id"

Then terminate the EC2 instance:

aws ec2 terminate-instances \
  --region "$region" \
  --instance-ids "$instance_id" \
  --output text \
  --query 'TerminatingInstances[*].[InstanceId,CurrentState.Name]'

Remove the targets from the CloudWatch Events “interrupted” rule and delete the CloudWatch Events Rule:

target_ids_interrupted=$(aws events list-targets-by-rule \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --output text \
  --query 'Targets[*].[Id]')
echo target_ids_interrupted='"'$target_ids_interrupted'"'

aws events remove-targets \
  --region "$region" \
  --rule "$rule_name_interrupted" \
  --ids $target_ids_interrupted

aws events delete-rule \
  --region "$region" \
  --name "$rule_name_interrupted"

Remove the targets from the CloudWatch Events “state” rule (if you created those) and delete the CloudWatch Events Rule:

target_ids_state=$(aws events list-targets-by-rule \
  --region "$region" \
  --rule "$rule_name_state" \
  --output text \
  --query 'Targets[*].[Id]')
echo target_ids_state='"'$target_ids_state'"'

aws events remove-targets \
  --region "$region" \
  --rule "$rule_name_state" \
  --ids $target_ids_state

aws events delete-rule \
  --region "$region" \
  --name "$rule_name_state"

Delete the SNS Topic:

aws sns delete-topic \
  --region "$region" \
  --topic-arn "$sns_topic_arn"

Original article and comments: https://alestic.com/2018/02/ec2-spot-cloudwatch-events-sns/

on February 13, 2018 08:00 AM

February 12, 2018

Three month wrap-up

Ubuntu LoCo Council

The new LoCo Council has been a little lax with updating this blog. It’s admittedly taken us a little bit of time to figure out what exactly we’re doing, but we seem to be on our feet now. I’d like to rectify the blog issue by wrapping up the first three months of our reign in a summary post to get us back on track.

December 2017

This was the first month of the new council, and our monthly meeting took place on the 11th. We had a number of LoCo verification applications to review.

ArizonaTeam

Arizona had a strong application, with lots of activity and an ambitious roadmap for the coming year. They had multiple members in attendance, but no questions were necessary before a unanimous vote for re-verification.

MyanmarTeam

This one was more difficult. Their application listed the most recent event to be in 2016, although with some digging it looked like they might have had activity in 2017 as well. Unfortunately, they had no members in attendance to answer our questions, so we voted unanimously to provisionally extend their status for two months in order to give them a little more time to get their application in order.

VenezuelaTeam

This was probably the quickest re-verification in history. Their application was comprehensive, with an incredible number of activities over the last several years. Their re-verification was unanimously granted.

TunisianTeam

This one seemed to have an up-to-date application, but none of the supporting documentation seemed up-to-date, and no members were in attendance. We again voted for a two-month extension.

PortugalTeam

Portugal had several team members in attendance, and their application was impressive. They even split events into those that they organized, and those in which they participated (but did not organize) because the lists were too long to manage. They were unanimously re-verified.

SwissTeam

Their application was still in draft form, and they had no one in attendance. We again provisionally extended two months.

January

Our January meeting took place on the 8th, and our agenda included two LoCos that were provisionally extended in December.

TunisianTeam

This time, Tunisia had members in attendance. Their application was similar to the one we reviewed in December, but this time they were there to explain that they actually have nearly 300 wiki pages that previous leadership had created, and they were in the midst of pruning them. They’re also working very hard to grow membership. After some discussion, we agreed that they seemed to have a solid plan and good leadership, so we unanimously voted to re-verify.

MyanmarTeam

Once again, Myanmar had no members in attendance, and their application timestamp was the same as when we reviewed in December. As a result, we decided to skip reviewing the application and wait for February.

February

Our February meeting took place today, on the 12th. Our agenda included two LoCos that were provisionally extended in December.

MyanmarTeam

This time, Myanmar had some members in attendance. However, the timestamp of their application still hadn’t changed since the December review. Fortunately, members were there to answer our questions. They explained that there was activity, but it hadn’t made it to the application. They promised to update the application if we extended for one more month, which we did. This was not unanimous, however.

SwissTeam

Their application was no longer in draft form, but we still had a number of questions about their application. In an email to the Council, their leadership requested that we have our discussion in Launchpad since they couldn’t make the meeting. We obliged, and provisionally extended their status for one month.

on February 12, 2018 10:18 PM

GNOME 3.28 has reached its 3.27.90 milestone. This milestone is important because it means that GNOME is now at API Freeze, Feature Freeze, and UI Freeze. From this point on, GNOME shouldn’t change much, but that’s good because it allows distros, translators, and documentation writers to prepare for the 3.28 release. It also gives time to ensure that new features are working correctly and that as many important bugs as possible are fixed. GNOME 3.28 will be released in approximately one month.

If you haven’t read my last 3.28 post, please read it now. So what else has changed in Tweaks this release cycle?

Desktop

As has been widely discussed, Nautilus itself will no longer manage desktop icons in GNOME 3.28. The intention is for this to be handled in a GNOME Shell extension. Therefore, I had to drop the desktop-related tweaks from GNOME Tweaks since the old methods don’t work.

If your Linux distro will be keeping Nautilus 3.26 a bit longer (like Ubuntu), it’s pretty easy for distro maintainers to re-enable the desktop panel so you’ll still get all the other 3.28 features without losing the convenient desktop tweaks.

As part of this change, the Background tweaks have been moved from the Desktop panel to the Appearance panel.

Touchpad

Historically, laptop touchpads had two or three physical hardware buttons just like mice. Nowadays, it’s common for touchpads to have no buttons. At least on Windows, the historical convention was that a click in the bottom left would be treated as a left mouse button click, and a click in the bottom right would be treated as a right mouse button click.

Macs are a bit different in handling right click (or secondary click as it’s also called). To get a right-click on a Mac, just click with two fingers simultaneously. You don’t have to worry about whether you are clicking in the bottom right of the touchpad, so things should work a bit better once you get used to it. This approach is now used on some Windows computers as well.

My understanding is that GNOME used Windows-style “area” mouse-click emulation on most computers, but there was a manually updated list of computers where the Mac style “fingers” mouse-click emulation was used.

In GNOME 3.28, the default is now the Mac style for everyone. For the past few years, you could change the default behavior in the GNOME Tweaks app, but I’ve redesigned the section now to make it easier to use and understand. I assume there will be some people who prefer the old behavior so we want to make it easy for them!

GNOME Tweaks 3.27.90 Mouse Click Emulation

For more screenshots (before and after), see the GitLab issue.
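For the curious, here is a hedged sketch (my addition, not from the original post) of what I believe is the underlying GSettings key for this choice, which can also be flipped from a terminal:

# Mac-style two-finger right click
gsettings set org.gnome.desktop.peripherals.touchpad click-method fingers
# Windows-style bottom-right-corner right click
gsettings set org.gnome.desktop.peripherals.touchpad click-method areas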

Other

There is one more feature pending for Tweaks 3.28, but it’s incomplete so I’m not going to discuss it here yet. I’ll be sure to link to a blog post about it when it’s ready though.

For more details about what’s changed, see the NEWS file or the commit log.

on February 12, 2018 05:35 PM

A question: how long is it reasonable for an ISV to keep releasing software for an older distribution? When is it fair for them to say “look, we can’t feasibly support this old thing any more”?

For example, Debian 7 is still considered supported, via the Debian LTS project. Should ISV app vendors keep producing builds for Debian 7, with its ancient versions of GCC and CMake, rudimentary C++11 support, ARM64 bugs, etc.? How long is it fair to expect an ISV to keep spitting out builds on top of obsolete toolchains?

Let’s take Mono as an example, since, well, that’s what I’m paid to care about. Right now, we do builds for:

  • Debian 7 (oldoldstable, supported until May 2018)
  • Debian 8 (oldstable, supported until April 2020)
  • Debian 9 (stable, supported until June 2022)
  • Raspbian 8 (oldstable, supported until June 2018)
  • Raspbian 9 (stable, supported until June 2020)
  • Ubuntu 12.04 (EOL unless you pay big bucks to Canonical – but was used by TravisCI long after it was EOL)
  • Ubuntu 14.04 (LTS, supported until April 2019)
  • Ubuntu 16.04 (LTS, supported until April 2021)
  • CentOS 6 (LTS, supported until November 2020)
  • CentOS 7 (LTS, supported until June 2024)

Supporting just these is a problem already. CentOS 6 builds lack support for TLS 1.2+, as that requires GCC 4.7+ – but I can’t just drop it, since Amazon Linux (used by a surprising number of people on AWS) is based on CentOS 6. Ubuntu 12.04 support requires build-dependencies on a secret Mozilla-team maintained copy of GCC 4.7 in the archive, used to keep building Firefox releases.

Why not just use the CDN analytics to form my opinion? Well, it seems most people didn’t update their sources.list after we switched to producing per-distribution binaries some time around May 2017 – so they’re still hardcoding wheezy in their sources. And I can’t go by user agent to determine their OS, as Azure CDN helpfully aggregates all of them into “Debian APT-HTTP/1.x” rather than giving me the exact version numbers I’d need to cross-reference to determine OS release.

So, with the next set of releases coming on the horizon (e.g. Ubuntu 18.04), at what point is it okay to say “no more, sorry” to an old version?

Answers on a postcard. Or the blog comments. Or Twitter. Or Gitter.

on February 12, 2018 03:55 PM

February 10, 2018

Red Team: How to Succeed By Thinking Like the Enemy by Micah Zenko focuses on the role that red teaming plays in a variety of institutions, ranging from the Department of Defense to cybersecurity. It’s an excellent book that describes the thought process behind red teaming, when red teaming is a success and when it can be a failure, and the way a red team can best fit into an organization and provide value. If you’re looking for a book that’s highly technical or focused entirely on information security engineering, this book may disappoint. There’s only a single chapter covering the application of red teaming in the information security space (particularly “vulnerability probes” as Zenko refers to many of the tests), but that doesn’t make the rest of the content any less useful – or interesting – to the Red Team practitioner.

Read more...

on February 10, 2018 08:00 AM

February 09, 2018

Took a year off…

Daniel Holbach

Since many of you reached out to me in the past weeks to find out if I was still travelling the world and how things were going, I thought I’d reconnect with the online world and write a blog post again.

After a bit more than a year, my sabbatical is coming to an end now. I had a lot of time to reflect, recharge batteries, be curious again, travel and make new experiences.

In December ’16 I fled the winter in Germany and went to Ecuador. Curiosity was my guidebook, I slowed down, let nature sink in, enjoyed the food and hospitality of the country, met many simply beautiful people along the way, learned some Spanish, went scuba diving with hammerhead sharks and manta rays, sat on top of mountains, hiked, listened to stories from village elders in Kichwa around the fire, went paragliding, camped in the jungle with Shuar people, befriended a macaw in a hippie village and got inspired by many long conversations.

As always when I’m travelling, my list of recommended next destinations grew and I could easily have gone on. After some weeks, I decided to get back to Berlin though and venture new paths there.

When I first got involved in Ubuntu, I was finishing my studies in Computer Sciences. Last March, thirteen years later, I felt the urge to study again. To open myself up to new challenges, learn entirely new skills, exercise different parts of the brain and make way for a possible new career path in the future. I felt quite uncertain; I wasn’t sure if I was crazy to attempt it, but I was going to try. I went back to square one and started training as a psychotherapist. This was, and still is, an incredibly exciting step for me and has been a very rewarding experience so far.

I wasn’t just looking for a new intellectual exercise – I was also looking for a way to work more closely with people. Although it’s quite different from what I did up until now, this decision still was very consistent with my beliefs, passions and personality in general. Supporting another human being on their path, helping to bring out their potential and working out new perspectives together have always deeply attracted me.

I had the privilege of learning about and witnessing the work of great therapists, counsellors and trainers in seminars, workshops, books, talks and groups, so I had some guidance which supported me and I chose body psychotherapy as the method I wanted to learn. It is part of the humanistic psychotherapy movement and at its core are (among others) the following ideals:

  • All people are inherently good.
  • People are driven towards self-actualisation: development of creativity, free will, and positive human potential.
  • It is based on present-tense experience as the main reference point.
  • It encourages self-awareness and mindfulness.
  • Wikipedia quotes an article, which describes the benefits as having a "crucial opportunity to lead our troubled culture back to its own healthy path. More than any other therapy, Humanistic-Existential therapy models democracy. It imposes ideologies of others upon the client less than other therapeutic practices. Freedom to choose is maximized."

If you know me even a little bit, you can probably tell that all of this very much resonated with me. In a way, it’s what led me to the Ubuntu project in 2004 – there is a lot of “humanity towards others” and “I am what I am because of who we all are” in there.

Body psychotherapy was also specifically interesting to me, as it offers a very rich set of interventions and techniques, all experience-based and relying on the wisdom of our body. Furthermore it seeks to reconcile the body and mind split our culture so heavily promotes.

Since last March I immersed myself in this new world: took classes, read books, attended a congress and workshops and had quite a bit of self-experience. In November I took the required exams and became “Heilpraktiker für Psychotherapie”. I’m going to start the actual training in body psychotherapy this year in March. As this is still going to take several years, I’m not exactly sure when or how I will start working in this field. While it’s still quite some time off and right now only an option for some time in the future, I know that this process will encourage me to become more mindful, patient, empathic and a better listener, colleague, partner and friend.

Does this mean I’m going to leave the tech world? No, absolutely not. I’ll leave my next steps in this domain to another blog post, though.

I feel very privileged having been able to take the time and embark on this adventure and add a new dimension to my coordinate system. All of this wouldn’t have been possible without close people around me who supported and encouraged me. I’m very grateful for this and feel quite lucky.

This has been a very exciting year, a very important experience and I’m very much looking forward to what’s yet to come.

on February 09, 2018 09:59 AM

February 08, 2018

In musicians circles, the Fractal Audio Systems Axe FX range of products has become one of the most highly regarded product lines. Aside from just being a neat product, what is interesting to me is the relationship they have built with their community and value they have created in the product via sustained software updates.

As a little background, the Axe FX and their other AX8/FX8 floor-board products are hardware units that replicate in software the characteristics of an analog tube guitar amplifier and speaker cabinets. Now, for years there have been companies (e.g. Line6, IK Multimedia) trying to create software replications of popular Marshall, Mesa Boogie, Ampeg, Peavey, Fender, and other amp tones, the idea being that you can spend far less on the software and have a wide range of amps to choose from as well. This not only saves on physical space and dollars, but also simplifies recording these amps as you won’t need to plug in a physical microphone – you just play direct through the software. Sadly, the results have largely been disappointing. Most sound like fizzy, cheap knockoffs.

The Axe FX II

While this may be a little strange to grok for the non-musicians reading this, there isn’t just a tonality assessment to determine if the amp simulator sounds like the real thing; there is also a feel element. Tube amps feel different to play. They have tonal characteristics that adjust as you dial in different settings, and one of the tricky elements for amp simulators to solve is that analog tubes respond as you use them; the tone adjusts in subtle ways depending on what you play, how you play it, which power supply you are using, how you dial in the amp, and more.

The Axe FX changed much of this. While many saw it initially as just another amp simulator, it has evolved to a point where in A/B testing it is virtually indistinguishable tonally from the amps it is modelling, and the feel is very much there too. This is why acts such as Metallica, U2, Periphery, and Steve Vai carry them on tour: they can accomplish the same tonal and feel results without the big, unreliable, and complex-to-maintain tube amps.

Sustained Software Updates

The reason why this has been such a game changer is that Cliff Chase, founder of Fractal Audio Systems, has taken a borderline obsessive approach to detail in building this amp/speaker modelling and creating a small team to deliver it.

Cliff Chase, head honcho at Fractal Audio Systems (middle).

From a technology perspective, this is interesting for a few reasons.

Firstly, Fractal have been fairly open about how their technology has evolved. They published a whitepaper on their MIMIC technology, and you can see the release notes, some further technical details, and a collection of technical posts by Cliff on the forum.

What I found particularly interesting here was Fractal have consistently delivered these improvements via repeated firmware updates out to existing devices. As an example, the MIMIC technology I mentioned above was a major breakthrough in their technology and really (no pun intended) amped up the simulation quality, but it was delivered as a free firmware update to existing hardware.

Now, many organizations would have seen such a technologically important and significant product iteration software update as an opportunity to either release a new hardware product or sell a new line of firmware at a cost. Fractal didn’t do this and have stuck to their philosophy that when you buy their hardware, it is “future proofed” with firmware updates for years to come.

This is true. As an example, the Axe FX II was released in May 2011 and has received 20+ firmware updates which have significantly improved the quality of the product.

In a technology culture where companies release new-feature software updates for a limited period of time (often 2 – 3 years) and then move firmly into maintenance/security updates for a stated “product life” (often 4 – 7 years), Fractal Audio Systems are bucking this trend significantly.

Community

This regular stream of firmware updates that bring additional value, not just security/compatibility fixes, is notable for a few reasons.

Firstly, it has significantly expanded the lifespan and market impact of these devices. Musicians and producers can be a curmudgeonly bunch, and it can take a while for a product to take hold. This is particularly true in a world where “purism” of the art of creating and producing music, and of the tools you use, would ordinarily reject any kind of simulated equipment. The Axe FX has become a staple in touring and production rigs because of its constant evolution and improvements.

Tones can be shaped using the Axe Edit desktop client.

Secondly, from a consumer perspective, there is something so satisfying about purchasing a hardware product that consistently improves. Psychologically, we are used to software evolving (in either good or bad directions), but hardware has more of a “cast in stone” psychological impression in many of us. We buy it, it provides a function, and we don’t expect it to change much. In the case of the Fractal Audio Systems hardware, it does change, and this provides that all important goal companies focus on: customer delight.

Thirdly, and most interestingly for me, Fractal Audio Systems have fostered a phenomenally devoted, positive, and supportive community. From a community strategy perspective, they have not done anything particularly special: they have a forum, a wiki, and members of the Fractal Audio Systems team post periodically in the forum. They have the usual social media accounts and they release videos on YouTube. This devotion in the community is not from any community engagement fakery…it is from (a) a solid product, and (b) a company who they feel isn’t bullshitting them.

This latter element, the bullshit factor, is key. When I work with my clients I always emphasize the importance of authenticity in the relationship between a company and their community of users/customers. This doesn’t mean pandering to the community and the critics, it means an honest exchange of ideas and discussion in which the company and the community/users can derive equal levels of value out of the relationship.

In my observation of the Fractal Audio Systems community, they have done just this. Cliff Chase, as the supreme leader at Fractal Audio Systems is revered in the community as a mastermind, a reputation that is rightly earned. He is an active participant with the community, sharing his input both on the musical use of his products as well as the technology that has gone into them. He isn’t a CEO who is propped up on conference stages or bouncing from journalist to journalist merely talking about vision, he is knee-deep, sleeves rolled fully-up, working on improvements that then get rolled out…freely…to an excitable community of users.

This puts the community in a valuable position. They become the logical feedback loop (again, no pun intended) for how well the products and firmware updates are working, and while the community can’t participate in improving the products directly (as they don’t have access to the code or in many cases, the skills to contribute) they get to see the fruits of their feedback in these firmware updates.

This serves two important benefits. Firstly, validation is an enormous force in what we do. Everyone, no matter who you are, needs validation of their input and ideas. When the community share feedback that is then validated by Cliff and co., and then rolled out in a freely available firmware update that benefits everyone, this is a deeply satisfying experience. Secondly, in many communities there is a suspicion about providing value (such as feedback or other technical contributions) to a company if only the company benefits from this (e.g. by selling a new product encompassing that feedback). Given that Fractal Audio Systems pushes out these updates freely, it largely eradicates that issue.

In Conclusion

Everything I have outlined here could be construed as a master plan on behalf of the folks at Fractal Audio Systems. I don’t think this is the case. I don’t believe that when Cliff Chase founded the company he laid all of this out as a grand plan for how to build community and customer engagement.

This goes back to purity. My guess is that Cliff and team just wanted to build a solid product that makes their customers happy and providing this regular stream of updates was the most obvious way to do it. It wouldn’t surprise me if they themselves were surprised by how much goodwill would be generated throughout this process.

This is all paving the way to the next iteration of this journey: the Axe FX III, announced last week. It provides significantly greater horsepower, undoubtedly to usher in the next era of improvements. This is a journey I will be following along with when I get an Axe FX III of my own in March.

The post Case Study: Building Product, Community, and, Sustainability at Fractal Audio Systems appeared first on Jono Bacon.

on February 08, 2018 08:26 PM

Sorry Henry

Stuart Langridge

I think I found a bug in a Henry Dudeney book.

Dudeney was a really famous puzzle creator in Victorian/Edwardian times. For Americans: Sam Loyd was sort of an American knock-off of Dudeney, except that Loyd stole half his puzzles from other people and HD didn’t. Dudeney got so annoyed by this theft that he eventually ended up comparing Loyd to the Devil, which was tough talk in 1910.

Anyway, he wrote a number of puzzle books, and at least some are available on Project Gutenberg, so well done the PG people. If you like puzzles, maths or thinking sorts, then there are a few good collections (and there are nicer to read versions at the Internet Archive too). The Canterbury Puzzles is his most famous work, but I’ve been reading Amusements in Mathematics. In there he presents the following puzzle:

81.—THE NINE COUNTERS.

158 × 23        79 × 46

I have nine counters, each bearing one of the nine digits, 1, 2, 3, 4, 5, 6, 7, 8 and 9. I arranged them on the table in two groups, as shown in the illustration, so as to form two multiplication sums, and found that both sums gave the same product. You will find that 158 multiplied by 23 is 3,634, and that 79 multiplied by 46 is also 3,634. Now, the puzzle I propose is to rearrange the counters so as to get as large a product as possible. What is the best way of placing them? Remember both groups must multiply to the same amount, and there must be three counters multiplied by two in one case, and two multiplied by two counters in the other, just as at present.

81. ANSWER

In this case a certain amount of mere “trial” is unavoidable. But there are two kinds of “trials”—those that are purely haphazard, and those that are methodical. The true puzzle lover is never satisfied with mere haphazard trials. The reader will find that by just reversing the figures in 23 and 46 (making the multipliers 32 and 64) both products will be 5,056. This is an improvement, but it is not the correct answer. We can get as large a product as 5,568 if we multiply 174 by 32 and 96 by 58, but this solution is not to be found without the exercise of some judgment and patience.


But, you know what? I don’t think he’s right. Now, I appreciate that he probably had to spend hours or days trying out possibilities with a piece of paper and a fountain pen, and I just wrote the following 15 lines of Python in five minutes, but hey, he didn’t have to bear with his government trying to ban encryption, so let’s call it even.

from itertools import permutations
nums = [1,2,3,4,5,6,7,8,9]
values = []
for p in permutations(nums, 9):
    one   = p[0]*100 + p[1]*10 + p[2]
    two   = p[3]*10 + p[4]
    three = p[5]*10 + p[6]
    four  = p[7]*10 + p[8]
    if four > three: continue # or we'll see fg*hi and hi*fg as different
    if one*two == three*four:
        expression = "%s*%s = %s*%s = %s" % (
            one, two, three, four, one*two)
        values.append((expression, one*two))
values.sort(key=lambda x:x[1])
print("Solution for 1-9")
print("\n".join([x[0] for x in values]))

The key point here is this: the little programme above indeed recognises his proposed solutions (158*32 = 79*64 = 5056 and 174*32 = 96*58 = 5568) but it also finds two larger ones: 584*12 = 96*73 = 7008 and 532*14 = 98*76 = 7448. Did I miss something about the puzzle? Or am I actually in the rare position of finding an error in a Dudeney book? And all it took was seventy years of computer technology advancement to put me in that position. Maths, eh? Tch.

It’s an interesting book. There are lots of money puzzles, in which I have to carefully remember that ha’pennies and farthings are a thing (a farthing is a quarter of a penny), there are 12 pennies in a shilling, and twenty shillings in a pound. There’s some rather racist portrayals of comic-opera Chinese characters in a few of the puzzles. And my heart sank when I read a puzzle about husbands and wives crossing a river in a boat, where no man would permit his wife to be in the boat with another man without him, because I assumed that the solution would also say something like “and of course the women cannot be expected to row the boat”, and I was then pleasantly surprised to discover that this was not the case and indeed they were described as probably being capable oarswomen and it was likely their boat to begin with! Writings from another time. But still as good as any puzzle book today, if not better.

on February 08, 2018 06:34 PM

A Decade of Plasma

Jonathan Riddell

I realised that it’s now a decade since KDE first released its Plasma desktop.  The KDE 4 release event was in January 2008.  Google were kind enough to give us their office space and smoothies and hot tubs, to give some talks and plan a way forward.

The KDE 4 release has gained something of a poor reputation. At the time we still shipped Kubuntu with KDE 3 and made a separate unsupported release for Plasma, but I remember it being perfectly usable and notable for being the foundation that would keep KDE software alive.  It had been clear for some time that Kicker and the other elements of the KDE 3 desktop were functional but unlikely to gain much going forward.  When Qt 4 was announced back at (I’m pretty sure) 2004 Akademy in Ludwigsburg, it was seen as a chance to bring KDE’s desktop back up to date and leap forward.  It took 4 long years, and to keep community momentum going we had to release even if we did say it would eat your babies.

2008-02-kde4-release-event-kubuntu

Kubuntu at KDE 4 release event

Somewhere along the way it felt like KDE’s desktop lost mindshare with major distros going with other desktops and the rise of lightweight desktops.  But KDE’s software always had the best technological underpinnings with Qt and then QtQuick plus the move to modularise kdelibs into many KDE Frameworks.

This week we released Plasma 5.12 LTS and what a fabulous reception we are getting.  The combination of simple and familiar by default, but customisable and functional, is making many people realise what an offering we now have with Plasma. When we tried Plasma on an ARM laptop recently, we realised it used less memory than the “lightweight” Linux desktop that laptop came with pre-installed.  Qt being optimised for embedded use means KDE’s offerings are fast: whether you’re experimenting with Plasma Mobile or using the very latest KDE Slimbook II, it’ll run smoothly and fast.

Some quotes from this week:

“Plasma, as tested on KDE neon specifically, is almost perfect” Ask Noah Show

“This is the real deal.. I’m going all in on this.. ” Linux Unplugged

“Become a Plasma Puppy”

Elite Ubuntu community spod Alan Pope tried to install KDE neon in aeroplane mode (it failed because of a bug which we have since fixed, thanks for the poke).

Chris Fisher takes the Plasma Desktop Challenge; I can’t wait to find out what he says next week.

On the Reddit Plasma 5.12 post:

“KDE plasma is literally worlds ahead of anything I’ve ever seen. It’s one project where I felt I had to donate to let them know I loved it!”
“I’ve switched to Plasma a little over a year ago and have loved it ever since. I’m glad they’re working so hard on it!”
“Yay! Good to see Kickass Desktop Environment get an update!”

Or here’s a random IRC conversation I had today in a LUG channel

<yeehi> Riddell – I adore KDE now!
<yeehi> It is gobsmackingly beautiful
<yeehi> I put in the 12.0 LTS updates yesterday, maybe over a hundered packages, and all the time I was thinking, “Man, I just love those KDE developers!
<yeehi> It is such a pleasure to use and see. Also, I have been finding it to be my most stable GNU+Linux experience

So after a decade of hard work I’m definitely feeling the good vibes this week. Take the Plasma Challenge and be a Plasma Puppy! KDE Plasma is lightweight, functional and rocking your laptop.
on February 08, 2018 05:23 PM
This winter seemed long in many ways, and not just the weather. In life, progress continues during the winter, but it can be slow and hard to see.

Finally though, the snowdrops are up, dogwood tree buds are swelling, and progress is finally apparent in many areas of volunteer life - KDE, Kubuntu, and my genealogy society.


In KDE, Plasma 5.12 has been released, and it is great! It has been released in time to make it into Kubuntu Bionic, our next big release, which will become an LTS. Plasma 5.12 is a great fit there, since it is also an LTS. After living through the early exposure of the Meltdown and Spectre vulnerabilities, it feels great to finally be back on track. We have it available right now in Artful (17.10) as well: https://kubuntu.org/news/plasma-5-12-arrives-in-backport-ppa-for-kubuntu-17-10-artful-aardvark/. I'm using it now.

I'm also using the new KDE browser Falkon, which has not yet been released. I've written to the developers in hopes of a KDE release in time to make it into Bionic.

On the social front, it's great to look forward to Akademy in Vienna this August! I have hopes that many of our Kubuntu team will be able to attend, for the wonderful face-to-face meetings of Akademy. And this year, a special treat for me, since the great Boud and Irina have invited me to stay at their house for the week before Akademy and then make our way together from their home to Vienna by train. This will remove so much of the pain of travel!

Finally, my genealogy society has suffered greatly while Rootsweb was down, but our website is up again at https://skcgs.org and our Facebook presence is undergoing some long-needed maintenance as well. Our Program committee has also been doing fantastic work getting interesting speakers. It's fun to go to meetings, fun to do my work on the newsletter, and fun even to go to board meetings! You can't ask for better than that!

Even in my own genealogy research, Ancestry.com is making it easier than ever to find cousins, and more ancestors. Also looking forward to Google Summer of Code if KDE is accepted as an organization. It will be another very busy year!
on February 08, 2018 12:45 AM

February 06, 2018

Difficult community members are something that every community struggles with from time to time. Whether abundantly obnoxious or merely a minor frustration, designing an environment where a multitude of personalities can work together is complicated and requires careful attention to detail.

This is something I was asked about on a recent interview on the Late Night Linux podcast. We dug into this and also discussed how to build communities in different environments, whether communities can work for proprietary platforms/devices, and how unique the open source world is within the wider context of communities.

It was a fun interview and well worth a listen.

LISTEN TO THE EPISODE HERE

The post Dealing With Difficult Community Members (Interview on Late Night Linux) appeared first on Jono Bacon.

on February 06, 2018 05:56 PM

Users of Kubuntu 17.10 Artful Aardvark can now update to the newly released Plasma 5.12.0 via our backports PPA.

See the Plasma 5.12 release announcement and the release video below for more about the new features available.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

 

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.12, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.10.5 as included in the original 17.10 Artful release. Likewise, users who already have Plasma 5.11.5 via the backports PPA can disable the PPA temporarily and wait for more updates before upgrading. KDE will release bugfix updates to Plasma 5.12 following a Fibonacci sequence of weeks since the last release (i.e. 1, 1, 2, 3, 5 etc week gaps) detailed in the Plasma release schedule.
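If you do want to disable the PPA temporarily as suggested above, one way (a standard apt command, shown here for convenience) is:

# Remove the PPA entry from your software sources; re-add it later
# with the add-apt-repository command shown earlier.
sudo add-apt-repository --remove ppa:kubuntu-ppa/backports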

Other upgrade notes:

~ The Kubuntu backports PPA includes various other backported applications, and KDE Frameworks 5.42, so please be aware that enabling the backports PPA for the 1st time and doing a full upgrade will result in a substantial amount of upgraded packages in addition to Plasma 5.12.

~ The PPA will also continue to receive bugfix updates to Plasma 5.12 when they become available, and further updated KDE applications.

~ While we believe that these packages represent a beneficial and stable update, please bear in mind that they have not been tested as comprehensively as those in the main Ubuntu archive, and are supported only on a limited and informal basis. Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on February 06, 2018 03:46 PM

Ubuntu Snowsports & Friends Team

Dimitri John Ledkov

Ubuntu Snowsports and Friends Team

After talking to a bunch of people, I've realized that a lot of free & open source and Debian/Ubuntu people ski or snowboard. So I have this crazy idea that maybe we can get enough people together to form a social team on Launchpad.

And maybe if we have enough people there, to possibly try to organize a ski trip with or without conference talks. Kind of like a team building meetup / community event / UDS - Ubuntu Developer Snowsports trip, or maybe an Ubucon Snow.

So here we go - please consider joining https://launchpad.net/~ubuntu-snowsports team, join the mailing list there, and/or hop onto IRC to join #ubuntu-snow on freenode.

I hope we can get more members than https://launchpad.net/~ubuntu-cyclists
on February 06, 2018 03:25 PM

With improved support for Budgie, KDE, and MATE desktop environments, MenuLibre 2.1.5 continues to provide one of the best menu editing experiences for the Linux desktop.

What’s New?

New Features

  • Added support for the Budgie and KDE Plasma desktop environments
  • Improved support for the MATE desktop environment (LP: #1529406)
  • Window identification for the StartupWMClass key

General

  • Added manpage for the recently added menulibre-menu-validate command

Bug Fixes

  • Fix icon used when creating new directory (LP: #1744594)
  • Use ‘applications-other’ instead of ‘application-default-icon’ for better icon standards support (LP: #1745840)
  • Ensure categories are saved in the model when updated (LP: #1746802)
  • Fix incorrect display of newly created directories

Desktop Environment Support

MenuLibre is a FreeDesktop.org compliant menu editor for desktop environments implementing the Desktop Entry Specification. Some desktops are improperly configured and do not export the expected variables, and patches are included to infer the running environment in other ways. Some older desktops, such as IceWM, do not implement this specification and handle their menus in other ways.
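If you want to check what your own session exports, the environment variables that menu-spec tools typically consult can be inspected from a terminal (a quick sanity check; MenuLibre's actual detection logic may consider more than this):

# XDG_CURRENT_DESKTOP identifies the running desktop environment;
# XDG_MENU_PREFIX selects which applications menu file is used.
echo "XDG_CURRENT_DESKTOP=$XDG_CURRENT_DESKTOP"
echo "XDG_MENU_PREFIX=$XDG_MENU_PREFIX"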

MenuLibre has been tested with and known to work with the following desktop environments: Budgie, GNOME, KDE (Plasma), LXDE, LXQt (limited support, LXQt does not allow for non-alphabetical menu ordering), MATE, Pantheon, Unity, and Xfce. It is known not to work with IceWM and others that do not implement the Desktop Entry Specification.

If you come across an environment that should be supported but does not work as expected, let me know! It may require some additional patches to properly detect the environment and menu prefix.

Development Status

With this release, MenuLibre 2.1 is now in feature and string freeze for the 2.2.x series. I’m hoping for a stable 2.2.0 release sometime this month. This means two things.

  1. Translators, now it’s your time to shine! There’s been quite a few changes in the past few releases and it looks like some localizations could use a bit of a refresh. Make your way over to the MenuLibre Translations page to get started or pick up where you left off. 🙂
  2. Everyone else, take MenuLibre for a spin, and report bugs! If you are able to conclude that one of the existing bug reports has actually been resolved, leave a comment on the bug report so we can clean it off the list. Check out the MenuLibre Bugs page for more.

Window Identification Demo

Downloads

The latest version of MenuLibre can always be downloaded from the Launchpad archives. Grab version 2.1.5 from the below link. Debian Unstable and Ubuntu Bionic users should expect to see this latest version land in the archives sometime this week.

https://launchpad.net/menulibre/2.1/2.1.5/+download/menulibre-2.1.5.tar.gz

  • SHA-256: ef05b2722bab2acb7070d6c8ed0e7bd58bd4a4540bf498af9e889944f9da08b5
  • SHA-1: e380478a369a3a45eafc6bb9408366bc41972d16
  • MD5: efc7edb49bb0e5fea49e158b40573334
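To verify a download against the SHA-256 sum above, pipe the expected hash and filename to sha256sum (note the two spaces between them):

echo "ef05b2722bab2acb7070d6c8ed0e7bd58bd4a4540bf498af9e889944f9da08b5  menulibre-2.1.5.tar.gz" | sha256sum -c -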
on February 06, 2018 11:14 AM

February 04, 2018

You're still in time to submit a talk, workshop, stand or podcast for the next Ubucon!!


Main room. With no edits ;) Just checking things in situ for April

We're working hard on the next Ubucon Europe 2018 and we would like to share the current status:

  • Official webpage updated. 
  • There are special discounts for your bus and train travel, and for hotels. More info here.
  • The talks will be free of charge. 
  • Saturday's social event: it will be a traditional espicha. If you are coming, you need to pay for that dinner in advance as soon as possible, because places are limited! More info here.
  • You can follow the last news here: Telegram, Twitter, Google + & Facebook.
  • We'll publish the complete schedule soon.
on February 04, 2018 12:36 PM

February 03, 2018

stress-ng V0.09.15

Colin King

It has been a while since my last post about stress-ng so I thought it would be useful to provide an update on the changes since V0.08.09.

I have been focusing on making stress-ng more portable so it can build with various versions of clang and gcc as well as run against a wide range of kernels.   The portability shims and config detection added to stress-ng allow it to build and run on a wide range of Linux systems, as well as GNU/HURD, Minix, Debian kFreeBSD, various BSD systems, OpenIndiana and OS X.

Enabling stress-ng to work on a wide range of architectures and kernels with a range of compiler versions has helped me to find and fix various corner case bugs.  Also, static analysis with various tools has helped to drive up the code quality. As ever, I thoroughly recommend using static analysis tools on any project to find bugs.

Since V0.08.09 I've added the following stressors:
  • inode-flags - exercise the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls (see ioctl_iflags(2) for more details)
  • sockdiag - exercise the Linux sock_diag netlink socket diagnostics
  • branch - exercise branch prediction
  • swap - exercise adding and removing variously sized swap partitions
  • ioport - exercise I/O port read/writes to try and cause CPU I/O bus delays
  • hrtimers - high resolution timer stressor
  • physpage - exercise the lookup of a physical page address and page count of a virtual page
  • mmapaddr - mmap pages to randomly unused VM addresses and exercise mincore and segfault handling
  • funccall - exercise function calling with a range of function argument types and sizes, for benchmarking stack/CPU/cache and compiler.
  • tree - BSD tree (red/black and splay) stressor, good for exercising memory/cache
  • rawdev - exercise raw block device I/O reads
  • revio - reverse file offset random writes, causes lots of fragmentation and hence many file extents
  • mmap-fixed - stress fixed address mmaps, with a wide range of VM addresses
  • enosys - exercise a wide range of random system call numbers that are not wired up, hence generating ENOSYS errors
  • sigpipe - stress SIGPIPE signal generation and handling
  • vm-addr - exercise a wide range of VM addresses for fixed address mmaps with thorough address bit patterns stressing
Stress-ng has nearly 200 stressors and many of these have various stress methods that can be selected to perform specific stress testing.  These are all documented in the manual.  I've also updated the stress-ng project page with various links to academic papers and presentations that have used stress-ng in various ways to stress computer systems.  It is useful to find out how stress-ng is being used so that I can shape this tool in the future.
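As a quick illustration (not from the original post, and assuming stress-ng's usual convention that each stressor is exposed as a command line option of the same name), a couple of the new stressors can be run together like this:

# Run 2 instances each of the branch and hrtimers stressors for 60
# seconds, then print a brief per-stressor metrics summary.
stress-ng --branch 2 --hrtimers 2 --timeout 60s --metrics-brief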

As ever, patches for fixes and improvements are always appreciated.  Keep on stressing!
on February 03, 2018 05:28 PM

Carla Sella


on February 03, 2018 01:50 PM