February 24, 2017

Lubuntu Zesty Zapus Beta 1 (soon to be 17.04) has been released! We have a couple papercuts listed in the release notes, so please take a look. A big thanks to the whole Lubuntu team and contributors for helping pull this release together. You can grab the images from here: http://cdimage.ubuntu.com/lubuntu/releases/zesty/beta-1/
on February 24, 2017 03:12 AM

The first beta of the Zesty Zapus (to become 17.04) has now been released!

This milestone features images for Kubuntu, Lubuntu, Ubuntu Budgie, Ubuntu GNOME, Ubuntu Kylin, Ubuntu Studio, and Xubuntu.

Pre-releases of the Zesty Zapus are not encouraged for anyone needing a stable system or anyone who is not comfortable running into occasional, even frequent breakage. They are, however, recommended for Ubuntu flavor developers and those who want to help in testing, reporting, and fixing bugs as we work towards getting this release ready.

Beta 1 includes a number of software updates that are ready for wider testing. This is still an early set of images, so you should expect some bugs.

While these Beta 1 images have been tested and work, except as noted in the release notes, Ubuntu developers are continuing to improve the Zesty Zapus. In particular, once newer daily images are available, system installation bugs identified in the Beta 1 installer should be verified against the current daily image before being reported in Launchpad. Using an obsolete image to re-report bugs that have already been fixed wastes your time and the time of developers who are busy trying to make 17.04 the best Ubuntu release yet. Always ensure your system is up to date before reporting bugs.

Kubuntu

Kubuntu is the KDE based flavor of Ubuntu. It uses the Plasma desktop and includes a wide selection of tools from the KDE project.

The Kubuntu 17.04 Beta 1 images can be downloaded from:

More information about Kubuntu 17.04 Beta 1 can be found here:

Lubuntu

Lubuntu is a flavor of Ubuntu based on LXDE and focused on providing a very lightweight distribution.

The Lubuntu 17.04 Beta 1 images can be downloaded from:

More information about Lubuntu 17.04 Beta 1 can be found here:

Ubuntu Budgie

Ubuntu Budgie is a flavor of Ubuntu featuring the Budgie desktop environment.

The Ubuntu Budgie 17.04 Beta 1 images can be downloaded from:

More information about Ubuntu Budgie 17.04 Beta 1 can be found here:

Ubuntu GNOME

Ubuntu GNOME is a flavor of Ubuntu featuring the GNOME desktop environment.

The Ubuntu GNOME 17.04 Beta 1 images can be downloaded from:

More information about Ubuntu GNOME 17.04 Beta 1 can be found here:

Ubuntu Kylin

Ubuntu Kylin is a flavor of Ubuntu that is more suitable for Chinese users.

The Ubuntu Kylin 17.04 Beta 1 images can be downloaded from:

More information about Ubuntu Kylin 17.04 Beta 1 can be found here:

Ubuntu Studio

Ubuntu Studio is a flavor of Ubuntu configured for multimedia production.

The Ubuntu Studio 17.04 Beta 1 images can be downloaded from:

More information about Ubuntu Studio 17.04 Beta 1 can be found here:

Xubuntu

Xubuntu is a flavor of Ubuntu based on the Xfce desktop environment.

The Xubuntu 17.04 Beta 1 images can be downloaded from:

More information about Xubuntu 17.04 Beta 1 can be found here:

If you’re interested in following the changes as we further develop the Zesty Zapus, we suggest that you subscribe to the ubuntu-devel-announce list. This is a low-traffic list (a few posts a month or less) carrying announcements of approved specifications, policy changes, alpha releases, and other interesting events.

A big thank you to the developers and testers for their efforts to pull together this Beta release!

Originally posted to the ubuntu-devel-announce mailing list on Fri Feb 24 00:34:19 UTC 2017 by Simon Quigley on behalf of the Ubuntu Release Team

on February 24, 2017 02:19 AM


Yesterday, I delivered a talk to a lively audience at ContainerWorld in Santa Clara, California.

If I measured "the most interesting slides" by counting "the number of people who took a picture of the slide", then by far "the most interesting slides" are slides 8-11, which pose and answer the question:
"Should I run my PaaS on top of my IaaS, or my IaaS on top of my PaaS"?
In the Ubuntu world, that answer is super easy -- however you like!  At Canonical, we're happy to support:
  1. Kubernetes running on top of Ubuntu OpenStack
  2. OpenStack running on top of Canonical Kubernetes
  3. Kubernetes running alongside OpenStack
In all cases, the underlying substrate is perfectly consistent:
  • you've got 1 to N physical or virtual machines
  • which are dynamically provisioned by MAAS or your cloud provider
  • running a stable, minimal, secure Ubuntu server image
  • carved up into fast, efficient, independently addressable LXD machine containers
With that as your base, we can easily conjure-up a Kubernetes, an OpenStack, or both.  And once you have a Kubernetes or OpenStack, we'll gladly conjure-up one inside the other.


As always, I'm happy to share my slides with you here.  You're welcome to download the PDF, or flip through the embedded slides below.



Cheers,
Dustin
on February 24, 2017 12:42 AM

February 23, 2017

Introducing the Canonical Livepatch Service
Howdy!

Ubuntu 16.04 LTS’s 4.4 Linux kernel includes an important new security capability in Ubuntu -- the ability to modify the running Linux kernel code, without rebooting, through a mechanism called kernel livepatch.

Today, Canonical has publicly launched the Canonical Livepatch Service -- an authenticated, encrypted, signed stream of Linux livepatches that apply to the 64-bit Intel/AMD architecture of the Ubuntu 16.04 LTS (Xenial) Linux 4.4 kernel, addressing the highest and most critical security vulnerabilities, without requiring a reboot in order to take effect.  This is particularly amazing for Container hosts -- Docker, LXD, etc. -- as all of the containers share the same kernel, and thus all instances benefit.



I’ve tried to answer below some questions that you might have. If you have others, you’re welcome
to add them in the comments below or on Twitter with the hashtag #Livepatch.

Retrieve your token from ubuntu.com/livepatch

Q: How do I enable the Canonical Livepatch Service?

A: Three easy steps, on a fully up-to-date 64-bit Ubuntu 16.04 LTS system.
  1. Go to https://ubuntu.com/livepatch and retrieve your livepatch token
  2. Install the canonical-livepatch snap
      $ sudo snap install canonical-livepatch 
  3. Enable the service with your token
      $ sudo canonical-livepatch enable [TOKEN] 
  And you’re done! You can check the status at any time using:

    $ canonical-livepatch status --verbose

      Q: What are the system requirements?

      A: The Canonical Livepatch Service is available for the generic and low latency flavors of the 64-bit Intel/AMD (aka, x86_64, amd64) builds of the Ubuntu 16.04 LTS (Xenial) kernel, which is a Linux 4.4 kernel. Canonical livepatches work on Ubuntu 16.04 LTS Servers and Desktops, on physical machines, virtual machines, and in the cloud. The safety, security, and stability firmly depend on unmodified Ubuntu kernels and network access to the Canonical Livepatch Service (https://livepatch.canonical.com:443).  You will also need to apt update/upgrade to the latest version of snapd (at least 2.15).
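
      For example, you can confirm which snapd you have and pull in the latest one with the standard commands on a stock 16.04 system:

      $ snap version
      $ sudo apt update && sudo apt install snapd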

      Q: What about other architectures?

      A: The upstream Linux livepatch functionality is currently limited to the 64-bit x86 architecture. IBM is working on support for POWER8 and s390x (LinuxOne mainframe), and there’s also active upstream development on ARM64, so we do plan to support these eventually. The livepatch plumbing for 32-bit ARM and 32-bit x86 is not under upstream development at this time.

      Q: What about other flavors?

      A: We are providing the Canonical Livepatch Service for the generic and low latency (telco) flavors of the Linux kernel at this time.

      Q: What about other releases of Ubuntu?

      A: The Canonical Livepatch Service is provided for Ubuntu 16.04 LTS’s Linux 4.4 kernel. Older releases of Ubuntu will not work, because they’re missing the Linux kernel support. Interim releases of Ubuntu (e.g. Ubuntu 16.10) are targeted at developers and early adopters, rather than Long Term Support users or systems that require maximum uptime.  We will consider providing livepatches for the HWE kernels in 2017.

      Q: What about derivatives of Ubuntu?

      A: Canonical livepatches are fully supported on the 64-bit Ubuntu 16.04 LTS Desktop, Cloud, and Server operating systems. On other Ubuntu derivatives, your mileage may vary! These are not part of our automated continuous integration quality assurance testing framework for Canonical Livepatches. Canonical Livepatch safety, security, and stability will firmly depend on unmodified Ubuntu generic kernels and network access to the Canonical Livepatch Service.

      Q: How does Canonical test livepatches?

      A: Every livepatch is rigorously tested in Canonical's in-house CI/CD (Continuous Integration / Continuous Delivery) quality assurance system, which tests hundreds of combinations of livepatches, kernels, hardware, physical machines, and virtual machines.  Once a livepatch passes CI/CD and regression tests, it's rolled out on a canary testing basis, first to a tiny percentage of the Ubuntu Community users of the Canonical Livepatch Service. Based on the success of that microscopic rollout, a moderate rollout follows.  And assuming those also succeed, the livepatch is delivered to all free Ubuntu Community and paid Ubuntu Advantage users of the service.  Systemic failures are automatically detected and raised for inspection by Canonical engineers.  Ubuntu Community users of the Canonical Livepatch Service who want to eliminate the small chance of being randomly chosen as a canary should enroll in the Ubuntu Advantage program (starting at $12/month).

      Q: What kinds of updates will be provided by the Canonical Livepatch Service?

      A: The Canonical Livepatch Service is intended to address high and critical severity Linux kernel security vulnerabilities, as identified by Ubuntu Security Notices and the CVE database. Note that there are some limitations to the kernel livepatch technology -- some Linux kernel code paths cannot be safely patched while running. We will do our best to supply Canonical Livepatches for high and critical vulnerabilities in a timely fashion whenever possible. There may be occasions when the traditional kernel upgrade and reboot might still be necessary. We’ll communicate that clearly through the usual mechanisms -- USNs, Landscape, Desktop Notifications, Byobu, /etc/motd, etc.

      Q: What about non-security bug fixes, stability, performance, or hardware enablement updates?

      A: Canonical will continue to provide Linux kernel updates addressing bugs, stability issues, performance problems, and hardware compatibility on our usual cadence -- about every 3 weeks. These updates can be easily applied using ‘sudo apt update; sudo apt upgrade -y’, using the Desktop “Software Updates” application, or Landscape systems management. These standard (non-security) updates will still require a reboot, as they always have.
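
      For example, the full non-security update cycle on a single machine looks roughly like this (the /var/run/reboot-required flag file is how Ubuntu signals that a reboot is still pending):

      $ sudo apt update && sudo apt upgrade -y
      $ [ -f /var/run/reboot-required ] && echo "reboot needed"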

      Q: Can I rollback a Canonical Livepatch?

      A: Currently rolling-back/removing an already inserted livepatch module is disabled in Linux 4.4. This is because we need a way to determine if we are currently executing inside a patched function before safely removing it. We can, however, safely apply new livepatches on top of each other and even repatch functions over and over.

      Q: What about low and medium severity CVEs?

      A: We’re currently focusing our Canonical Livepatch development and testing resources on high and critical security vulnerabilities, as determined by the Ubuntu Security Team.  We'll livepatch other CVEs opportunistically.

      Q: Why are Canonical Livepatches provided as a subscription service?

      A: The Canonical Livepatch Service provides a secure, encrypted, authenticated connection, to ensure that only properly signed livepatch kernel modules -- and most importantly, the right modules -- are delivered directly to your system, with extremely high quality testing wrapped around it.

      Q: But I don’t want to buy UA support!

      A: You don’t have to! Canonical is providing the Canonical Livepatch Service to community users of Ubuntu, at no charge for up to 3 machines (desktop, server, virtual machines, or cloud instances). A randomly chosen subset of the free users of Canonical Livepatches will receive their Canonical Livepatches slightly earlier than the rest of the free users or UA users, as a lightweight canary testing mechanism, benefiting all Canonical Livepatch users (free and UA). Once those canary livepatches apply safely, all Canonical Livepatch users will receive their live updates.

      Q: But I don’t have an Ubuntu SSO account!

      A: An Ubuntu SSO account is free, and provides services similar to Google, Microsoft, and Apple for Android/Windows/Mac devices, respectively. You can create your Ubuntu SSO account here.

      Q: But I don’t want to log in to ubuntu.com!

      A: You don’t have to! Canonical Livepatch is absolutely not required to maintain the security of any Ubuntu desktop or server! You may continue to freely and anonymously ‘sudo apt update; sudo apt upgrade; sudo reboot’ as often as you like, receive all of the same updates, and simply reboot after kernel updates, as you always have with Ubuntu.

      Q: But I don't have Internet access to livepatch.canonical.com:443!

      A: You should think of the Canonical Livepatch Service much like you think of Netflix, Pandora, or Dropbox.  It's an Internet streaming service for security hotfixes for your kernel.  You have access to the stream of bits when you can connect to the service over the Internet.  On the flip side, your machines are already thoroughly secured, since they're so heavily firewalled off from the rest of the world!

      Q: Where’s the source code?

      A: The source code of livepatch modules can be found here.  The source code of the canonical-livepatch client is part of Canonical's Landscape system management product and is commercial software.

      Q: What about Ubuntu Core?

      A: Canonical Livepatches for Ubuntu Core are on the roadmap, and may be available in late 2016, for 64-bit Intel/AMD architectures. Canonical Livepatches for ARM-based IoT devices depend on upstream support for livepatches.

      Q: How does this compare to Oracle Ksplice, RHEL Live Patching and SUSE Live Patching?

      A: While the concepts are largely the same, the technical implementations and the commercial terms are very different:

      • Oracle Ksplice uses its own technology which is not in upstream Linux.
      • RHEL and SUSE currently use their own homegrown kpatch/kgraft implementations, respectively.
      • Canonical Livepatching uses the upstream Linux Kernel Live Patching technology.
      • Ksplice is free, but unsupported, for Ubuntu Desktops, and only available for Oracle Linux and RHEL servers with an Oracle Linux Premier Support license ($2299/node/year).
      • It’s a little unclear how to subscribe to RHEL Kernel Live Patching, but it appears that you need to first be a RHEL customer, and then enroll in the SIG (Special Interests Group) through your TAM (Technical Account Manager), which requires Red Hat Enterprise Linux Server Premium Subscription at $1299/node/year.  (I'm happy to be corrected and update this post)
      • SUSE Live Patching is available as an add-on to SUSE Linux Enterprise Server 12 Priority Support subscription at $1,499/node/year, but does come with a free music video.
      • Canonical Livepatching is available for every Ubuntu Advantage customer, starting at our entry level UA Essential for $150/node/year, and available for free to community users of Ubuntu.

      Q: What happens if I run into problems/bugs with Canonical Livepatches?

      A: Ubuntu Advantage customers will file a support request at support.canonical.com where it will be serviced according to their UA service level agreement (Essential, Standard, or Advanced). Ubuntu community users will file a bug report on Launchpad and we'll service it on a best effort basis.

      Q: Why does canonical-livepatch client/server have a proprietary license?

      A: The canonical-livepatch client is part of the Landscape family of tools available to Canonical support customers. We are enabling free access to the Canonical Livepatch Service for Ubuntu community users as a mark of our appreciation for the broader Ubuntu community, and in exchange for occasional, automatic canary testing.

      Q: How do I build my own livepatches?

      A: It’s certainly possible for you to build your own Linux kernel live patches, but it requires considerable skill, time, computing power to produce, and even more effort to comprehensively test. Rest assured that this is the real value of using the Canonical Livepatch Service! That said, Chris Arges has blogged a howto for the curious a while back:

      http://chrisarges.net/2015/09/21/livepatch-on-ubuntu.html

      Q: How do I get notifications of which CVEs are livepatched and which are not?

      A: You can, at any time, query the status of the canonical-livepatch daemon using: ‘canonical-livepatch status --verbose’. This command will show any livepatches successfully applied, any outstanding/unapplied livepatches, and any error conditions. Moreover, you can monitor the Ubuntu Security Notices RSS feed and the ubuntu-security-announce mailing list.

      Q: Isn't livepatching just a big ole rootkit?

      A: Canonical Livepatches inject kernel modules to replace sections of binary code in the running kernel. This requires the CAP_SYS_MODULE capability. This is required to modprobe any module into the Linux kernel. If you already have that capability (root does, by default, on Ubuntu), then you already have the ability to arbitrarily modify the kernel, with or without Canonical Livepatches. If you’re an Ubuntu sysadmin and you want to disable module loading (and thereby also disable Canonical Livepatches), simply ‘echo 1 | sudo tee /proc/sys/kernel/modules_disabled’.
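
      For instance, you can check and flip that setting as follows (note that modules_disabled is one-way: once set to 1 it cannot be set back to 0 without a reboot):

      $ cat /proc/sys/kernel/modules_disabled
      $ echo 1 | sudo tee /proc/sys/kernel/modules_disabled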

      Keep the uptime!
      :-Dustin
      on February 23, 2017 11:46 PM

      S10E00 – Cool Skillful Title - Ubuntu Podcast

      Ubuntu Podcast from the UK LoCo

      It’s Season Ten Episode Zero of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

      In your face, we’re back for season 10!

      In this pre-season teaser:

      • We discuss what we’ve been up to since the end of last year:
        • Having curry together somewhere near Oxford.
        • Upgrading a Steambox with a low-profile nvidia GEFORCE GTX 1050 TI 4GT graphics card.
        • Buying and playing games to “test drive” the nvidia GEFORCE GTX 1050 TI 4GT.
        • Getting an LG G Watch to experiment with Asteroid OS.
        • Decorating the house.
        • Fixing an unreliable car.
        • Hacking another car with Carista OBD2 to make it smarter.
        • Making a better fuel gauge with an OBD2 adapter.
        • Echo, echo, echo…
        • Voice activated Philips Hue lights.
      • We explain some changes to the show format for Season 10.
      • This week’s cover image is taken from Wikimedia.

      That’s all for the season teaser. Episode 1 will be out on March 9th! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

      on February 23, 2017 03:00 PM

      • Ubuntu Core and the TS-4900 make for one of the most secure, easy to manage, and flexible i.MX6 powered boards on the market
      • The TS-4900 with Ubuntu Core will be on the Ubuntu Booth in Hall P3 – 3K31 at Mobile World Congress 2017

      London, UK and Fountain Hills, AZ – 23 February 2017: Today, Technologic Systems, Inc. announced that it will be partnering with Canonical to make Ubuntu Core available for their TS-4900 Compute Module. The TS-4900 is a high-performance Computer on Module (CoM) based on the NXP i.MX6 CPU, which implements the ARM® Cortex™-A9 architecture clocked at 1 GHz.

      The TS-4900 is ideal for embedded systems applications, especially those needing wireless connections like industrial IoT gateways. Ubuntu Core is ideal for this environment because of its rich networking and protocol support. In addition, Ubuntu Core offers a secure, reliable, and remotely upgradeable platform to easily update and maintain IoT devices making for a more secure and cost-effective deployment.

      The TS-4900 is available in either single or quad core configurations with up to 2 GB DDR3 RAM. It is designed with connectivity in mind, with WiFi 802.11 b/g/n and Bluetooth 4.0 onboard. Several standard interfaces are supported including Gigabit Ethernet, USB, SATA II, and PCI Express. The TS-4900 is fanless, although a heat sink is recommended for the quad core configuration, and it is rated for an industrial temperature range (-40 °C to 85 °C). In addition, new applications can be simply developed and rolled out across the deployment via snap packages, increasing the utility and value of any IoT deployment.

      Bob Miller, founder of Technologic Systems said, “With the functionality of our TS-4900 and the flexibility of Ubuntu Core, I can see these powering virtually anything from industrial IoT gateways, plant automation, network equipment, high definition digital signage, to remote monitoring stations.”

      Mike Bell, EVP IoT and Devices of Canonical said, “The TS-4900 Compute Module brings Ubuntu Core to the popular i.MX6 platform, delivering a new level of life-cycle management, monetisation and security to a whole range of IoT applications. Ubuntu Core delivers groundbreaking security, management, operations, and upgradability in a compact developer-friendly platform, underpinned by the open “snap” packaging technology.”

      For more information on Ubuntu Core please visit: www.ubuntu.com/core

      For more information on TS-4900 powered by Ubuntu Core please visit: www.embeddedarm.com/software/ubuntu-core

      Mobile World Congress 2017 location:
      To find out more about Ubuntu Core and the TS-4900, visit the Ubuntu Booth in Hall P3 – 3K31 at Mobile World Congress 2017.

      -ends-

      About Technologic Systems, Inc.
      Technologic Systems has been in business for 32 years, helping more than 8000 OEM customers and building over a hundred COTS products that have never been discontinued. Our commitment to excellent products, low prices, and exceptional customer support has allowed our business to flourish in a very competitive marketplace. We offer a wide variety of single board computers, computer-on-modules, touch panel PCs, PC/104 and other peripherals, and industrial controllers that satisfy most embedded project requirements. We also offer custom configurations and design services. We specialize in the ARM and X86 architectures, FPGA IP-core design, and open-source software support, providing advanced custom solutions using hardware-software co-design strategies.
      Technologic Systems
      Fountain Hills, AZ
      www.embeddedARM.com
      (480)837-5200

      About Canonical
      Canonical is the company behind Ubuntu, the leading OS for cloud operations. Most public cloud workloads use Ubuntu, as do most new smart gateways, switches, self-driving cars and advanced robots. Canonical provides enterprise support and services for commercial users of Ubuntu. Established in 2004, Canonical is a privately held company.
      Media contacts
      EMEA
      Ubuntu@wildfirepr.com
      +44 208 408 8000
      US
      March Communications
      ubuntu@marchcomms.com
      +1 (617) 960-9900.

      on February 23, 2017 01:00 PM

      The internet of things, augmented reality, artificial intelligence, big data, robots/drones, blockchain, industry 4.0 and other hype terms are popping up everywhere. Our partners will be showcasing real industrial solutions that go far beyond the hype and are based on these cutting edge technologies.

      How to tell industrial IoT reality apart from hype?

      Here are three simple questions you should ask when you visit Mobile World Congress next week:

      1. How do you secure IoT?
      2. How can I manage large IoT deployments?
      3. How can I monetize IoT?

      At the Ubuntu booth you will find answers to those three key questions.

      How does Canonical secure IoT?

      Unlike most commercial solutions, Ubuntu’s IoT solution is open source and can be freely downloaded. More eyes are looking at the code, hence more bugs get discovered and fixed. We don’t make you pay to get security updates to the operating system. They are included. Additionally we make sure third-party software is securely contained and can be easily patched. Ubuntu is used by the biggest brands in the world and withstands some of the most complex attacks every day.

      How can I easily manage large IoT deployments?

      Ubuntu Core comes with transactional rollbacks, so if you make a mistake, you can just roll back. You can easily add automated testing and perform DevOps for Devices, in which security and other updates are distributed to millions of devices in a fully automated, repeatable, and fully tested fashion.
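
      As a sketch of what that looks like on a device (the snap name here is made up for illustration), an update and a rollback are each a single command:

      $ sudo snap refresh my-gateway-app
      $ sudo snap revert my-gateway-app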

      How can I monetize IoT?

      We are offering the easiest way to monetize IoT and our partners will be showing the solution during MWC: run your own app store for any type of connected device. You decide which apps you want to resell and whether your store is open to anybody or by invitation only. Our large ecosystem of apps means you don’t have to start from zero.

      App Stores on Industrial Connected Devices

      The future of the industrial IoT is running your own app store or buying apps from somebody else’s app store. App stores for industrial connected devices are a great way to go from idea to new revenues, easier management, or other cost-lowering solutions. Try a new solution in minutes, with no need for complex RFPs and proofs of concept. Our partners are showcasing their app and app store enabled solutions.

      Azeti and Automated Asset Management

      Azeti will showcase why cell towers need an IoT edge gateway to automatically manage your valuable assets. Any problem with the air conditioning, doors, energy usage and more can be detected, and many can be solved automatically. In many cases a theft attempt can even be deterred.

      DataArt and Smart Elevator App Stores

      The most advanced artificial intelligence demo on the booth is around elevators. Use your voice to control an elevator. Get unexpected useful information as a plus. But what happens if an intruder uses the elevator? The elevator will automatically detect the danger and lock them in until help is available. The same hardware can be easily retrained to also look for people with health problems, or to suggest the floor you should go to by detecting who you are. DataArt is showing the future of industrial control via the next generation of programmable logic controllers, called app logic controllers or ALCs. Kunbus and UniPi are providing the hardware.

      Cloudplugs and Building and Industrial Automation

      Any building and industrial plant has programmable logic controllers. They are the key to understanding what is happening. Cloudplugs’ edge IoT solution shows how you can easily integrate with many existing industrial control systems. With the help of UniPi and Kunbus they will also show the future of industrial control systems with ALCs, creating beautiful touch-enabled user interfaces on the fly for the smart building and plant of the future.

      IBM and the future of edge computing

      IBM’s Blue Horizon leaves anybody amazed. Smart contracts, software defined radio, edge computing and big data get combined in a distributed peer-to-peer network that shows the future of industrial cloud solutions that can collect and sell data autonomously.

      Skyfield and revenue generating fountains for smart cities

      Why should cities invest in smart city solutions? Many have a so-so answer about cost savings and happier citizens. What if the solution is as simple as converting cost centres like fountains into revenue-generating opportunities? Name one city that does not want to increase its revenues.

      Fairwaves & Lime Micro and software defined radios on the edge

      Industrial communication is often still done via wires. This is expensive, and changes are slow to make. A new trend of wireless industrial communication is now starting to take shape. NB-IoT and LTE-M are commercial spectrum versions of low-power wide area networks. Via open source IoT base stations and software defined radios with app stores, these and other protocols can be easily supported with an app. Selling spectrum as a service for industrial use cases has enormous business potential. Fairwaves is showing how small software defined radios can fit into small gateways and redefine wireless communications in minutes.

      Pycom will be showing the future of narrow-band WAN

      Risen from the ashes of Kickstarter, Pycom is now the crowdfunding king around multi-network narrow-band low-power wide area networks. In an era where makers become the next enterprise and industrial giants and enterprise developers rule the roost, be sure to see how Pycom’s powerful yet inexpensive MicroPython programmable boards can support WiFi, Bluetooth, LoRa, Sigfox, LTE-M and NB-IoT. Challenge a friend to “Shake IoT”.

      Dell’s IoT Edge Gateways

      Dell will be showcasing their IoT Edge Gateways. Via apps, any sensor, actuator, edge analytics, protocol converter, cloud integration or other industrial solution can be deployed in minutes. Any industrial customer can solve any use case by just downloading apps from an Industrial App Store.

      Daqri’s Augmented Reality Industrial Helmets

      We all have seen virtual reality goggles, but few have seen augmented reality for industrial purposes. Come and try on the Daqri helmet and experience for yourself the new world of industrial augmented reality.

      App Stores on Robots

      For a peek into the future of industrial robotics, two leading robotics companies, Pal Robotics and Robotis, will be demonstrating how app stores for robots can be used to easily reconfigure them, cutting maintenance time and reducing the risk of error.

      Supporting industrial solutions

      Running production systems means offering complex support, workforce management and billing integrations. RevTwo will be showcasing their next-generation solution for app store enabled devices. Salesforce will show you how easy it is to connect your devices to your Industrial IoT workforce, creating a whole new experience for your customers. Iota will show the future of distributed billing integration based on open source next-generation blockchain technology.

      on February 23, 2017 10:00 AM

      February 22, 2017

      Part 1: Setting up an “All-Snap” Ubuntu Core image in a QEMU/KVM virtual machine Part 2: Basic management of an Ubuntu Core installation Part 3: Mir and graphics on Ubuntu Core Part 4: Confinement After I’ve set up an “All-Snap” Ubuntu Core virtual machine in the last post, let’s see what I can do with it. Logging in After […]
      on February 22, 2017 03:43 PM

      February 21, 2017

      Plasma in a Snap?

      Harald Sitter

      …why not!

      Shortly before FOSDEM, Aleix Pol asked if I had ever put Plasma in a Snap. While I was a bit perplexed by the notion itself, I also found this a rather interesting idea.

      So, the past couple of weeks I spent a bit of time here and there on trying to see if it is possible.

      img_20170220_154814

      It is!

      But let’s start in the beginning. Snap is one of the Linux bundle formats that are currently very much en-vogue. Basically, whatever is necessary to run an application is put into a self-contained archive from which the application then gets run. The motivation is to isolate application building and delivery from the operating system building and delivery. Or in short, you do not depend on your Linux distribution to provide a package, as long as the distribution can run the middleware for the specific bundle format you can get a bundle from the source author and it will run. As an added bonus these bundles usually also get confined. That means that whatever is inside can’t access system files or other programs unless permission for this was given in some form or fashion.

      Putting Plasma, KDE’s award-winning desktop workspace, in a snap is interesting for all the same reasons it is interesting for applications. Distributing binary builds would be less of a hassle, testing is more accessible and confinement in various ways can lessen the impact of security issues in the confined software.

      With the snap format specifically Plasma has two challenges:

      1. The snapped software is mounted in a changing path that is different from the installation directory.
      2. Confining Plasma is a bit tricky because of how many actors are involved in a Plasma session and some of them needing far-reaching access to system services.

      As it turns out problem 1, in particular, is biting Plasma fairly hard. Not exactly a great surprise, after all, relocating (i.e. changing paths of) an installed Plasma isn’t exactly something we’ve done in the past. In fact, it goes further than that as ultimately Plasma’s dependencies need to be relocatable as well, which for example Xwayland is not.

      But let’s talk about the snapping itself first. For the purposes of this proof of concept, I simply recycled KDE neon‘s deb builds. Snapcraft, the build tool for snaps, has built-in support for installing debs into a snap, so that is a great timesaver to get things off the ground as it were. Additionally, I used the Plasma Wayland stack instead of the X11 stack. Confinement makes lots more sense with Wayland compared to X11.

      Relocatability

      Relocatability is a tricky topic. A lot of times one compiles fixed paths into the binary because it is easy to do and it is somewhat secure. Notably, depending on the specific environment at the time of invocation one could be tricked into executing a malicious binary in $PATH instead of the desired one. Explicitly specifying the path is a well-understood safeguard against this sort of problem. Unfortunately, it also means that you cannot move your installed tree anywhere but where it was installed. The relocatable and safe solution is slightly more involved in terms of code, as you need to resolve what you want to invoke relative to your own location; being more code and not exactly trivial to get right is why one often opts to simply hard-compile paths instead. This is a problem in terms of packing things into a relocatable snap though. I had to apply a whole bunch of hacks to either resolve binaries from PATH or resolve their location relatively. None of these are particularly useful patches but here ya go.
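
      In shell terms the difference looks like this (the paths and helper name are made up for illustration); a relocatable launcher resolves siblings relative to its own location instead of a compiled-in prefix:

      # hard-coded: breaks as soon as the tree is mounted somewhere else
      exec /usr/lib/plasma/start-helper
      # relocatable: resolve relative to this script's own directory
      HERE="$(dirname "$(readlink -f "$0")")"
      exec "$HERE/../lib/plasma/start-helper"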

      Session

      Once all relocatability issues were out of the way I finally had an actual Plasma session. Weeeh!

      Confinement

      Confining Plasma as a whole is fairly straightforward, albeit a bit of a drag since it’s basically a matter of figuring out what is or isn’t required to make things fly. A lot of logouts and logins is what it takes. Fortunately, snaps have a built-in mechanism to expose DBus session services offered by them. A full blown Plasma session has an enormous amount of services it offers on DBus, from the general purpose notification service to the special interest Plasma Activity service. Being able to expose them efficiently is a great help in tweaking confinement.

      Not everything is about DBus though! Sometimes a snap needs to talk with a system service, and obviously a workspace as powerful as Plasma would need to talk to a bunch of them. Advanced access control needs to be done in snapd (the thing that manages installed snaps). Snapd’s interfaces control what is and is not allowed for a snap. To get Plasma to start and work with confinement, a bunch of holes need to be poked in the confinement that are outside the scope of existing interfaces. KWin, in particular, is taking the role of a fairly central service in the Plasma Wayland world, so it needs far-reaching access so it can do its job. Unfortunately, interfaces currently can only be built with snapd’s source tree itself. I made an example interface which covers most of the relevant core services, but unless you build your own snapd this won’t be particularly easy to try 😉
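
      From the user's side this all surfaces through the usual snapd plumbing; the snap and interface names below are only examples, not necessarily what a finished Plasma snap would ship:

      $ snap interfaces
      $ sudo snap connect plasma:network-manager :network-manager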

      Summary

      All in all, Plasma is easily bundled up once one gets relocatability problems out of the way. And thanks to the confinement control snap and snapd offer, it is also perfectly possible to restrict the workspace through confinement.

      I did not at all touch on integration issues however. Running the workspace from a confined bundle is all nice and dandy but not very useful since Plasma won’t have any applications it can launch as they either live on the system or in other snaps. A confined Plasma would know about neither right now.

      There is also the lingering question of whether confining like this makes sense at all. Putting all of Plasma into the same snap means this one snap will need lots of permissions and interaction with the host system. At the same time it also means that keeping confinement profiles up to date would be a continuous feat as there are so many things offered and used by this one snap.

      One day perhaps we’ll see this in production quality. Certainly not today 🙂

      mascot_konqi-app-dev

      on February 21, 2017 12:25 PM

      Welcome to the Ubuntu Weekly Newsletter. This is issue #499 for the week February 13 – 19, 2017, and the full version is available here.

      In this issue we cover:

      This issue of the Ubuntu Weekly Newsletter is brought to you by:

      • Elizabeth K. Joseph
      • Simon Quigley
      • Chris Guiver
      • Jim Connett
      • And many others

      If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

      Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

      on February 21, 2017 12:57 AM

      February 20, 2017

      Icons by Me

      Sam Hewitt

      Time for some self-promotion! I bought the domain iconsbysam.com some time ago to eventually create a site to showcase some of the icon design work I’ve done – I finally got around to doing just that, so do check it out:

      Icons by Sam

      on February 20, 2017 07:00 PM

      Everyone running their own business except me probably already knows this. But, three years in, I think I’ve finally actually understood in my own mind the difference between a dividend and a director withdrawal. My accountant, Crunch1 have me record both of them when I take money out of the company, and I didn’t really get why until recently. When I finally got it, I wrote myself a note that I could go back to and read when I get confused again, and I thought I’d publish that here so others can see it too.

      (Important note: this is not financial advice. If my understanding here differs from your understanding, trust yourself, or your accountant. I’m also likely glossing over many subtleties, etc, etc. If you think this is downright wrong, I’d be interested in hearing. If you think it’s over-simplified, you’re doubtless correct.)


      A dividend is a promise to pay you X money.

      A director withdrawal is you taking that money out.

      So when a pound comes in, you can create a dividend to say: we’ll pay Stuart 80p.

      When you take the money out, you record a director withdrawal of 80p.

      Dividends are IOUs. Withdrawals are you cashing the IOU in.

      So when the “director’s loan account is overdrawn”, that means: you have recorded dividends of N but have recorded director withdrawals of more than N, i.e., you’ve taken out more than the company wants to pay you. This may be because you are owed the amount you took, and recorded director withdrawals for all that but forgot to do a dividend for it, or because you’ve taken more than you’re allowed.

      When creating a new dividend (in Crunch) it will (usefully) say what the maximum dividend you can take is; that should be the maximum takeable while still leaving enough money in the account to pay the tax bill.
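
      A made-up round-number example: if the company has £1,000 of profit sitting in the account and corporation tax is 20%, then the tax bill will be £200, so the maximum dividend the screen should offer is £800.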

      In the Pay Yourself dashboard (in Crunch) it’ll say “money owed to Stuart”; that’s money that’s been promised with a dividend but not taken out with a withdrawal. (Note: this may be because you forgot to do a withdrawal for money you’ve taken! In theory it would mean money promised with a dividend but not taken, but maybe you took it and just didn’t do a withdrawal to record that you took it. Check.)

      1. who are really handy, online, and are happy to receive emails in which I ask stupid questions over and over again: if you need an accountant too, this referral link will get us both some money off
      on February 20, 2017 10:02 AM

      February 19, 2017

      It is now pretty well accepted that open source is a superior way of producing software. Almost everyone is doing open source these days. In particular, the ability for users to look under the hood and make changes results in tools that are better adapted to their workflows. It reduces the cost and risk of finding yourself locked in with a vendor in an unbalanced relationship. It contributes to a virtuous circle of continuous improvement, blurring the lines between consumers and producers. It enables everyone to remix and invent new things. It adds to the common pool of human knowledge.

      And yet

      And yet, a lot of open source software is developed on (and with the help of) proprietary services running closed-source code. Countless open source projects are developed on GitHub, or with the help of Jira for bugtracking, Slack for communications, Google Docs for document authoring and sharing, and Trello for status boards. That sounds a bit paradoxical and hypocritical -- a bit too much "do what I say, not what I do". Why is that? If we agree that open source has so many tangible benefits, why are we so willing to forfeit them with the very tooling we use to produce it?

      But it's free !

      The argument usually goes like this: those platforms may be proprietary, but they offer great features, and they are provided free of charge to my open source project. Why on Earth would I go through the hassle of setting up, maintaining, and paying for infrastructure to run less featureful solutions? Or why would I pay for someone to host it for me? The trick is, as the saying goes, when the product is free, you are the product. In this case, your open source community is the product. In the worst case scenario, the personal data and activity patterns of your community members will be sold to 3rd parties. In the best case scenario, your open source community is recruited by force into an army that furthers the network effect and makes it even more difficult for the next open source project to not use that proprietary service. In all cases, you, as a project, decide to not bear the direct cost, but ask each and every one of your contributors to pay for it indirectly instead. You force all of your contributors to accept the ever-changing terms of use of the proprietary service in order to participate in your "open" community.

      Recognizing the trade-off

      It is important to recognize the situation for what it is. A trade-off. On one side, shiny features, convenience. On the other, a lock-in of your community through specific features, data formats, proprietary protocols or just plain old network effect and habit. Each situation is different. In some cases the gap between the proprietary service and the open platform will be so large that it makes sense to bear the cost. Google Docs is pretty good at what it does, and I find myself using it when collaborating on something more complex than etherpads or ethercalcs. At the opposite end of the spectrum, there is really no reason to use Doodle when you can use Framadate. In the same vein, Wekan is close enough to Trello that you should really consider it as well. For Slack vs. Mattermost vs. IRC, the trade-off is more subtle. As a sidenote, the cost of lock-in is greatly reduced when the proprietary service is built on standard protocols. For example, GMail is not that much of a problem because it is easy enough to use IMAP to integrate it (and possibly move away from it in the future). If Slack was just a stellar opinionated client using IRC protocols and servers, it would also not be that much of a problem.

      Part of the solution

      Any simple answer to this trade-off would be dogmatic. You are not impure if you use proprietary services, and you are not wearing blinders if you use open source software for your project infrastructure. Each community will answer that trade-off differently, based on their roots and history. The important part is to acknowledge that nothing is free. When the choice is made, we all need to be mindful of what we gain, and what we lose. To conclude, I think we can all agree that, all other things being equal, when there is an open-source solution which has all the features of the proprietary offering, we all prefer to use that. The corollary is, we all benefit when those open-source solutions get better. So to be part of the solution, consider helping those open source projects build something as good as the proprietary alternative, especially when they are pretty close to it feature-wise. That will make solving that trade-off a lot easier.

      on February 19, 2017 01:00 PM

      The Quiet Voice

      Stuart Langridge

      It’s harder to find news these days. On the one hand, there’s news everywhere you turn. Shrieking at you. On the other, we’re each in a bubble. Articles are rushed out to get clicks; everything’s got a political slant in one direction or another. This is not new. But it does feel like it’s getting worse.

      It’s being recognised, though. Buzzfeed have just launched a thing called “Outside Your Bubble”, an admirable effort to “give our audience a glimpse at what’s happening outside their own social media spaces”; basically, it’s a list of links to views for and against at the bottom of certain articles. Boris Smus just wrote up an idea to add easily-digestible sparkline graphs to news articles which provide context to the numbers quoted. There have long been services like Channel 4’s FactCheck and AllSides which try to correct errors in published articles or give a balanced view of the news. Matt Kiser’s WTF Just Happened Today tries to summarise, and there are others.

      (Aside: I am bloody sure that there’s an xkcd or similar about the idea of the quiet voice, where when someone uses a statistic on telly, the quiet voice says “that’s actually only 2% higher than it was under the last president” or something. But I cannot for the life of me find it. Help.)

      So here’s what I’d like.

      I want a thing I can install. A browser extension or something. And when I view an article, I get context and viewpoint on it. If the article says “Trump’s approval rating is 38%”, the extension highlights it and says “other sources say it’s 45% (link)” and “here’s a list of other presidents’ approval ratings at this point in their terms” and “here’s a link to an argument on why it’s this number”. When the article says “the UK doesn’t have enough trade negotiators to set up trade deals” there’s a link to an article claiming that that isn’t a problem and explaining why. If it says “NHS wait times are now longer than they’ve ever been” there’s a graph showing what the response times are, and linking to a study showing that NHS funding is dropping faster than response times are. An article saying that X billion is spent on foreign aid gets a note on how much that costs each taxpayer, what proportion of the budget it is, how much people think it is. It provides context, views from outside your bubble, left and right. You get to see what other people think of this and how they contextualise it; you get to see what quoted numbers mean and understand the background. It’s not political one way or the other; it’s like a wise aunt commentator, the quiet voice that says “OK, here’s what this means” so you’re better informed of how it’s relevant to you and what people outside your bubble think.

      Now, here’s why it won’t work.

      It won’t work because it’s a hysterical amount of effort and nobody has a motive to do it. It has to be almost instant; there’s little point in brilliantly annotating an article three days after it’s written when everyone’s already read it. It’d be really difficult for it to be non-partisan, and it’d be even more difficult to make people believe it was non-partisan even if it was. There’s no money in it — it’s explicitly not a thing that people go to, but lives on other people’s sites. And there aren’t browser extensions on mobile. The Washington Post offer something like this with their service to annotate Trump’s tweets, but extending it to all news articles everywhere is a huge amount of work. Organisations with a remit to do this sort of thing — the newly-spun-off Open News from Mozilla and the Knight Foundation, say — don’t have the resources to do anything even approaching this. And it’s no good if you have to pay for it. People don’t really want opposing views, thoughts from outside their bubble, graphs and context; that’s what’s caused this thing to need to exist in the first place! So it has to be trivial to add; if you demand money nobody will buy it. So I can’t see how you pay the army of fact checkers and linkers you need to run this. It can’t be crowd sourced; if it were then it wouldn’t be a reliable annotation source, it’d be reddit, which would be disastrous. But it’d be so useful. And once it exists they can produce a thing which generates printable PDF annotations and I can staple them inside my parents’ copy of the Daily Mail.

      on February 19, 2017 12:17 PM

      From the “I should have posted this months ago” vault…

      When I led technology development at One Laptop per Child Australia, I maintained two golden rules:

      1. everything that we release must ‘just work’ from the perspective of the user (usually a child or teacher), and
      2. no special technical expertise should ever be required to set-up, use or maintain the technology.

      In large part, I believe that we were successful.

      Once the more obvious challenges have been identified and cleared, some more fundamental problems become evident. Our goal was to improve educational opportunities for children as young as possible, but proficiently using computers to input information can require a degree of literacy.

      Sugar Labs have done stellar work in questioning the relevance of the desktop metaphor for education, and in coming up with a more suitable alternative. This proved to be a remarkable platform for developing a touch-screen laptop, in the form of the XO-4 Touch: the icons-based user interface meant that we could add touch capabilities with relatively few user-visible tweaks. The screen can be swivelled and closed over the keyboard as with previous models, meaning that this new version can be easily converted into a pure tablet at will.

      Revisiting Our Assumptions

      Still, a fundamental assumption has long gone unchallenged on all computers: the default typeface and keyboard. It doesn’t at all represent how young children learn the English alphabet or literacy. Moreover, at OLPC Australia we were often dealing with children who were behind on learning outcomes, and who were attending school with almost no exposure to English (since they speak other languages at home). How are they supposed to learn the curriculum when they can barely communicate in the classroom?

      Looking at a standard PC keyboard, you’ll see that the keys are printed with upper-case letters. And yet, that is not how letters are taught in Australian schools. Imagine that you’re a child who still hasn’t grasped his/her ABCs. You see a keyboard full of unfamiliar symbols. You press one, and on the screen pops up a completely different looking letter! The keyboard may be in upper-case, but by default you’ll get the lower-case variants on the screen.

      A standard PC keyboard

      Unfortunately, the most prevalent touch-screen keyboard on the market isn’t any better. Given the large education market for its parent company, I’m astounded that this has not been a priority.

      The Apple iOS keyboard

      Better alternatives exist on other platforms, but I still was not satisfied.

      A Re-Think

      The solution required an examination of how children learn, and the challenges that they often face when doing so. The end result is simple, yet effective.

      The standard OLPC XO mechanical keyboard (above) versus the OLPC Australia Literacy keyboard (below)

      This image contrasts the standard OLPC mechanical keyboard with the OLPC Australia Literacy keyboard that we developed. Getting there required several considerations:

      1. a new typeface, optimised for literacy
      2. a cleaner design, omitting characters that are not common in English (they can still be entered with the AltGr key)
      3. an emphasis on lower-case
      4. upper-case letters printed on the same keys, with the Shift arrow angled to indicate the relationship
      5. better use of symbols to aid instruction

      One interesting user story with the old keyboard that I came across was in a remote Australian school, where Aboriginal children were trying to play the Maze activity by pressing the opposite arrows that they were supposed to. Apparently they thought that the arrows represented birds’ feet! You’ll see that we changed the arrow heads on the literacy keyboard as a result.

      We explicitly chose not to change the QWERTY layout. That’s a different debate for another time.

      The Typeface

      The abc123 typeface is largely the result of work I did with John Greatorex. It is freely downloadable (in TrueType and FontForge formats) and open source.

      After much research and discussions with educators, I was unimpressed with the other literacy-oriented fonts available online. Characters like ‘a’ and ‘9’ (just to mention a couple) are not rendered in the way that children are taught to write them. Young children are also susceptible to confusion over letters that look similar, including mirror-images of letters. We worked to differentiate, for instance, the lower-case L from the upper-case i, and the lower-case p from the lower-case q.

      Typography is a wonderfully complex intersection of art and science, and it would have been foolhardy for us to have started from scratch. We used as our base the high-quality DejaVu Sans typeface. This gave us a foundation that worked well on screen and in print. Importantly for us, it maintained legibility at small point sizes on the 200dpi XO display.

      On the Screen

      abc123 is a suitable substitute for DejaVu Sans. I have been using it as the default user interface font in Ubuntu for over a year.
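
      If you want to try it on your own machine, installing the downloaded TTF for your user is just the standard fontconfig routine (nothing abc123-specific about these commands):

      $ mkdir -p ~/.local/share/fonts
      $ cp abc123.ttf ~/.local/share/fonts/
      $ fc-cache -f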

      It looks great in Sugar as well. The letters are crisp and easy to differentiate, even at small point sizes. We made abc123 the default font for both the user interface and in activities (applications).

      The abc123 font in Sugar’s Write activity, on an XO laptop screen

      Likewise, the touch-screen keyboard is clear and simple to use.

      The abc123 font on the XO touch-screen keyboard, on an XO laptop screen

      The end result is a more consistent literacy experience across the whole device. What you press on the hardware or touch-screen keyboard will be reproduced exactly on the screen. What you see on the user interface is also what you see on the keyboards.

      on February 19, 2017 07:24 AM

      February 17, 2017

      Part 1: Setting up an “All-Snap” Ubuntu Core image in a QEMU/KVM virtual machine Part 2: Basic management of an Ubuntu Core installation Part 3: Mir and graphics on Ubuntu Core Part 4: Confinement You’ve probably heard a lot about Snappy and Ubuntu Core in the past couple of months. Since the whole ecosystem is slightly becoming “tryable”, […]
      on February 17, 2017 09:48 PM

      The second point release update to our LTS release 16.04 is out now. This contains all the bugfixes added to 16.04 since its first release in April. Users of 16.04 can run the normal update procedure to get these bugfixes. In addition, we suggest adding the Backports PPA to update to Plasma 5.8.5. Read more about it: http://kubuntu.org/news/plasma-5-8-5-bugfix-release-in-xenial-and-yakkety-backports-now/

Warning: 14.04 LTS to 16.04 LTS upgrades are problematic, and should not be attempted by the average user. Please install a fresh copy of 16.04.2 instead. To prevent messages about upgrading, change Prompt=lts to Prompt=normal or Prompt=never in the /etc/update-manager/release-upgrades file. As always, make a thorough backup of your data before upgrading.
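For reference, that setting lives in /etc/update-manager/release-upgrades; after editing, the relevant part of the file looks roughly like this (a minimal sketch, your file may carry additional comments):

[DEFAULT]
# "lts" prompts only for new LTS releases, "normal" for every release, "never" disables the prompt
Prompt=normal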

      See the Ubuntu 16.04.2 release announcement and Kubuntu Release Notes.

      Download 16.04.2 images.

      on February 17, 2017 05:42 PM
      Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 16.04.2 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor based on the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional distribution. Lubuntu specifically targets older machines with […]
      on February 17, 2017 04:12 AM

      February 16, 2017

      A few months ago, I started Folding@Home in the Ubuntu Folding team. I really enjoy checking my standings each night before I go to bed. What is Folding@Home? https://folding.stanford.edu/home/about-us/. Has Folding at Home actually done anything useful? Check Reddit and see what you think.

Team 45104 rankings: see http://wiki.ubuntu.com/FoldingAtHomeTeamUbuntu if you are interested in competing while contributing. It seems like interest has fallen off in the past year or so, which is a bit sad. On the other hand, it makes climbing up the standings easier!

I was reminded to make this post while watching NOVA on PBS tonight, about origami. There are so many new applications of this ancient art of folding paper: in art, mathematics, physics, material science, and even biology. You can watch it online if PBS is not available to you.

      PS: right now, I have 921,667 points, which puts me in the top 180 in TeamUbuntu (#179 to be precise).
      on February 16, 2017 06:36 AM

      February 15, 2017

      Last week there was a webinar from @DellCarePRO titled Ubuntu Basic Webinar.

      Today the webinar video Ubuntu Basics Webinar has been posted online, and here is the summary.

      Introduction

      Ubuntu Certified hardware page

       

      If your Dell laptop comes with Ubuntu, you can get the installation ISO (Recovery Image) from dell.com.

      Ubuntu installation as dual-boot

Installing Ubuntu.

       

Ubuntu installed.

      Explaining: The Menu Bar

       

      Explaining: Dash

       

      Explaining: Ubuntu Software Center

       

      Explaining: Keyboard shortcuts

       

      Explaining: Software and Updates

       

      Explaining: Multiple Monitor configuration

      Talk by Barton George

Presenting Barton George and Project Sputnik. Barton George headed an internal effort at Dell to get Ubuntu onto a high-end laptop, with a budget of just $40,000 and six months to deliver.

       

Funding came from the Dell Innovation Fund, with the aim of establishing whether an Ubuntu laptop would work.

       

      Contrary to other efforts, this one was for a high-end offering. It would involve the community and get feedback from the community in order to change perceptions.

       

It was very well received, with coverage from Ars Technica, O’Reilly Radar, TechCrunch, and The Wall Street Journal.

       

      Positive feedback from the twitter-sphere.

       

The line expanded from the initial XPS 13 with Ubuntu to a new 6th-gen Intel laptop, along with a whole line of Latitude Ubuntu laptops and an All-in-One Ubuntu desktop.

It was emphasised that the initial $40,000 fund, used to investigate whether an Ubuntu laptop would be a viable product, delivered profits to Dell many times over.

      on February 15, 2017 05:11 PM

      BSidesSF 2017

      David Tomaschik

      BSidesSF 2017 was, by far, the best yet. I’ve been to the last 5 or so, and had a blast at almost every one. This year, I was super busy – gave a talk, ran a workshop, and I was one of the organizers for the BSidesSF CTF. I’ve posted the summary and slides for my talk and I’ll update the video link once it gets posted.

      I think it’s important to thank the BSidesSF organizers – they did a phenomenal job with an even bigger venue and I think everyone loved it. It was clearly a success, and I can only imagine how much work it takes to plan something like this.

It’s also important to note that our perennial venue (except that one year we don’t talk about), DNA Lounge, is having some money problems. Apparently you can’t spend more than you bring in each year. This is the venue that, in addition to hosting BSidesSF, also hosts Cyberdelia. It is a venue that allows all kinds of independent art and events to thrive in one of the most expensive cities in the country. I encourage you to reach out and go to a show, buy some pizza, or just donate to their Patreon. If my encouragement is not enough, how about some from Razor and Blade?

      Again, big thanks to BSidesSF and DNA Lounge for such a successful event!

      on February 15, 2017 08:00 AM

      February 14, 2017

      jak-linux.org moved / backing up

      Julian Andres Klode

      In the past two days, I moved my main web site jak-linux.org (and jak-software.de) from a very old contract at STRATO over to something else: The domains are registered with INWX and the hosting is handled by uberspace.de. Encryption is provided by Let’s Encrypt.

I requested the domain transfer from STRATO on Monday at 16:23, received the auth codes at 20:10, and the .de domain was transferred completely by 20:36 (about 20 minutes if you count my overhead). The .org domain I had to ACK, which I did at 20:46, and at 03:00 I received the notification that the transfer was successful (I think there was some registrar ACKing involved there). So the whole transfer took about 10 1/2 hours, or 7 hours since I retrieved the auth code. I think that’s quite a good time 🙂

And, for those of you who don’t know: uberspace is a shared hoster that basically just gives you an SSH shell account, directories for you to drop files in for the http server, and various tools to add subdomains, certificates, and virtual users to the mailserver. You can also run your own custom-built software and open ports in their firewall. That’s quite cool.

      I’m considering migrating the blog away from wordpress at some point in the future – having a more integrated experience is a bit nicer than having my web presence split over two sites. I’m unsure if I shouldn’t add something like cloudflare there – I don’t want to overload the servers (but I only serve static pages, so how much load is this really going to get?).

      in other news: off-site backups

      I also recently started doing offsite backups via borg to a server operated by the wonderful rsync.net. For those of you who do not know rsync.net: You basically get SSH to a server where you can upload your backups via common tools like rsync, scp, or you can go crazy and use git-annex, borg, attic; or you could even just plain zfs send your stuff there.

      The normal price is $0.08 per GB per month, but there is a special borg price of $0.03 (that price does not include snapshotting or support, really). You can also get a discounted normal account for $0.04 if you find the correct code on Hacker News, or other discounts for open source developers, students, etc. – you just have to send them an email.

      Finally, I must say that uberspace and rsync.net feel similar in spirit. Both heavily emphasise the command line, and don’t really have any fancy click stuff. I like that.


      on February 14, 2017 11:52 PM

      RX460 Woes

      I finally found a solution to getting my RX460 to work in Manjaro. Since I did something a little different, I am recording it here for myself and anybody else who finds this post.

      While this is written using Manjaro, it should work with Arch and other distributions (some files might be in a different location, though). It should also work with any supported card.

I use nano to edit the files below. Use whatever you are most comfortable with, but remember to run it with sudo.

      The Process

First, you will need these (pacman will fail if you don’t have an up-to-date repository):

      sudo pacman -S "xorg-server>=1.18" "linux>=4.9" xf86-video-amdgpu xf86-input-libinput

Then, edit /etc/default/grub, changing:

      GRUB_CMDLINE_LINUX=""

      to:

      GRUB_CMDLINE_LINUX="amdgpu.exp_hw_support=1"

      After this, run:

      sudo update-grub

Then edit /etc/modprobe.d/radeon_blacklist.conf (it might create a new file) and add:

      blacklist radeon

Next, inside of /etc/X11/xorg.conf.d/90-mhwd.conf, delete everything in the file and add the following (confession: I didn’t empty out 90-mhwd.conf):

      Section "Device"
          Identifier    "RX460"
          Driver        "amdgpu"
      EndSection

      Finally, I restarted and booted into the latest linux kernel (4.10.0-1-MANJARO) with amdgpu support!
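To double-check that the card really is using amdgpu after the reboot (my addition, not part of the original steps):

lspci -k | grep -EA3 'VGA|3D|Display'
dmesg | grep -i amdgpu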

      I hope this helps others!


      The post RX 4XX: Getting AMDGPU to work in Manjaro Linux appeared first on Sina Mashek.

      on February 14, 2017 06:21 PM

Up until recently, my passwords were stored in a rather precarious manner. For my birthday, I decided it would be a nice gift to myself to perform a complete password refresh. This involved taking inventory of every password there was any record or memory of, and resetting each one to a unique, cryptographically random password of random length, between ~25 and ~200 characters long. Now I have reason to keep these passwords secure!

      My Delay

      Most people that know me would be surprised to learn I never needed a password vault. It was possible to avoid using a password vault by memorizing different algorithms. This worked well because an employer and year/quarter could be fed into the algorithm to produce work-centric time-based passwords.

This comes with some obvious issues. The first, and likely biggest, issue is that I'm not able to memorize an algorithm that wouldn't reveal a good portion of the pattern after ~5 cracked passwords.

The previous solution included coming up with a weak and easy algorithm as well as a strong and difficult alternative. It also included replacing each after a few years of use. Unfortunately, forgetting the old ones didn't fit into the equation.

      The Vault

      The first step is deciding on a tool to use for the password vault. After doing a review and audit of various tools, I settled on KeePassX. Although it uses the same database format as KeePass2, I trust this tool significantly more. Every person considering a solution for storing this much private data should do their own research in order to trust their decision.

      When doing the password refresh, no "current" password was moved to the vault. Instead, new passwords were generated and services updated with the new password before erasing old records.

      In Comes LUKS

      It should be obvious that a very strong password should be set on the keepass database. Maybe less obvious is that it would be rather silly to give keepass our full trust. Despite having reviewed the source code and knowing smarter people have already done the same, it's still a good idea to provide an extra layer of protection. Remember, this is data that should be kept very secure.

      Being familiar with LUKS, I saw it as the obvious tool for this job. LUKS provides the ability to store a tiny little file used for encryption that can be backed up just like any other file.

      LUKS also provides the ability to store headers in a separate file. The headers include the eight available key slots as well as other data required to unlock the encrypted volume. Headers can get a bit large but they are static so they become virtually non-existent with differential backups. The encrypted volume only needs to fit your password database and only needs to be large enough to accommodate growth. This will be the size consumed for any differential backup that includes the encrypted volume.

      To build a playground structure similar to mine:

      mkdir -p ~/.luks/{crypts,headers,mnt}
      

      To build files for encryption:

      dd if=/dev/urandom of=~/.luks/headers/vault bs=1MB count=2
      dd if=/dev/urandom of=~/.luks/crypts/vault bs=200KB count=1
mkdir ~/.luks/mnt/vault
      

      It's recommended to use --use-random to ensure a stronger entropy pool. When creating the LUKS volumes, use a memorable and secure password. This will later be removed and kept as a backup.

      Making cryptography:

      sudo cryptsetup luksFormat ~/.luks/crypts/vault \
          --header ~/.luks/headers/vault \
          -s 512 --align-payload=0 --use-random
      

      Now that the encryption stuff has been configured, some sysadmin stuff needs to be performed. This is pretty basic so explanation will be skipped.

      It's a root thing:

      cryptsetup open ~/.luks/crypts/vault \
          --header ~/.luks/headers/vault vault
      mkfs.ext2 -I 128 /dev/mapper/vault
      mount /dev/mapper/vault ~/.luks/mnt/vault
      chown $user:$user ~/.luks/mnt/vault
      

      Closing it up (also root):

      umount ~/.luks/mnt/vault
      cryptsetup close vault
      

      Yubikey Encryption

      The only reasonably secure way to trust the yubikey seems to be with the challenge-response / hmac-sha1 option. This seems to accept an input password up to 64 characters long, combine it with a secret, and produce a 40 character long hash.

      This was actually a pretty big concern for me because [0-9a-f]{40} wouldn't take a computer too terribly much time to crack. After some thinking, it became quite obvious that the simple solution was using the yubikey hash as a portion of the complete password rather than the whole thing.

      Pro-tip: Most of the tools I reviewed that used a yubikey as an authentication factor only utilized this return value. That includes the 'yubikey-luks' package in a few package repositories. Most tools didn't even include a sane option for decryption.

      Configuring the Yubikey:

      1. Install and launch "yubikey-personalization-gui"
      2. Select Challenge-Response tab, then HMAC-SHA1
      3. Configuration Slot: 2
      4. Configuration Protection: <strongly recommended | but not serial>
      5. Require User Input: <recommend yes | this means touching key>
      6. Click Generate, then Write Configuration

      If there's any intention of using the key as a permanent resident, it would be wise to reset slot 1 and ensure it does not respond to contact (user input).

      Password Derivatives

      To produce a strong password for LUKS (the encrypted volume), the algorithm used should produce a key that is both variable in length and character set. As unlikely as it is that the yubikey is storing entered passwords and caching generated hashes, yubikey is now closed source and there's absolutely zero proof that isn't happening. This is describing paranoia, but addressing the silly fear is quite easy.

      My first algorithm looked much like this:

      salt='71'
      read -sp '' -t 20 vault_key
      len="${#vault_key}"
      luks_pass="${vault_key:5}$(/usr/bin/ykchalresp -2 \
          "$(sha256sum <<<"${vault_key:0:8}$salt${vault_key:$(($len - 5)):4}" | cut -d ' ' -f 1)")"
      # sudo cryptsetup open [...]
      unset vault_key luks_pass
      
      # sample_in:  YouAreCorrectHorse,ThatIs@BatteryStaple!
      # sample_out: eCorrectHorse,ThatIs@BatteryStaple!ac3bc63c4949f8c902ea49a7d9409f506c79bcdc
      

If able, coming up with a more secure algorithm than this would be a good idea. If using this sample, at least change the salt. Verifying the checksums of the binaries accessed by the script would also be an excellent idea.

      If the configuration was set to require user input, processing will stop at the "luks_pass=" line and the yubikey will begin blinking green. Once the key has been touched it will emit solid green until the hash is generated and returned.

Pro-tip: sha512sum produces a string too large for ykchalresp (64-character limit)

      Adding Factors

      Knowing the final derived password means the original plain password can finally be retired. If there is no backup of the headers file, this would be an excellent time to make the copy and stick it away in a safe.
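Backing the header up is as simple as copying the file somewhere safe; the destination below is only an example:

cp ~/.luks/headers/vault /media/usb-backup/vault-header-$(date +%F)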

      To add the yubikey-derived key:

      sudo cryptsetup luksAddKey ~/.luks/headers/vault
      # first enter the old (current) password
      # enter the derived password
      # enter it a second time
      

      To delete the old key:

      sudo cryptsetup luksKillSlot ~/.luks/headers/vault 0
      # note: slot 0 is the first used and will have the plain password
      #       this can be verified using luksDump
      # enter the old password (for this slot)
      

Up to eight key slots are available for storing decryption keys. The same process that was used above can be repeated to add additional devices, with the only exception being that no keys will be deleted.
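To see which of the eight slots are currently in use (worth doing before and after adding or killing keys), dump the detached header; this is my suggestion rather than part of the original write-up:

sudo cryptsetup luksDump ~/.luks/headers/vault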

      Vault Access

Now that every record of that key, including copy/paste buffers and the clipboard, has been scrubbed, all that's left is to build a convenient script to make accessing the vault a bit less painful.

      I have included a very simple script to use as a starting point for your venture.

      Final Thoughts

      It would be nice to build a very strong and universal algorithm.

Most attacks that could hijack this derived password would also imply the attacker has already made it into the system far enough to grab a copy of the keepass file after the volume was mounted. If the intrusion is ever detected, ample time will be available to do another password refresh using a new password vault and encrypted volume.

      Attachments:

access_vault

      on February 14, 2017 06:00 AM

      Adam Holt and I were interviewed last night by the Australian Council for Computers in Education Learning Network about our not-for-profit work to improve educational opportunities for children in the developing world.

      We talked about One Laptop per Child, OLPC Australia and Sugar Labs. We discussed the challenges of providing education in the developing world, and how that compares with the developed world.

Australia poses some challenges of its own. The country is about 90% urbanised, and the remaining 10% of the population is scattered across vast distances. The circumstances of these communities often share both developed and developing world characteristics. We developed the One Education programme to accommodate this.

These lessons have been developed further into Unleash Kids, an initiative that we are currently working on to support the community of volunteers worldwide and take the movement to the next level.

      on February 14, 2017 05:47 AM

      February 13, 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

      Individual reports

In January, about 159 work hours were dispatched among 13 paid contributors. Their reports are available:

      Evolution of the situation

      The number of sponsored hours increased slightly thanks to Exonet joining us.

      The security tracker currently lists 37 packages with a known CVE and the dla-needed.txt file 36. The situation is roughly similar to last month even though the number of open issues increased slightly.

      Thanks to our sponsors

      New sponsors are in bold.


      on February 13, 2017 05:33 PM

      KDE neon + Kernel 4.8

      Harald Sitter

We are currently looking to roll out kernel 4.8 and I’d love to get some informal testing done first. Everyone who wants to help with testing the 4.8 kernel, please install the following and reboot afterward:

      pkcon refresh; pkcon install linux-generic-hwe-16.04 xserver-xorg-hwe-16.04
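To confirm which kernel you ended up on after the reboot (my addition, not part of the original note):

uname -r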

      Once you are on 4.8, please let me know if you have any problems or if everything is fine 🙂

      on February 13, 2017 04:24 PM

      February 11, 2017

      Development for the Xfce media player is back on!  Well over a year since the last release, Parole 0.9.0 brings a fresh set of features and fixes.  What’s New? New “mini mode”, activated from the right-click menu. New play and replay icons in the player content area. Clicking on these will play or replay your … Continue reading Parole Media Player 0.9.0 Released
      on February 11, 2017 08:44 PM

      Boudhayan Gupta dropped by for the final day of the Plasma Sprint because he had 3D printed that save icon and wanted to test it.  Coincidently I found a treasure in the glove compartment of my dad’s car, a Eurythmics Greatest Hits audio CD.

So how do KDE applications do with legacy media? Mixed results.

      Dolphin works even if it does report it as a 0B media [Update: fixed by the awesome Kai Uwe]

However, the classic KDE tool KFloppy does less well: it hard-codes locations in /dev to find the floppy, but my USB floppy drive just appears at /dev/sdc, and even once I fixed that, it uses an external tool which breaks on fdformat.

      Meanwhile CDs are also something we ship apps for but never test.  This makes the Plasma Sprinters sad because they desperately want to hear Love Is a Stranger.

The audiocd kioslave didn’t work at first, but when we looked at it again it worked perfectly; don’t you hate it when that happens? This was a killer feature of KDE back when everyone was ripping CDs to their hard disk for the first time.

Playing audio CDs natively was less successful: Amarok shows the CD as a source but says it has 0 tracks. Dragon plays it fine, but Dragon has no concept of a playlist so you can’t select a track. kscd works, but it is a perfect example of why skins and client-side window decorations are a bad idea, because it still looks like it did years ago.

      We also tried k3b which works for making a new audio CD but doesn’t let you add files to a data project (bug 375016) so shouldn’t be released quite yet. [Update: also fixed by Kai Uwe, what a useful chap.]

      Where else does KDE support legacy formats that need checking up on?

       

      on February 11, 2017 01:37 PM

      February 10, 2017

Unlike what some online news might have led you to believe, Mesa 13.0.x is not going to be in 16.04 LTS. And it has only been days since 16.04 received the “dated” 12.0.6 release as an SRU, together with 16.10.

That said, 13.0.4 is now available on ppa:ubuntu-x-swat/updates for both 16.04 and 16.10. The PPA carries the latest LLVM 3.9.1 package as well, which fixes some bugs for Radeon users. There were some games released recently that benefit from these, which is one more reason why these backports were (finally) made available.
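If you want to try the backports, the usual PPA steps apply (standard commands, not spelled out in the original post):

sudo add-apt-repository ppa:ubuntu-x-swat/updates
sudo apt update
sudo apt full-upgrade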

      Update: Ubuntu 17.04 will ship with Mesa 17, so that’s the version which will be backported to 16.04 next.


      on February 10, 2017 08:14 PM

      There is a huge announcement coming: snaps now run in Ubuntu 14.04 Trusty Tahr.

Take a moment to note how big this is. Ubuntu 14.04 is a long-term release that will be supported until 2019. Ubuntu 16.04 is also a long-term release that will be supported until 2021. We have many, many, many users on both releases, some of whom will stay there until we drop the support. Before this snappy new world, all those users were stuck with the versions of all their programs released in 2014 or 2016, getting only updates for security and critical issues. Just try to remember how your favorite program looked 5 years ago; maybe it didn't even exist. We were used to choosing between stability and cool new features.

      Well, a new world is possible. With snaps you can have a stable base system with frequent updates for every program, without the risk of breaking your machine. And now if you are a Trusty user, you can just start taking advantage of all this. If you are a developer, you have to prepare only one release and it will just work in all the supported Ubuntu releases.

Awesome, right? The Ubuntu devs have been doing a great job. snapd has already landed in the Trusty archive, and we have been running many manual and automated tests on it. So we would now like to invite the community to test it, explore weird paths, and try to break it. We will appreciate it very much, and all of those Trusty users out there will love it when they receive loads of new high-quality free software on their oldie machines.

      So, how to get started?

      If you are already running Trusty, you will just have to install snapd:

      $ sudo apt update && sudo apt install snapd
      

      Reboot your system after that in case you had a kernel update pending, and to get the paths for the new snap binaries set up.

If you are running a different Ubuntu release, you can install Ubuntu 14.04 in a virtual machine. Just make sure that you install the image from http://releases.ubuntu.com/14.04/ubuntu-14.04.5-desktop-amd64.iso.

      Once you have Trusty with snapd ready, try a few commands:

      $ snap list
      $ sudo snap install hello-world
      $ hello-world
      $ snap find something
      

      screenshot of snaps running in Trusty

      Keep searching for snaps until you find one that's interesting. Install it, try it, and let us know how it goes.

      If you find something wrong, please report a bug with the trusty tag. If you are new to the Ubuntu community or get lost on the way, come and join us in Rocket Chat.

      And after a good session of testing, sit down, relax, and get ohmygiraffe. With love from popey:

      $ sudo snap install ohmygiraffe
      $ ohmygiraffe
      

      screenshot of ohmygiraffe

      on February 10, 2017 02:00 PM

      Anouk

      Rhonda D'Vine

I need music to be more productive. Sitting in an open workspace, it helps to shut off outside noise too. And often enough I just turn cmus into shuffle mode and let it play what comes along. Yesterday I stumbled again upon a singer whose voice I fell in love with a long time ago. This is about Anouk.

The song was on a compilation series that I followed because it so easily brought great groups to my attention in a genre that I simply love. It was called "Crossing All Over!" and featured several groups that I dug further into and still love to listen to.

      Anyway, don't want to delay the songs for you any longer, so here they are:

• Nobody's Wife: The first song I heard from her, and her voice totally caught me.
• Lost: A quieter song, for a break.
• Modern World: A great song about the toxic beauty norms that society likes to paint. Lovely!

      Like always, enjoy!


      on February 10, 2017 12:19 PM

      February 09, 2017

      For some time now I have been working with HackerOne to help them shape and grow their hacker community. It has been a pleasure working with the team: they are doing great work, have fantastic leadership (including my friend, Mårten Mickos), are seeing consistent growth, and recently closed a $40 million round of funding. It is all systems go.

      For those of you unfamiliar with HackerOne, they provide a powerful vulnerability coordination platform and a global community of hackers. Put simply, a company or project (such as Starbucks, Uber, GitHub, the US Army, etc) invite hackers to hack their products/services to find security issues, and HackerOne provides a platform for the submission, coordination, dupe detection, and triage of these issues, and other related functionality.

You can think of HackerOne in two pieces: a powerful platform for managing security vulnerabilities and a global community of hackers who use the platform to make the Internet safer and, in many cases, make money. This effectively crowd-sources security using the same “given enough eyeballs, all bugs are shallow” principle as open source: with enough eyeballs, all security issues are shallow too.

      HackerOne and Open Source

      HackerOne unsurprisingly are big fans of open source. The CEO, Mårten Mickos, has led a number of successful open source companies including MySQL and Eucalyptus. The platform itself is built on top of chunks of open source, and HackerOne is a key participant in the Internet Bug Bounty program that helps to ensure core pieces of technology that power the Internet are kept secure.

      One of the goals I have had in my work with HackerOne is to build an even closer bridge between HackerOne and the open source community. I am delighted to share the next iteration of this.

      HackerOne for Open Source Projects

      While not formally announced yet (this is coming soon), I am pleased to share the availability of HackerOne Community Edition.

      Put simply, HackerOne is providing their HackerOne Professional service for free to open source projects.

      This provides features such as a security page, vulnerability submission/coordination, duplicate detection, hacker reputation, a comprehensive API, analytics, CVEs, and more.

This not only provides a great platform for open source projects to gather vulnerability reports and manage them, but also opens your project up to thousands of security researchers who can help identify security issues and make your code more secure.

      Which projects are eligible?

      To be eligible for this free service projects need to meet the following criteria:

      1. Open Source projects – projects in scope must only be Open Source projects that are covered by an OSI license.
      2. Be ready – projects must be active and at least 3 months old (age is defined by shipped releases/code contributions).
3. Create a policy – you add a SECURITY.md in your project root that provides details for how to submit vulnerabilities (example; a minimal sketch follows this list).
      4. Advertise your program – display a link to your HackerOne profile from either the primary or secondary navigation on your project’s website.
      5. Be active – you maintain an initial response to new reports of less than a week.
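As an illustration only (each project will word its own policy, and the HackerOne URL below is a placeholder), creating a minimal SECURITY.md for point 3 could be as simple as:

cat > SECURITY.md <<'EOF'
# Security Policy

Please report security vulnerabilities through our HackerOne program:
https://hackerone.com/example-project

Please do not file security issues in the public bug tracker.
We aim to send an initial response to new reports within one week.
EOF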

      If you meet these criteria and would like to apply, just see the HackerOne Community Edition page and click the button to apply.

      Of course, let me know if you have any questions!

      The post HackerOne Professional, Free for Open Source Projects appeared first on Jono Bacon.

      on February 09, 2017 10:20 PM

      LXD logo

      LXD on other operating systems?

      While LXD and especially its API have been designed in a mostly OS-agnostic way, the only OS supported for the daemon right now is Linux (and a rather recent Linux at that).

      However since all the communications between the client and daemon happen over a REST API, there is no reason why our default client wouldn’t work on other operating systems.

      And it does. We in fact gate changes to the client on having it build and pass unit tests on Linux, Windows and MacOS.

      This means that you can run one or more LXD daemons on Linux systems on your network and then interact with those remotely from any Linux, Windows or MacOS machine.

      Setting up your LXD daemon

      We’ll be connecting to the LXD daemon over the network, so you’ll need to make sure it’s listening and has a password configured so that new clients can add themselves to the trust store.

      This can be done with:

      lxc config set core.https_address "[::]:8443"
      lxc config set core.trust_password "my-password"

      In my case, that remote LXD can be reached with “djanet.maas.mtl.stgraber.net”, you’ll want to replace that with your LXD server’s FQDN or IP in the commands used below.

      Windows client

      Pre-built native binaries

      Our Windows CI service builds a tarball for every commit. You can grab the latest one here:
      https://ci.appveyor.com/project/lxc/lxd/branch/master/artifacts

      Then unpack the archive and open a command prompt in the directory where you unpacked the lxc.exe binary.

      Build from source

      Alternatively, you can build it from source, by first installing Go using the latest MSI based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      And then in a command prompt, run:

      git config --global http.https://gopkg.in.followRedirects true
      go get -v -x github.com/lxc/lxd/lxc

      Use Ubuntu on Windows (“bash”)

      For this, you need to use Windows 10 and have the Windows subsystem for Linux enabled.
      With that done, start an Ubuntu shell by launching “bash”. And you’re done.
      The LXD client is installed by default in the Ubuntu 16.04 image.

      Interact with the remote server

      Regardless of which method you picked, you’ve now got access to the “lxc” command and can add your remote server.

      Using the native build does have a few restrictions to do with Windows terminal escape codes, breaking things like the arrow keys and password hiding. The Ubuntu on Windows way uses the Linux version of the LXD client and so doesn’t suffer from those limitations.
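Either way, adding the remote and checking that it responds typically looks something like this (using the example FQDN from above; replace it with your own server and type the trust password when prompted):

lxc remote add djanet djanet.maas.mtl.stgraber.net
lxc remote list
lxc list djanet: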

      MacOS client

Even though we do have MacOS CI through Travis, they don’t host artifacts for us, so we don’t have prebuilt binaries for people to download.

      Build from source

      Similarly to the Windows instructions, you can build the LXD client from source, by first installing Go using the latest DMG based installer from https://golang.org/dl/ and then Git from https://git-scm.com/downloads.

      Once that’s done, open a new Terminal window and run:

      export GOPATH=~/go
      go get -v -x github.com/lxc/lxd/lxc
      sudo ln -s ~/go/bin/lxc /usr/local/bin/

      At which point you can use the “lxc” command.

      Conclusion

The LXD client can be built on all the main operating systems and on just about every architecture, which makes it very easy for anyone to interact with existing LXD servers, whether they’re using a Linux machine themselves or not.

Thanks to our pretty strict backward compatibility rules, the version of the client doesn’t really matter. Older clients can talk to newer servers and newer clients can talk to older servers. Obviously in both cases some features will not be available, but normal container workflow operations will work fine.

      Extra information

      The main LXD website is at: https://linuxcontainers.org/lxd
      Development happens on Github at: https://github.com/lxc/lxd
      Mailing-list support happens on: https://lists.linuxcontainers.org
      IRC support happens in: #lxcontainers on irc.freenode.net
      Try LXD online: https://linuxcontainers.org/lxd/try-it

      on February 09, 2017 05:44 PM

      After a little break, on the first Friday of February we resumed the Ubuntu Testing Days.

      This session was pretty interesting, because after setting some of the bases last year we are now ready to dig deep into the most important projects that will define the future of Ubuntu.

      We talked about Ubuntu Core, a snap package that is the base of the operating system. Because it is a snap, it gets the same benefits as all the other snaps: automatic updates, rollbacks in case of error during installation, read-only mount of the code, isolation from other snaps, multiple channels on the store for different levels of stability, etc.

      The features, philosophy and future of Core were presented by Michael Vogt and Zygmunt Krynicki, and then Federico Giménez did a great demo of how to create an image and test it in QEMU.

      Click the image below to watch the full session.

Video of the Ubuntu Testing Days session on Ubuntu Core

      There are plenty of resources in the Ubuntu websites related to Ubuntu Core.

To get started, we recommend following this guide to run the operating system in a virtual machine.

After that, and if you are feeling brave and want to help Michael, Zygmunt and Federico, you can download the candidate image instead, from http://cdimage.ubuntu.com/ubuntu-core/16/candidate/pending/ubuntu-core-16-amd64.img.xz. This is the image that's currently being tested, so if you find something wrong or weird, please report a bug in Launchpad.
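If you want to boot that candidate image quickly, the QEMU invocation looks roughly like this (my own sketch; adjust memory and the forwarded SSH port to taste, and treat the guide linked above as the authoritative reference):

xz -d ubuntu-core-16-amd64.img.xz
qemu-system-x86_64 -smp 2 -m 1500 \
  -netdev user,id=net0,hostfwd=tcp::8022-:22 \
  -device virtio-net-pci,netdev=net0 \
  -drive file=ubuntu-core-16-amd64.img,format=raw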

      Finally, if you want to learn more about the snaps that compose the image and take a peek at the things that we'll cover in the following testing days, you can follow the tutorial to create your own Core image.

In this session we were also accompanied by Robert Wolff, who works on 96boards at Linaro. He has an awesome show every Thursday called Open Hours. At 96boards they are building open Linux boards for prototyping and embedded computing. Anybody can jump into the Open Hours to learn more about this cool work.

      The great news that Robert brought is that both Open Hours and Ubuntu Testing Days will be focused on Ubuntu Core this month. He will be our guest again next Friday, February 10th, where he will be talking about the DragonBoard 410c. Also my good friend Oliver Grawert will be with us, and he will talk about the work he has been doing to enable Ubuntu in this board.

Great topics ahead, and a whole new world of possibilities now that we are mixing free software with open hardware and affordable prototyping tools. Remember, every Friday at http://ubuntuonair.com/, don't miss it.

      on February 09, 2017 05:38 AM

      February 08, 2017

      The KDE neon Docker Images are the easiest and fastest way to test out KDE software from a different branch than your host system.

      Coming live from the Plasma Sprint sponsored by Affenfels here in Stuttgart, the KDE neon Docker images now support Wayland.  This runs on both X and Wayland host systems.  Instructions on the wiki page.

      Below you can see my host system running Plasma 5.9 on X is running Plasma master with Wayland.

      Hugs to David E.

      neon-docker-wayland

      on February 08, 2017 02:05 PM

      This year’s Plasma Sprint is kindly being hosted by von Affenfels, a software company in Stuttgart, Germany, focusing on mobile apps. Let me try to give you an idea of what we’re working on this week.

      Bundled apps

Welcome, KDE hackers!
One problem we’re facing in KDE is that for Linux, our most important target platform, we depend on Linux distributors to ship our apps and updates for it. This is problematic on the distro side, since the work on packaging has to be duplicated by many different people, but it’s also a problem for application developers, since it may take weeks, months or even forever until an update becomes available for users. This is a serious problem and puts us far, far behind, for example, the deployment cycles of webapps.

      Bundled app technologies such as flatpak, appimage and snap solve this problem by allowing us to create one of these packages and deploy them across a wide range of distributions. That means that we could go as far as shipping apps ourselves and cutting out the distros as middle men. This has a bunch of advantages:

      • Releases and fixes can reach the user much quicker as we don’t have to wait for distros with their own cycles, policies and resources to pick up our updates
• Users can easily get the latest version of the software they need, without being bound to what the distro ships
      • Packaging and testing effort is vastly reduced as it has to only be done once, and not for every distro out there
• Distros with less man-power, which may not be able to package and offer a lot of software, can make available many more applications,…
      • …and at the same time concentrate their efforts on the core of their OS

From a Plasma point of view, we want to concentrate on a single technology, and not three of them. My personal favorite is flatpak, as it is technologically the most advanced and it doesn’t rely on a proprietary and centralized server component. Unless Canonical changes the way they control snaps, flatpak should be the technology KDE concentrates on. This hasn’t been formally decided however, and the jury is still out. I think it’s important to realize that KDE isn’t served by adopting a technology for a process as important as software distribution that could be switched off by a single company. This would pose an unacceptable risk, and it would send the wrong signal to the rest of the Free software community.

How would this look to the user? I can imagine KDE shipping applications directly. We already build our code on pretty much every commit, so we are actually the best candidate to know how to build it properly. We’d integrate this seamlessly in Discover through the KDE store, and users should be able to install our applications very easily, perhaps similarly to openSUSE’s one-click install, but based on appstream metadata.

      Website work

Hackers hacking.

      We started off the meeting by going over and categorizing topics and then dove straight into the first topic: Communication and Design. There’s a new website for Plasma (and the whole of KDE) coming, thanks to the tireless work of Ken Vermette. We went over most of his recent work to review and suggest fixes, but also to get a bit excited about this new public face of Plasma. The website is part of a bigger problem: In KDE, we’re doing lots of excellent work, but we fail to communicate it properly, regularly and in ways and media that reach our target audience. In fact, we haven’t even clearly defined the target audience. This is something we want to tackle in the near future as well, so stay tuned.

But also web browsers…

KDE Plasma in 2017

Kai Uwe demoed his work on better integration of browsers: native notifications instead of the out-of-place notifications created by the browser, controls for media player integration between Plasma and the browser (so your album artwork gets shown in the panel’s media controller), access to tabs, closing all incognito tabs from Plasma (including for individual browsers), and a few more cool features. Plasma already has most of this functionality, so the bigger part of this has to live in the browser. Kai has implemented the browser side of things as an extension for Chromium (that’s what he uses; Firefox support is also planned), and we’re discussing how we can bring this extension to the attention of users, possibly preinstalling it so you get the improvements in browser integration without having to spend a thought on it.

      On and on…

      We only just started our sprint, and there are many more things we’re working on and discussing. The above is my account of some things we discussed so far, but I’m planning to keep you posted.

      on February 08, 2017 10:39 AM

      February 07, 2017

      ansible deploying video boxes

      Mark Van den Borre

       
This is how an ansible deploy of the https://fosdem.org video boxes looked... More info to come.
      on February 07, 2017 01:14 PM

      In the tutorial How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04 we saw how to create a snap with snapcraft for the CLI utility called how2. That was a software based on nodejs.

In this post we will repeat the process for another CLI utility called howdoi by Benjamin Gleitzman, which does a similar task to how2 but is implemented in Python and has a few usability differences as well. howdoi does not yet have a package in the Ubuntu repositories either.

Since we already covered the details in How to create a snap for how2 (stackoverflow from the terminal) in Ubuntu 16.04, this post will be more focused, and shorter. 🙂

      Planning

      Reading through https://github.com/gleitz/howdoi we see that howdoi

      1. is software based on Python (therefore: plugin: python)
      2. requires networking (therefore: plugs: [network])
      3. and has no need to save files (therefore it does not need access to the filesystem)

      Crafting with snapcraft

      Let’s start with snapcraft.

      $ mkdir howdoi
      $ cd howdoi/
      $ snapcraft init
      Created snap/snapcraft.yaml.
      Edit the file to your liking or run `snapcraft` to get started

Now we edit snap/snapcraft.yaml; here are our changes from the initially generated file.

      $ cat snap/snapcraft.yaml 
      name: howdoi # you probably want to 'snapcraft register <name>'
      version: '20170207' # just for humans, typically '1.2+git' or '1.3.2'
      summary: instant coding answers via the command line # 79 char long summary
      description: |
        Are you a hack programmer? Do you find yourself constantly Googling 
        for how to do basic programing tasks?
        Suppose you want to know how to format a date in bash. Why open your browser 
        and read through blogs (risking major distraction) when you can simply 
        stay in the console and ask howdoi.
      
      grade: stable # must be 'stable' to release into candidate/stable channels
      confinement: strict # use 'strict' once you have the right plugs and slots
      
      apps:
        howdoi:
          command: howdoi
          plugs: [network]
      
      parts:
        howdoi:
          plugin: python
          source: https://github.com/gleitz/howdoi.git

First, we selected the name howdoi because, again, it's not a reserved name :-). Also, we registered it with snapcraft,

      $ snapcraft register howdoi
      Registering howdoi.
      Congratulations! You're now the publisher for 'howdoi'.

Second, we did not notice a particular branch or tag for howdoi, therefore we used the date of the snap's creation as the version.

      Third, the summary and the description are just pasted from the Readme.md of the howdoi repository.

      Fourth, we select the grade stable and enforce the strict confinement.

      The apps: howdoi: command: howdoi is the standard sequence to specify the command that will be exposed to the user. The user will be typing howdoi and the command howdoi inside the snap will be invoked.

      The parts: howdoi: plugin: python source: … is the standard sequence to specify that the howdoi that was referenced just earlier, is software written in Python and the source comes from this github repository.

      Let’s craft the snap.

      $ snapcraft 
      Preparing to pull howdoi 
      ...                                                                     
      Pulling howdoi 
      ...
      Preparing to build howdoi 
      Building howdoi 
      ...
      Successfully built howdoi
      ...
      Installing collected packages: howdoi, cssselect, Pygments, requests, lxml, pyquery, requests-cache
      Successfully installed Pygments-2.2.0 cssselect-1.0.1 howdoi-1.1.9 lxml-3.7.2 pyquery-1.2.17 requests-2.13.0 requests-cache-0.4.13
      Staging howdoi 
      Priming howdoi 
      Snapping 'howdoi' |                                                                       
      Snapped howdoi_20170207_amd64.snap
      $ snap install howdoi_20170207_amd64.snap --dangerous
      howdoi 20170207 installed
      $ howdoi format date bash
      DATE=`date +%Y-%m-%d`
      $ _

      Beautiful! It worked!
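You can keep poking at it with other queries, for example (just an illustrative query; the output depends on what Stack Overflow returns):

$ howdoi print stack trace python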

      Publish to the Ubuntu Store

      Let’s publish the snap to the Ubuntu Store. We are going to push the file howdoi_20170207_amd64.snap and then check that it has passed the automatic checking. Once it has done so, we release to the stable channel.

      $ snapcraft push howdoi_20170207_amd64.snap 
      Pushing 'howdoi_20170207_amd64.snap' to the store.
      Uploading howdoi_20170207_amd64.snap [=============================================================] 100%
      Ready to release!|                                                                                       
      Revision 1 of 'howdoi' created.

      Just a reminder: We can release the snap to the stable channel simply by running snapcraft release howdoi 1 stable. The alternative to this command, is to do all the following through the Web.

We log in to https://myapps.developer.ubuntu.com/ to check whether the snap is ready to publish. In the following screenshots, you would click where the arrows are showing. See the captions for explanations.

Here is the uploaded snap in our account page in the Ubuntu Store. The snap was uploaded using snapcraft, although it is also possible to upload it from the account page as well.

       

      The package (the snap) is ready to publish, because it passed the automated tests and was not flagged for manual review.

      By default, the package has not been released to a channel. We click on Release in order to select which channels to release it to.

      For this specific package, we select the stable channel. It is not necessary to select the other channels, because by default a higher channel implies those below. Then, click on the Release button.

The package got released, and it is shown as released to stable, candidate, beta and edge (we selected stable, but the rest are implied because “stable” beats the rest). Note that the Package status has changed to “Published”, and we have the option to Unpublish or even Make private. Ignore the arrow, it was pasted by mistake.

      on February 07, 2017 11:42 AM

      February 06, 2017

Exactly two years ago, on February 6th 2015, Canonical handed me the bq E4.5 as an insider, a couple of months before it went on sale to the public.

Ubuntu Phone presentation in London


And yes, I used Ubuntu Phone exclusively for two years (except for a few days when I played with Firefox OS and Android).
       
      E4.5


Past

I was very happy with my bq E4.5 when, surprise! Canonical handed us a Meizu MX4.


Those were the good times, with two companies fully committed to Ubuntu Touch, later releasing the bq E5, the Meizu PRO 5 and the bq M10 tablet. And a Canonical publishing OTA updates every month or so.

The M10 tablet

In these two years I read many articles about the first handsets. Almost all of them unfavourable. They forgot that these were phones for early adopters, and reviewed them against the best of Android. Fail! To be fair, those first versions of Ubuntu Phone were better than the first versions of Android and iOS.

On a personal level, uNav and uWriter were born :')) with an overwhelming success that surprised me.

      Ubucon Paris 15.10

Present

Great stalwarts of Ubuntu, such as David Planella, Daniel Holbach and Martin Pitt, are leaving Ubuntu. And along with that I read that Canonical is halting phone development, with wording that does not invite optimism. But 'halting' does not mean 'abandoning'.

UBPorts has gained relevance over the last few months, working on ports to the Fair Phone 2 and the OnePlus One.


      FairPhone 2

Future

The present does not make me feel especially optimistic. Not just because of Ubuntu Touch in particular, but because of the mobile market in general. An excellent Firefox OS that died, a SailfishOS that barely keeps going, a Tizen that only daddy Samsung keeps alive, and a Windows Phone that holds on to third place thanks to the cash of the desktop's number one.
And the fact is that, despite the lack of privacy, security and, above all, free software, nobody challenges Android.

Image from neurogadget



And how is Ubuntu facing such a bleak future? We can say that Canonical is going to bet all or nothing on a single card: snap.

      snap


I should clarify the current state of things here: on the PC we have Ubuntu with Unity 7, and on the phone Ubuntu with Unity 8. But it is all the same Ubuntu, the same base.

And that is the play: in the short term we should have an Ubuntu with Unity 8 on both PC and phone, based on snap packages (which have no dependency problems and are very secure because they isolate applications).

And that is where convergence comes into play: same Ubuntu, same applications, different devices.

Image from OMG Ubuntu!

But the cost of this play could be very high: leaving behind the entire current base of phones (the tablet is spared), because they use a 32-bit Android base and the jump would mean moving to 64 bits, which does not seem feasible.
      on February 06, 2017 08:44 PM