September 17, 2019

This was a fairly busy two weeks for the Web & design team at Canonical.  Here are some of the highlights of our completed work.

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

Takeover templates

In order to save time with the increasingly frequent takeovers on the homepage of ubuntu.com, we have designed and implemented a set of five (soon to be six) templates that make it very simple for us to create new takeovers. It consists of a set of SCSS mixins for the background gradients and angles, and a Jinja2 template for all the logic. You can see our internal help doc if you are interested.

We have also built our first three takeovers with the new templates:

buy.ubuntu.com

From VMware to Charmed OpenStack webinar

Machine Learning at scale webinar

Created an initial version of the MicroStack.run website

We have launched a new microsite for MicroStack, a single node version of OpenStack for developers and edge cloud users.

Adding GitHub buttons to our open-source projects

Not every Canonical project is hosted on GitHub, but we have started to add a strip of buttons on a few sites that are.

Eoan Ermine mascot

With 19.10 around the corner, we have finally completed work on our new release mascot.  

The Desktop backgrounds have also been completed and are being baked into the code base ready for release.

JAAS

The JAAS squad develops the UI for the JAAS store and Juju GUI projects.

JAAS.ai

New hero section and new expert page (Omnivector Solutions)

A couple of updates on the JAAS.ai website: a new slideshow in the hero area of our homepage displays different messages and provides multiple entry points to our landing pages.

We also implemented a new landing page for our partner Omnivector Solutions, our new Juju experts.

JAAS Dashboard

The team is building and implementing the design of the JAAS dashboard / monitoring tool. The new Juju GUI allows Juju to scale up, targeting enterprises and users with many models to manage. The new GUI brings all bootstrapping together, highlighting the status of all models with metadata about the controllers, analytics and stats. JAAS is the intersection of Juju, the CLI, models and solutions. This iteration we focused in particular on exploring solutions for the application’s navigation.

Juju, JAAS, CharmHub – Workshop

We organised a workshop to explore the user journey across the standalone Juju website (with docs, Discourse and marketing pages), the new jaas.ai and JAAS dashboard, and the CharmHub store and website. We started defining personas and scenarios from user interviews, and used this resource for our explorations.

ChurmHub & CLI

The team is working on defining the user experience and the interface of the publisher flow and pages of the new store, aligning the user experience with Snap and Snapcraft. The same alignment is reflected in the CLI, where the commands in Snapcraft and Charm (the publishing stream) and Snap and Juju (the operational stream) are being made consistent, with a common user experience and approach.

RBAC

The team worked on implementing more granular permission settings for RBAC administrators.

Vanilla

The Vanilla squad designs and maintains the design system and the Vanilla framework library. They ensure a consistent style throughout web assets.

login.ubuntu.com on Vanilla

The Vanilla migration is now complete; the next phase of the project is to finalise how we’re going to deploy the site and roll out the updates.

KPI dashboard

We now have a dashboard to track the most important metrics for our framework, bringing all our KPIs together in one place. We can see how we’re performing on each different measurement, from previous to current releases.

Metrics we are tracking:

  • Site analytics – Users, sessions, bounce rate and acquisition 
  • Events – Downloads and click rate
  • npm downloads – Comparing major releases per cycle
  • Migrations and upgrades – Live projects on Vanilla, release versions and status
  • GitHub – package.json installs, packages, forks and stargazers
  • Email marketing – Subscribers, campaigns, open and click rate

2FA Backup device enforcement

Updates to the backup codes functionality for two-factor authentication: the newly proposed user flow has been signed off, and UI visuals are being developed, with review planned for next week.

The system currently generates backup codes for users but does not enforce a regular check-in, which makes it more likely that people lose their codes, as they are physical, printable artefacts.

The idea is that the system will ask authenticated users for one of the backup codes.

Component colour theming

Vanilla has always been a single-colour-theme framework, with localised overrides like dark navs and dark strips. We’ve been planning to generalise this into a flexible theming system, and this iteration we finalised the architecture for it.

Future releases will see a gradual rollout of a new dark theme across all components. 

Snapcraft

The Snapcraft team works closely with the Snap Store team to develop and maintain the Snap Store website.

Release UI

This iteration we’ve been focusing on preparing the Publisher Release UI for some new, powerful features that should be landing across the Snap ecosystem in the coming months. Here’s a brief summary of a couple of those features.

Build tags

If you’re a user of the Snapcraft automated build system, you’ll soon have visibility of revisions, across multiple architectures, that were built at the same time. You’ll also be able to release these sets together and promote them through all channels to stable as one coherent set.

Canarying / Phased Releases / Staged Rollouts

While the name is not fully defined, the feature is. Canarying will allow publishers to release new revisions to a subset of devices in order to receive feedback and test for bugs before rolling out the revision to all devices.

on September 17, 2019 07:30 AM

September 16, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 596 for the week of September 8 – 14, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 16, 2019 08:46 PM

Kubernetes has been the buzzword in the tech industry, and for good reason: developers, systems admins and tech enthusiasts alike are interested in learning it. But it is a complex container orchestration tool that can be overwhelming for beginners. If you’re itching to get started with Kubernetes and not looking forward to the complexities involved, this first blog of a series is for you. We’ll walk you through getting up and running in a jiffy with a Kubernetes deployment using MicroK8s. The following blogs will do a deeper dive into add-ons and usage.

What is MicroK8s?

MicroK8s is a powerful, lightweight, reliable, production-ready Kubernetes distribution. It has a small disk and memory footprint while offering production-grade add-ons out of the box, such as Istio, Knative, Grafana, Cilium and more. Whether you are running a production environment or just exploring K8s, MicroK8s serves your needs.

Why MicroK8s?

MicroK8s is the smallest, fastest multi-node Kubernetes: a single-package, fully conformant, lightweight Kubernetes that works on 42 flavours of Linux, as well as Mac and Windows using Multipass. It is perfect for developer workstations, IoT, edge and CI/CD.

Anyone who’s tried to work with Kubernetes knows the pain of getting set up and running with a deployment. There are minimalist solutions on the market that reduce time-to-deployment and complexity, but the lightweight solutions come at the expense of critical extensibility and add-ons.

If you don’t want to spend time jumping through hoops to get Kubernetes up and running, MicroK8s gets you started in under 60 seconds.

  • Small: Developers want the smallest K8s for laptop and workstation development. MicroK8s provides a standalone K8s compatible with Azure AKS, Amazon EKS, Google GKE when you run it on Ubuntu.
  • Simple: Minimize administration and operations with a single-package install that has no moving parts for simplicity and certainty. All dependencies and batteries included.
  • Secure: Updates are available for all security issues and can be applied immediately or scheduled to suit your maintenance cycle.
  • Current: MicroK8s tracks upstream and releases beta, RC and final bits the same day as upstream K8s. You can track latest K8s or stick to any release version from 1.10 onwards.
  • Comprehensive: MicroK8s includes a curated collection of manifests for common K8s capabilities and services:
    • Service Mesh: Istio, Linkerd
    • Serverless: Knative
    • Monitoring: Fluentd, Prometheus, Grafana, Metrics
    • Ingress, DNS, Dashboard, Clustering
    • Automatic updates to the latest Kubernetes version
    • GPGPU bindings for AI/ML
    • Cilium, Helm and Kubeflow!

Basic Definitions of Concepts

  • Snaps: Snaps are app packages for desktop, cloud and IoT that are easy to install, secure, cross-platform and dependency-free.
  • kubectl: A command line interface for running commands against a Kubernetes cluster.
  • Container: Containers are the building blocks used to create applications.
  • Pod: A pod is a collection of one or more containers that share storage and network resources. Pods contain the definition of how the containers should be run in Kubernetes. For example, you can define that you need two pods. During execution, if a pod goes down, a new pod is automatically started.
  • Service: Since pods are replaceable, Kubernetes needs an abstraction layer to keep the interaction between the different pods seamless. For example, if a pod dies and a new pod is created, the application users shouldn’t be bothered by it. Services are wrappers around pods that create this level of abstraction (see the sketch after this list).
  • Master: The master coordinates the cluster. It’s the brains of the operation.
  • Node: Nodes are the workers that run the pods.
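
To make these concepts concrete, here is a minimal sketch that ties them together, assuming a working kubectl pointed at a cluster (with MicroK8s, kubectl can simply be an alias for microk8s.kubectl, which we meet below). It creates a deployment whose pod runs an nginx container, wraps the pod in a service and lists the results:

# Create a deployment; Kubernetes starts a pod running an nginx container
kubectl create deployment web --image=nginx

# Wrap the pod in a service, so clients need not care which pod answers
kubectl expose deployment web --port=80

# Inspect the pods and services that were created
kubectl get pods,services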

Prerequisites

To run MicroK8s, you will need a computer with a Linux distribution that supports Snaps such as Ubuntu 🙂 If you have a Windows PC or a Mac, you can use Multipass to get MicroK8s running.
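
For Windows and Mac users, a rough sketch of the Multipass route looks like this (the VM name and sizes are arbitrary choices here, and flag names may differ slightly between Multipass versions):

multipass launch --name microk8s-vm --mem 4G --disk 40G
multipass exec microk8s-vm -- sudo snap install microk8s --classic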

Getting Started

Now that we have the context on what MicroK8s is, and have seen how easy it is to get started, let’s take it for a spin.

1. Installation

sudo snap install microk8s --classic

In under 60 seconds you should have your distribution up and running!

2. Check the status of MicroK8s using the following command:

sudo microk8s.status

Your screen should look something like the figure above. You can see MicroK8s is running, which means you have your Kubernetes going!

What’s Next?

Well, this one-liner setup that MicroK8s makes so simple usually involves a lot of hurdles and complexities if you’re doing it manually. And now that you have your Kubernetes deployment up, that’s just the beginning: to do useful work you need to set up further components depending on your needs. This is where the add-ons come in; MicroK8s comes packed with powerful add-ons which, again, save you from the complexity of setting these up and get you going with a few commands.
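
As a taste of what’s to come, a couple of commonly used add-ons can be enabled with a single command each (a sketch; the available add-ons vary between MicroK8s releases):

# Enable the DNS and dashboard add-ons
sudo microk8s.enable dns dashboard

# Use the bundled kubectl to watch everything come up
sudo microk8s.kubectl get pods --all-namespaces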

Head on to the next blog post to explore a use case and see the power and magic of simplified K8s using MicroK8s!

on September 16, 2019 04:57 PM

September 15, 2019

I have always had a bit of a soft spot for the TWiT team and more specifically Leo Laporte. Years ago I used to co-host FLOSS Weekly on their network and occasionally I pop over to the studio for a natter with Leo.

With ‘People Powered: How communities can supercharge your business, brand, and teams’ coming out, I thought it would be fun to hop over there. Leo graciously agreed and we recorded an episode of their show, Triangulation.

As usual, it was a fun discussion and we got into a number of topics, including:

  • What are communities? Are social media networks communities?
  • Why do people form into communities?
  • What kind of technology should people use to set up a community?
  • How do you prevent toxic communities?
  • Who the hell turned on that fire down there behind us?
  • How should companies handle criticism from a community? Should they censor it?
  • What kind of community should TWiT set up?

Click below to watch the show:

The post Talking About Communities and ‘People Powered’ with Leo Laporte appeared first on Jono Bacon.

on September 15, 2019 03:00 PM

September 13, 2019

S12E23 – Wing Commander

Ubuntu Podcast from the UK LoCo

This week we’ve been playing Pillars of Eternity. We discuss boot speed improvements for Ubuntu 19.10, using LXD to map ports, NVIDIA Prime Renderer switching, changes in the Yaru theme and the Librem 5 shipping (perhaps). We also round up some events and some news from the tech world.

It’s Season 12 Episode 23 of the Ubuntu Podcast! Alan Pope and Mark Johnson are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 13, 2019 02:00 PM

September 12, 2019

A few Kubuntu Members (and Councillors!) met Thursday before KDE Akademy’s end. We discussed the coming release (which will be 19.10) and the upcoming LTS (20.04) – which will be Plasma LTS *and* Qt LTS. This combination will make this LTS super-supported and stable.

We also discussed snaps and when Ubuntu might move to “all snaps all the time”, for applications at least. This may be in our future, so it is worth thinking about and discussing.

Tobias Fischbach came by the BOF and told us about LiMux, which is based on Kubuntu. This has been the official desktop distribution of the city of Munich for the past few years. Now, however, unless the Mayor changes (or changes his mind), the city is moving back to Windows, which will be unfortunate for the city.

Slightly off-topic but relevant: KDE neon will be moving to a 20.04 base soon after release, but they will not stay on the Plasma LTS or Qt LTS. So users who want the very latest in KDE Plasma and applications will continue to have the option of using Neon, while our users, who expect more testing and stability, can choose between the LTS for the ultimate in stability and our interim releases for newer Plasma and applications.

Of course, we continue to ask those of our users who want to help the Kubuntu project to volunteer, especially to test. We’ll soon need testers for the upcoming Eoan, which will become 19.10. Drop into the development IRC channel: #kubuntu-devel on freenode, or subscribe to the Kubuntu Development list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel

on September 12, 2019 03:42 PM

Building collaborative online platforms is hard. To make a platform that is truly compelling, and rewards the right kind of behavior and teamwork, requires a careful balance of effective design, workflow, and understanding the psychology of how people work together.

Jeff Atwood has an enormous amount of experience doing precisely this. Not only was he the co-founder of Stack Overflow (and later Stack Exchange), but he is also the founder of Discourse, an enormously popular Open Source platform for online discussions.

In this episode of Conversations With Bacon we get into the evolution of online communities, how they have grown, and Jeff’s approach to the design and structure of the systems he has worked on. We delve into Slack vs. forums (and where they are most appropriately used), how Discourse has designed a platform where capabilities are earned, different cultural approaches to communication, and much more.

There is so much insight in this discussion from Jeff, and it is well worth a listen.

Oh, and by the way, Jeff endorsed my new book ‘People Powered: How communities can supercharge your business, brand, and teams’. Be sure to check it out!

Listen



The post Jeff Atwood on Discourse, Stack Overflow, and Building Online Community Platforms appeared first on Jono Bacon.

on September 12, 2019 04:31 AM

September 11, 2019

Full Circle Weekly News #144

Full Circle Magazine


Default Ubuntu Yaru Theme Rebased on Adwaita 3.32

https://www.linuxuprising.com/2019/08/default-ubuntu-yaru-theme-rebased-on.html

Announcing the EPEL 8.0 Official Release

http://smoogespace.blogspot.com/2019/08/announcing-epel-80-official-release.html

Mozilla Revamps Firefox’s HTTPS Address Bar Information

https://www.ghacks.net/2019/08/13/mozilla-revamps-firefoxs-https-address-bar-information/

XFCE 4.14 Desktop Officially Released

https://www.omgubuntu.co.uk/2019/08/xfce-4-14

Credits:

Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic

https://creativecommons.org/licenses/by/4.0/

on September 11, 2019 06:08 PM

Both this blog post and the paper it describes are collaborative work led by Charles Kiene with Jialun “Aaron” Jiang.

Introducing new technology into a workplace is often disruptive, but what if your work was also completely mediated by technology? This is exactly the case for the teams of volunteer moderators who work to regulate content and protect online communities from harm. What happens when the social media platforms these communities rely on change completely? How do moderation teams overcome the challenges caused by new technological environments? How do they do so while managing a “brand new” community with tens of thousands of users?

For a new study that will be published in CSCW in November, we interviewed 14 moderators of 8 “subreddit” communities from the social media aggregation and discussion platform Reddit to answer these questions. We chose these communities because each community had recently adopted the real-time chat platform Discord to support real-time chat in their community. This expansion into Discord introduced a range of challenges—especially for the moderation teams of large communities.

We found that moderation teams of large communities improvised their own creative solutions to challenges they faced by building bots on top of Discord’s API. This was not too shocking given that APIs and bots are frequently cited as tools that allow innovation and experimentation when scaling up digital work. What did surprise us, however, was how important moderators’ past experiences were in guiding the way they used bots. In the largest communities that faced the biggest challenges, moderators relied on bots to reproduce the tools they had used on Reddit. The moderators would often go so far as to give their bots the names of moderator tools available on Reddit. Our findings suggest that support for user-driven innovation is important not only in that it allows users to explore new technological possibilities but also in that it allows users to mine their past experiences to introduce old systems into new environments.

What Challenges Emerged in Discord?

Discord’s text channels allow for more natural, in-the-moment conversations than Reddit. In Discord, this social aspect also made moderation work much more difficult. One moderator explained:

“It’s kind of rough because if you miss it, it’s really hard to go back to something that happened eight hours ago and the conversation moved on and be like ‘hey, don’t do that.’ ”

Moderators we spoke to found that the work of managing their communities was made even more difficult by their community’s size:

“On the day to day of running 65,000 people, it’s literally like running a small city…We have people that are actively online and chatting that are larger than a city…So it’s like, that’s a lot to actually keep track of and run and manage.”

The moderators of large communities repeatedly told us that the tools provided to moderators on Discord were insufficient. For example, they pointed out that tools like Discord’s Audit Log were inadequate for keeping track of the tens of thousands of members of their communities. Discord also lacks automated moderation tools like Reddit’s Automoderator and Modmail, leaving moderators on Discord with few tools to scale their work and manage communications with community members.

How Did Moderation Teams Overcome These Challenges?

The moderation teams we talked with adapted to these challenges through innovative uses of Discord’s API toolkit. Like many social media platforms, Discord offers a public API where users can develop apps that interact with the platform through a Discord “bot.” We found that these bots play a critical role in helping moderation teams manage Discord communities with large populations.

Guided by their experience with tools like Automoderator on Reddit, moderators working on Discord built bots with similar functionality to solve the problems associated with scaled content and Discord’s fast-paced chat affordances. These bots would search for regular expressions and URLs that go against the community’s rules:

“It makes it so that rather than having to watch every single channel all of the time for this sort of thing or rely on users to tell us when someone is basically running amuck, posting derogatory terms and terrible things that Discord wouldn’t catch itself…so it makes it that we don’t have to watch every channel.”

Bots were also used to replace Discord’s Audit Log feature with what moderators often referred to as “Mod logs”, another term borrowed from Reddit. Moderators will send commands to a bot like “!warn username” to store information, such as when a member of their community has been warned for breaking a rule, automatically in a private text channel in Discord. This information helps organize information about community members, and it can be instantly recalled with another command to the bot to help inform future moderation actions against other community members.

Finally, moderators also used Discord’s API to develop bots that functioned virtually identically to Reddit’s Modmail tool. Moderators are limited in their availability to answer questions from members of their community, but tools like “Modmail” help moderation teams manage this problem by mediating communication with community members through a bot:

“So instead of having somebody DM a moderator specifically and then having to talk…indirectly with the team, a [text] channel is made for that specific question and everybody can see that and comment on that. And then whoever’s online responds to the community member through the bot, but everybody else is able to see what is being responded.”

The tools created with Discord’s API — customizable automated content moderation, Mod logs, and a Modmail system — all resembled moderation tools on Reddit. They even bear their names! Over and over, we found that moderation teams essentially created and used bots to transform aspects of Discord, like text channels into Mod logs and Mod Mail, to resemble the same tools they were using to moderate their communities on Reddit. 

What Does This Mean for Online Communities?

We think that the experience of the moderators we interviewed points to a potentially important, overlooked source of value for groups navigating technological change: the potent combination of users’ past experience and their ability to redesign and reconfigure their technological environments. Our work suggests the value of innovation platforms like APIs and bots is not only that they allow the discovery of “new” things. Their value also flows from the fact that they allow the re-creation of the things that communities already know can solve their problems and that they already know how to use.


For more details, check out the full 23 page paper. The work will be presented in Austin, Texas at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW’19) in November 2019. The work was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). If you have questions or comments about this study, contact Charles Kiene at ckiene [at] uw [dot] edu.

on September 11, 2019 03:04 AM

September 10, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 595 for the week of September 1 – 7, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on September 10, 2019 12:15 PM
Early boot requires loading and decompressing the kernel and initramfs from the boot storage device. This speed depends on several factors: the speed of loading an image from the boot device, the CPU and memory/cache speed for decompression, and the compression type.

Generally speaking, the smallest (best) compression takes longer to decompress due to the extra complexity of the compression algorithm. Thus we have a trade-off between load time and decompression time.

For slow rotational media (such as a 5400 RPM HDD) with a slow CPU, the loading time can be the dominant factor. For faster devices (such as an SSD) with a slow CPU, decompression time may be the dominant factor. For devices with fast 7200-10000 RPM HDDs and fast CPUs, the time to seek to the data starts to dominate the load time, so different compressed kernel sizes make only a slight difference to overall load time.

The Ubuntu kernel team ran several experiments benchmarking several x86 configurations, using the x86 TSC (Time Stamp Counter) to measure kernel load and decompression time for 6 different compression types: BZIP2, GZIP, LZ4, LZMA, LZO and XZ. BZIP2, LZMA and XZ are slow to decompress, so they were ruled out very quickly from further tests.

In compressed size, GZIP produces the smallest kernel, followed by LZO (~16% larger) and LZ4 (~25% larger). In decompression time, LZ4 is over 7 times faster than GZIP, with LZO ~1.25 times faster than GZIP on x86.

In absolute wall-clock times, the following kernel load and decompress results were observed:

Lenovo x220 laptop, 5400 RPM HDD:
  LZ4 best, 0.24s faster than the GZIP total time of 1.57s

Lenovo x220 laptop, SSD:
  LZ4 best, 0.29s faster than the GZIP total time of 0.87s

Xeon 8 thread desktop with 7200 RPM HDD:
  LZ4 best, 0.05s faster than the GZIP total time of 0.32s

VM on a Xeon 8 thread desktop host with SSD RAID ZFD backing store:
  LZ4 best, 0.05s faster than the GZIP total time of 0.24s

Even with slow spinning media and a slow CPU, the longer load time of the LZ4 kernel is overcome by the far faster decompression time. As media gets faster, the load time difference between GZIP, LZ4 and LZO diminishes and the decompression time becomes the dominant speed factor with LZ4 the clear winner.

For Ubuntu 19.10 Eoan Ermine, LZ4 will be the default kernel compression for x86, ppc64el and s390 kernels, and for the initramfs too.
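
If you are curious which compression your currently running kernel uses, the kernel build configuration normally ships alongside the kernel image (a quick sketch, assuming your distribution installs the config under /boot, as Ubuntu does):

grep 'CONFIG_KERNEL_' /boot/config-$(uname -r)
# e.g. CONFIG_KERNEL_LZ4=y means the kernel image is LZ4-compressed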

References:
Analysis: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/kernel-compression-method.txt
Data: https://kernel.ubuntu.com/~cking/boot-speed-eoan-5.3/boot-speed-compression-5.3-rc4.ods

on September 10, 2019 09:49 AM

September 06, 2019

LXD supports proxy devices, which are a way to proxy connections between the host and containers. This includes TCP, UDP and Unix socket connections, in any combination with each other, in any direction. For example, when someone connects to your host on port 80 (http), that connection can be proxied to a container using a proxy device. In that way, you can isolate your Web server in a LXD container. By using a TCP proxy device, you do not need to resort to iptables rules.

There are 3×3=9 combinations for connections between TCP, UDP and Unix sockets, as follows. Yes, you can proxy, for example, a TCP connection to a Unix socket!

  1. TCP to TCP, for example, to expose a container’s service to the Internet.
  2. TCP to UDP
  3. TCP to Unix socket
  4. UDP to UDP
  5. UDP to TCP
  6. UDP to Unix socket
  7. Unix socket to Unix socket, for example, to share the host’s X11 socket to a container. Or, to make available a host’s Unix socket into the container.
  8. Unix socket to TCP
  9. Unix socket to UDP

Earlier I wrote that you can make a connection in any direction. For example, you can expose the host’s Unix socket for X11 into the container so that the container can run X11 applications and have them appear on the host’s X11 server. Or, the other way round, you can make LXD’s Unix socket on the host available to a container so that you can manage LXD from inside a container.

Note that LXD 3.0.x only supports TCP to TCP proxy devices. Support for UDP and Unix sockets was added in later versions.

Launching a container and setting up a Web server

Let’s launch a container, install a Web server, and then expose the Web server to the local network (or the Internet, if you are using a VPS/Internet server).

First, launch the container.

$ lxc launch ubuntu:18.04 mycontainer
Creating mycontainer
Starting mycontainer

We get a shell into the container, update the package list and install nginx. Finally, verify that nginx is running.

ubuntu@mycontainer:~$ sudo apt update
ubuntu@mycontainer:~$ sudo apt install -y nginx
ubuntu@mycontainer:~$ curl http://localhost
...
 Welcome to nginx! 

Exposing the Web server of a container to the Internet

We log out to the host and verify that there is no Web server already running on port 80. If port 80 is not available on your host, change it to something else, like 8000. Finally, we create the TCP to TCP LXD proxy device.

ubuntu@mycontainer:~$ logout
$ lxc config device add mycontainer myport80 proxy listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
Device myport80 added to mycontainer

The command that creates the proxy device is made of the following components.

  1. lxc config device add, we configure to have a device added,
  2. mycontainer, to the container mycontainer,
  3. myport80, with name myport80,
  4. proxy, a proxy device, we are adding a LXD Proxy Device.
  5. listen=tcp:0.0.0.0:80, we listen (on the host by default) on all network interfaces on TCP port 80.
  6. connect=tcp:127.0.0.1:80, we connect (to the container by default) to the existing TCP port 80 on localhost, which is our nginx.
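
To double-check the new device, you can ask LXD to show the container’s devices; you should see myport80 with its listen and connect addresses:

$ lxc config device show mycontainer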

Note that previously you could specify hostnames when creating LXD proxy devices. This is no longer supported (it has security implications), so you get an error if you specify a hostname such as localhost. This post was primarily written because the top Google result on proxy devices is an old read-only Reddit post that suggests using localhost.

Let’s test that the Web server in the container is accessible on the host. We can use both localhost (or 127.0.0.1) on the host to access the website of the container. We can also use the public IP address of the host (in this case, the LAN IP address) to access the container.

$ curl http://localhost
...
 Welcome to nginx! 
...
$ curl http://192.168.1.100
...
 Welcome to nginx! 
...

Other features of the proxy devices

By default, a proxy device exposes an existing service in the container to the host. If we need to expose an existing service on the host to a container, we would add the parameter bind=container to the proxy device command.
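
For example, here is a sketch of making the host’s LXD Unix socket available inside a container (the connect path shown is for a non-snap LXD install and will differ on snap-based installs; the listen path inside the container is an arbitrary choice):

$ lxc config device add mycontainer lxd-socket proxy bind=container connect=unix:/var/lib/lxd/unix.socket listen=unix:/home/ubuntu/lxd.sock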

You can expose a single webserver to a port on the host. But how do you expose many web servers in containers to the host? You can use a reverse proxy that goes in front of the containers. To retain the remote IP address of the clients visiting the Web servers, you can add proxy_protocol=true to the device to enable support for the PROXY protocol (see the sketch below). Note that you also need to enable the PROXY protocol on the reverse proxy.
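
Enabling the PROXY protocol on the device we created earlier would look something like this (a sketch):

$ lxc config device set mycontainer myport80 proxy_protocol true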

on September 06, 2019 10:08 PM

September 05, 2019

S12E22 – Shadow of the Beast

Ubuntu Podcast from the UK LoCo

This week we’ve been playing with the GPD WIN 2. We interview Sarah Townson about Science Oxford and making fighting robots, bring you some command line love and go over all your feedback.

It’s Season 12 Episode 22 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

Desktop

sudo apt install --install-recommends linux-generic-hwe-18.04 xserver-xorg-hwe-18.04

Server

sudo apt install --install-recommends linux-generic-hwe-18.04

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

  • Image taken from Shadow of the Beast published in 1989 for the Amiga by Psygnosis.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on September 05, 2019 05:00 PM

This week I went to Parliament Square in Edinburgh, where the highest court of the land, the Court of Session, sits. The courtroom viewing gallery was full: concerned citizens there to watch, and journalists enjoying the newly allowed ability to post live from the courtroom. They were waiting for Joanna Cherry, Jo Maugham and the Scottish Government to bring their legal challenge against the UK Government’s shutting down of parliament. The UK Government filed their papers late and didn’t bother completing them, missing out the important signed statement from the Prime Minister saying why he had ordered parliament to be shut. A UK government which claims to care about Scotland but ignores its people, government and courts is not one which can argue it is working for democracy or the union it wants to keep.

Outside, under the statue of Charles II, I spoke to the vigil assembled there in support. I said how democracy can’t be shut down, but it does need the people to pay constant attention and play their part.

Charles II was King of Scots who led Scots armies that were defeated twice by the English Commonwealth army, then busy invading neighbouring countries and claiming that London and its English parliament gave them power over us all. So I went to London to check it out.

In London that parliament is falling down. Scaffolding covers it in an attempt to patch it up. The protesters outside held a rally where politicians from the debates inside wandered out to give updates as they frantically tried to stop an unelected Prime Minister taking away our freedoms and citizenship. Comedian Mitch Benn compèred it, leading the rally and saying he wanted everyone to show their English flags with pride, the People’s Vote campaign trying to reclaim them from the racists; it worked with the crowd and shows how our politics is changing.

Inside the Westminster Parliament compound, past the armed guards and threatening signs of criminal repercussions, the statue of Cromwell stands proud; he invaded Scotland and murdered many Irish, a curious character to celebrate.

The compound is a bubble, the noise of the protesters outside wanting to keep their freedoms drowned out, as we watched a government lose its majority and the confidence on their faces, familiar from years of self-entitlement, vanish.

Pete Wishart, centre front, is an SNP MP who runs the All Party Intellectual Property group; he invited us in for the launch of OpenUK, a new industry body for companies who want to engage with government on open source solutions. Too often government puts out tenders for jobs and won’t talk to providers of open source solutions because we’re too small and the names are obscure. Too often, when governments do implement open source and free software setups, they get shut down because someone with more money comes along and offers their setup and some jobs. I’ve seen that in Nigeria, I’ve seen it happen in Scotland, I’ve seen it happen in Germany. The power and financial structures that proprietary software creates allow for the corruption of the best solutions to a problem.

Pete, a Scottish independence supporter, spoke of the need for Britain to have the best intellectual property rules in the world, to a group who want to change how intellectual property influences us, while democracy falls down around us.

The protesters marched over the river, closing down central London in the name of freedom, but in the bubble of Westminster we sat sipping wine, looking on.

The winners of the UK Open Source Awards were celebrated and photos taken: (previously) unsung heroes working to keep the free operating system running, opening up how plant phenomics works, and improving healthcare in ways that cannot be done when closed.

Getting government engagement with free software is crucial to improving how our society works, but politicians are far too easily swayed by big branding and big-name budgets rather than making sure barriers are reduced to be invisible.

The crumbling of one democracy, alongside a celebration and the opening of a project to bring business to those who still have little interest in it. How do we get government to prefer openness over barriers? This place will need to be rebuilt before that can happen.

Onwards to Milan for KDE Akademy.

 

on September 05, 2019 04:38 PM

One last post from Summer Camp this year (it’s been a busy month!) – this one about the “Data Duplication Village” at DEF CON. In addition to talks, the Data Duplication Village offers an opportunity to get your hands on the highest quality hacker bits – that is, copies of somewhere between 15 and 18TB of data spread across 3 6TB hard drives.

I’d been curious about the DDV for a couple of years, but never participated before. I decided to change that when I saw 6TB Ironwolf NAS drives on sale a few weeks before DEF CON. I wasn’t quite sure what to expect, as the description provided by the DDV is a little bit sparse:

6TB drive 1-3: All past convention videos that DT can find - essentially a clone of infocon.org - building on last year’s collection and re-squished with brand new codecs for your size constraining pleasures.

6TB drive 2-3: freerainbowtables hash tables (lanman, mysqlsha1, NTLM) and word lists (1-2)

6TB drive 3-3: freerainbowtables GSM A5/1, md5 hash tables, and software (2-2)

Drive 1-3 seems pretty straightforward, but I spent a lot of time debating if the other two were worth getting. (And, to be honest, I think they’re cool to have, but not sure if I’ll really make good use of them.)

I want to thank the operators of the DDV for their efforts, and also my wife for dropping off and picking up my drives while I was otherwise occupied (work obligations).

It’s worth noting that, as far as I can tell, all of the contents of the drives are available as torrents, so you can always get the data that way. On the other hand, torrenting 15.07 TiB (16189363384 KiB, to be precise) might not be your cup of tea, especially if you have a mere 75 Mbps internet connection like mine.
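
As a back-of-the-envelope check (assuming the link runs flat out, which it won’t): 16189363384 KiB is about 1.33 × 10^14 bits, and at 75 Mbps that works out to roughly 1.77 million seconds, or around 20 days of sustained downloading.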

If you want a detailed list of the contents of each drive (along with sha256sums), I’ve posted them to Github. If you choose to participate next year, note that your drives must be 7200 RPM SATA drives (apparently several people had to be turned away due to 5400 RPM drives, which slow down the entire cloning process).

Drive 1

Drive 1 really does seem to be a copy of infocon.org: it’s got dozens of conferences archived on it, adding up to a total of 132,253 files. Just to give you a taste, here’s a high-level index:

./cons
./cons/2600
./cons/44Con
./cons/ACK Security Conference
./cons/ACoD
./cons/AIDE
./cons/ANYCon
./cons/ATT&CKcon
./cons/AVTokyo
./cons/Android Security Symposium
./cons/ArchC0N
./cons/Area41
./cons/AthCon
./cons/AtlSecCon
./cons/AusCERT
./cons/BalCCon
./cons/Black Alps
./cons/Black Hat
./cons/BloomCON
./cons/Blue Hat
./cons/BodyHacking
./cons/Bornhack
./cons/BotConf
./cons/BrrCon
./cons/BruCON
./cons/CERIAS
./cons/CODE BLUE
./cons/COIS
./cons/CONFidence
./cons/COUNTERMEASURE
./cons/CYBERWARCON
./cons/CackalackyCon
./cons/CactusCon
./cons/CarolinaCon
./cons/Chaos Computer Club - Camp
./cons/Chaos Computer Club - Congress
./cons/Chaos Computer Club - CryptoCon
./cons/Chaos Computer Club - Easterhegg
./cons/Chaos Computer Club - SigInt
./cons/CharruaCon
./cons/CircleCityCon
./cons/ConVerge
./cons/CornCon
./cons/CrikeyCon
./cons/CyCon
./cons/CypherCon
./cons/DEF CON
./cons/DakotaCon
./cons/DeepSec
./cons/DefCamp
./cons/DerbyCon
./cons/DevSecCon
./cons/Disobey
./cons/DojoCon
./cons/DragonJAR
./cons/Ekoparty
./cons/Electromagnetic Field
./cons/FOSDEM
./cons/FSec
./cons/GreHack
./cons/GrrCON
./cons/HCPP
./cons/HITCON
./cons/Hack In Paris
./cons/Hack In The Box
./cons/Hack In The Random
./cons/Hack.lu
./cons/Hack3rcon
./cons/HackInBo
./cons/HackWest
./cons/Hackaday
./cons/Hacker Hotel
./cons/Hackers 2 Hackers Conference
./cons/Hackers At Large
./cons/Hackfest
./cons/Hacking At Random
./cons/Hackito Ergo Sum
./cons/Hacks In Taiwan
./cons/Hacktivity
./cons/Hash Days
./cons/HouSecCon
./cons/ICANN
./cons/IEEE Security and Privacy
./cons/IETF
./cons/IRISSCERT
./cons/Infiltrate
./cons/InfoWarCon
./cons/Insomnihack
./cons/KazHackStan
./cons/KiwiCon
./cons/LASCON
./cons/LASER
./cons/LangSec
./cons/LayerOne
./cons/LevelUp
./cons/LocoMocoSec
./cons/Louisville Metro InfoSec
./cons/MISP Summit
./cons/NANOG
./cons/NoNameCon
./cons/NolaCon
./cons/NorthSec
./cons/NotACon
./cons/NotPinkCon
./cons/Nuit Du Hack
./cons/NullCon
./cons/O'Reilly Security
./cons/OISF
./cons/OPCDE
./cons/OURSA
./cons/OWASP
./cons/Observe Hack Make
./cons/OffensiveCon
./cons/OzSecCon
./cons/PETS
./cons/PH-Neutral
./cons/Pacific Hackers
./cons/PasswordsCon
./cons/PhreakNIC
./cons/Positive Hack Days
./cons/Privacy Camp
./cons/QuahogCon
./cons/REcon
./cons/ROMHACK
./cons/RSA
./cons/RVAsec
./cons/Real World Crypto
./cons/RightsCon
./cons/RoadSec
./cons/Rooted CON
./cons/Rubicon
./cons/RuhrSec
./cons/RuxCon
./cons/S4
./cons/SANS
./cons/SEC-T
./cons/SHA2017
./cons/SIRAcon
./cons/SOURCE
./cons/SaintCon
./cons/SecTor
./cons/SecureWV
./cons/Securi-Tay
./cons/Security BSides
./cons/Security Fest
./cons/Security Onion
./cons/Security PWNing
./cons/Shakacon
./cons/ShellCon
./cons/ShmooCon
./cons/ShowMeCon
./cons/SkyDogCon
./cons/SteelCon
./cons/SummerCon
./cons/SyScan
./cons/THREAT CON
./cons/TROOPERS
./cons/TakeDownCon
./cons/Texas Cyber Summit
./cons/TheIACR
./cons/TheLongCon
./cons/TheSAS
./cons/Thotcon
./cons/Toorcon
./cons/TrustyCon
./cons/USENIX ATC
./cons/USENIX Enigma
./cons/USENIX Security
./cons/USENIX WOOT
./cons/Unrestcon
./cons/Virus Bulletin
./cons/WAHCKon
./cons/What The Hack
./cons/Wild West Hackin Fest
./cons/You Shot The Sheriff
./cons/Zero Day Con
./cons/ZeroNights
./cons/c0c0n
./cons/eth0
./cons/hardware.io
./cons/outerz0ne
./cons/r00tz Asylum
./cons/r2con
./cons/rootc0n
./cons/t2 infosec
./cons/x33fcon
./documentaries
./documentaries/Hacker Movies
./documentaries/Hacking Documentaries
./documentaries/Other
./documentaries/Pirate Documentary
./documentaries/Tech Documentary
./documentaries/Tools
./infocon.jpg
./mirrors
./mirrors/cryptome.org-July-2019.rar
./mirrors/gutenberg-15-July-2019.net.au.rar
./rainbow tables
./rainbow tables/## READ ME RAINBOW TABLES ##.txt
./rainbow tables/rainbow table software
./skills
./skills/Lock Picking
./skills/MAKE

Drive 2

Drive 2 contains the promised rainbow tables (lanman, ntlm, and mysqlsha1) as well as a bunch of wordlists. I actually wonder how a 128GB wordlist would compare to applying rules to something like rockyou – bigger is not always better, and often, you want high yield unless you’re trying to crack something obscure.

./lanman
./lanman/lm_all-space#1-7_0
./lanman/lm_all-space#1-7_1
./lanman/lm_all-space#1-7_2
./lanman/lm_all-space#1-7_3
./lanman/lm_lm-frt-cp437-850#1-7_0
./lanman/lm_lm-frt-cp437-850#1-7_1
./lanman/lm_lm-frt-cp437-850#1-7_2
./lanman/lm_lm-frt-cp437-850#1-7_3
./mysqlsha1
./mysqlsha1/mysqlsha1_loweralpha#1-10_0
./mysqlsha1/mysqlsha1_loweralpha#1-10_1
./mysqlsha1/mysqlsha1_loweralpha#1-10_2
./mysqlsha1/mysqlsha1_loweralpha#1-10_3
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_0
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_16
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_24
./mysqlsha1/mysqlsha1_loweralpha-numeric#1-10_8
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-8_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-space#1-9_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-7_3
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_0
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_1
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_2
./mysqlsha1/mysqlsha1_loweralpha-numeric-symbol32-space#1-8_3
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_0
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_1
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_2
./mysqlsha1/mysqlsha1_loweralpha-space#1-9_3
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_0
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_1
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_2
./mysqlsha1/mysqlsha1_mixalpha-numeric-symbol32-space#1-7_3
./mysqlsha1/mysqlsha1_numeric#1-12_0
./mysqlsha1/mysqlsha1_numeric#1-12_1
./mysqlsha1/mysqlsha1_numeric#1-12_2
./mysqlsha1/mysqlsha1_numeric#1-12_3
./mysqlsha1/rainbow table software
./ntlm
./ntlm/ntlm_alpha-space#1-9_0
./ntlm/ntlm_alpha-space#1-9_1
./ntlm/ntlm_alpha-space#1-9_2
./ntlm/ntlm_alpha-space#1-9_3
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_0
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_1
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_2
./ntlm/ntlm_hybrid2(alpha#1-1,loweralpha#5-5,loweralpha-numeric#2-2,numeric#1-3)#0-0_3
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_0
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_1
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_2
./ntlm/ntlm_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_3
./ntlm/ntlm_loweralpha-numeric#1-10_0
./ntlm/ntlm_loweralpha-numeric#1-10_16
./ntlm/ntlm_loweralpha-numeric#1-10_24
./ntlm/ntlm_loweralpha-numeric#1-10_8
./ntlm/ntlm_loweralpha-numeric-space#1-8_0
./ntlm/ntlm_loweralpha-numeric-space#1-8_1
./ntlm/ntlm_loweralpha-numeric-space#1-8_2
./ntlm/ntlm_loweralpha-numeric-space#1-8_3
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_0
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_1
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_2
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-7_3
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_0
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_1
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_2
./ntlm/ntlm_loweralpha-numeric-symbol32-space#1-8_3
./ntlm/ntlm_loweralpha-space#1-9_0
./ntlm/ntlm_loweralpha-space#1-9_1
./ntlm/ntlm_loweralpha-space#1-9_2
./ntlm/ntlm_loweralpha-space#1-9_3
./ntlm/ntlm_mixalpha-numeric#1-8_0
./ntlm/ntlm_mixalpha-numeric#1-8_1
./ntlm/ntlm_mixalpha-numeric#1-8_2
./ntlm/ntlm_mixalpha-numeric#1-8_3
./ntlm/ntlm_mixalpha-numeric#1-9_0
./ntlm/ntlm_mixalpha-numeric#1-9_16
./ntlm/ntlm_mixalpha-numeric#1-9_32
./ntlm/ntlm_mixalpha-numeric#1-9_48
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_0
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_1
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_2
./ntlm/ntlm_mixalpha-numeric-all-space#1-7_3
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_0
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_16
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_24
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_32
./ntlm/ntlm_mixalpha-numeric-all-space#1-8_8
./ntlm/ntlm_mixalpha-numeric-space#1-7_0
./ntlm/ntlm_mixalpha-numeric-space#1-7_1
./ntlm/ntlm_mixalpha-numeric-space#1-7_2
./ntlm/ntlm_mixalpha-numeric-space#1-7_3
./ntlm/rainbow table software
./rainbow table software
./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
./rainbow table software/converti2_0.3_src.7z
./rainbow table software/converti2_0.3_win32_mingw.7z
./rainbow table software/converti2_0.3_win32_vc.7z
./rainbow table software/converti2_0.3_win64_mingw.7z
./rainbow table software/converti2_0.3_win64_vc.7z
./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
./rainbow table software/rcracki_mt_0.7.0_src.7z
./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
./rainbow table software/rti2formatspec.pdf
./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
./rainbow table software/rti2rto_0.3_src.7z
./rainbow table software/rti2rto_0.3_win32_mingw.7z
./rainbow table software/rti2rto_0.3_win64_mingw.7z
./word lists
./word lists/SecLists-master.rar
./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
./word lists/Word Lists archive - infocon.org.torrent
./word lists/crackstation-human-only.txt.rar
./word lists/crackstation.realuniq.rar
./word lists/fbnames.rar
./word lists/human0id word lists.rar
./word lists/openlibrary_wordlist.rar
./word lists/pwgen.rar
./word lists/pwned-passwords-2.0.txt.rar
./word lists/pwned-passwords-ordered-2.0.rar
./word lists/xsukax 128GB word list all 2017 Oct.7z
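
On the wordlist-versus-rules question above, the comparison I have in mind would look something like this in hashcat (a sketch: the hash file and extracted wordlist names are hypothetical, -m 1000 selects NTLM, and best64.rule ships with hashcat):

# Straight attack with the huge wordlist
hashcat -m 1000 -a 0 hashes.txt xsukax-128gb-wordlist.txt

# Smaller wordlist amplified with mangling rules
hashcat -m 1000 -a 0 hashes.txt rockyou.txt -r rules/best64.rule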

Drive 3

Drive 3 contains more rainbow tables, this time for A5-1 (GSM encryption), and extensive tables for MD5. It appears to contain the same software and wordlists as Drive 2.

./A51
./A51 rainbow tables - infocon.org.torrent
./A51/Decoding-Gsm.pdf
./A51/a51_table_100.dlt
./A51/a51_table_108.dlt
./A51/a51_table_116.dlt
./A51/a51_table_124.dlt
./A51/a51_table_132.dlt
./A51/a51_table_140.dlt
./A51/a51_table_148.dlt
./A51/a51_table_156.dlt
./A51/a51_table_164.dlt
./A51/a51_table_172.dlt
./A51/a51_table_180.dlt
./A51/a51_table_188.dlt
./A51/a51_table_196.dlt
./A51/a51_table_204.dlt
./A51/a51_table_212.dlt
./A51/a51_table_220.dlt
./A51/a51_table_230.dlt
./A51/a51_table_238.dlt
./A51/a51_table_250.dlt
./A51/a51_table_260.dlt
./A51/a51_table_268.dlt
./A51/a51_table_276.dlt
./A51/a51_table_292.dlt
./A51/a51_table_324.dlt
./A51/a51_table_332.dlt
./A51/a51_table_340.dlt
./A51/a51_table_348.dlt
./A51/a51_table_356.dlt
./A51/a51_table_364.dlt
./A51/a51_table_372.dlt
./A51/a51_table_380.dlt
./A51/a51_table_388.dlt
./A51/a51_table_396.dlt
./A51/a51_table_404.dlt
./A51/a51_table_412.dlt
./A51/a51_table_420.dlt
./A51/a51_table_428.dlt
./A51/a51_table_436.dlt
./A51/a51_table_492.dlt
./A51/a51_table_500.dlt
./A51/rainbow table software
./LANMAN rainbow tables - infocon.org.torrent
./MD5 rainbow tables - infocon.org.torrent
./MySQL SHA-1 rainbow tables - infocon.org.torrent
./NTLM rainbow tables - infocon.org.torrent
./md5
./md5/md5_alpha-space#1-9_0
./md5/md5_alpha-space#1-9_1
./md5/md5_alpha-space#1-9_2
./md5/md5_alpha-space#1-9_3
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_0
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_1
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_2
./md5/md5_hybrid2(loweralpha#7-7,numeric#1-3)#0-0_3
./md5/md5_loweralpha#1-10_0
./md5/md5_loweralpha#1-10_1
./md5/md5_loweralpha#1-10_2
./md5/md5_loweralpha#1-10_3
./md5/md5_loweralpha-numeric#1-10_0
./md5/md5_loweralpha-numeric#1-10_16
./md5/md5_loweralpha-numeric#1-10_24
./md5/md5_loweralpha-numeric#1-10_8
./md5/md5_loweralpha-numeric-space#1-8_0
./md5/md5_loweralpha-numeric-space#1-8_1
./md5/md5_loweralpha-numeric-space#1-8_2
./md5/md5_loweralpha-numeric-space#1-8_3
./md5/md5_loweralpha-numeric-space#1-9_0
./md5/md5_loweralpha-numeric-space#1-9_1
./md5/md5_loweralpha-numeric-space#1-9_2
./md5/md5_loweralpha-numeric-space#1-9_3
./md5/md5_loweralpha-numeric-symbol32-space#1-7_0
./md5/md5_loweralpha-numeric-symbol32-space#1-7_1
./md5/md5_loweralpha-numeric-symbol32-space#1-7_2
./md5/md5_loweralpha-numeric-symbol32-space#1-7_3
./md5/md5_loweralpha-numeric-symbol32-space#1-8_0
./md5/md5_loweralpha-numeric-symbol32-space#1-8_1
./md5/md5_loweralpha-numeric-symbol32-space#1-8_2
./md5/md5_loweralpha-numeric-symbol32-space#1-8_3
./md5/md5_loweralpha-space#1-9_0
./md5/md5_loweralpha-space#1-9_1
./md5/md5_loweralpha-space#1-9_2
./md5/md5_loweralpha-space#1-9_3
./md5/md5_mixalpha-numeric#1-9_0
./md5/md5_mixalpha-numeric#1-9_0-complete
./md5/md5_mixalpha-numeric#1-9_16
./md5/md5_mixalpha-numeric#1-9_32
./md5/md5_mixalpha-numeric#1-9_48
./md5/md5_mixalpha-numeric-all-space#1-7_0
./md5/md5_mixalpha-numeric-all-space#1-7_1
./md5/md5_mixalpha-numeric-all-space#1-7_2
./md5/md5_mixalpha-numeric-all-space#1-7_3
./md5/md5_mixalpha-numeric-all-space#1-8_0
./md5/md5_mixalpha-numeric-all-space#1-8_16
./md5/md5_mixalpha-numeric-all-space#1-8_24
./md5/md5_mixalpha-numeric-all-space#1-8_32
./md5/md5_mixalpha-numeric-all-space#1-8_8
./md5/md5_mixalpha-numeric-space#1-7_0
./md5/md5_mixalpha-numeric-space#1-7_1
./md5/md5_mixalpha-numeric-space#1-7_2
./md5/md5_mixalpha-numeric-space#1-7_3
./md5/md5_mixalpha-numeric-space#1-8_0
./md5/md5_mixalpha-numeric-space#1-8_1
./md5/md5_mixalpha-numeric-space#1-8_2
./md5/md5_mixalpha-numeric-space#1-8_3
./md5/md5_numeric#1-14_0
./md5/md5_numeric#1-14_1
./md5/md5_numeric#1-14_2
./md5/md5_numeric#1-14_3
./rainbow table software
./rainbow table software/Free Rainbow Tables » Distributed Rainbow Table Generation » LM, NTLM, MD5, SHA1, HALFLMCHALL, MSCACHE.mht
./rainbow table software/converti2_0.3_src.7z
./rainbow table software/converti2_0.3_win32_mingw.7z
./rainbow table software/converti2_0.3_win32_vc.7z
./rainbow table software/converti2_0.3_win64_mingw.7z
./rainbow table software/converti2_0.3_win64_vc.7z
./rainbow table software/rcracki_mt_0.7.0_linux_x86_64.7z
./rainbow table software/rcracki_mt_0.7.0_src.7z
./rainbow table software/rcracki_mt_0.7.0_win32_mingw.7z
./rainbow table software/rcracki_mt_0.7.0_win32_vc.7z
./rainbow table software/rti2formatspec.pdf
./rainbow table software/rti2rto_0.3_beta2_win32_vc.7z
./rainbow table software/rti2rto_0.3_beta2_win64_vc.7z
./rainbow table software/rti2rto_0.3_src.7z
./rainbow table software/rti2rto_0.3_win32_mingw.7z
./rainbow table software/rti2rto_0.3_win64_mingw.7z
./word lists
./word lists/SecLists-master.rar
./word lists/WPA-PSK WORDLIST 3 Final (13 GB).rar
./word lists/Word Lists archive - infocon.org.torrent
./word lists/crackstation-human-only.txt.rar
./word lists/crackstation.realuniq.rar
./word lists/fbnames.rar
./word lists/human0id word lists.rar
./word lists/openlibrary_wordlist.rar
./word lists/pwgen.rar
./word lists/pwned-passwords-2.0.txt.rar
./word lists/pwned-passwords-ordered-2.0.rar
./word lists/xsukax 128GB word list all 2017 Oct.7z
on September 05, 2019 07:00 AM

September 04, 2019

I recently attended GUADEC 2019 in Thessaloniki, Greece. This is the seventh GUADEC I've attended, which came as a bit of a surprise when I added it up! It was great to catch up with people in person (some again, and some new!), and as always the face-to-face communication makes future online interactions that much easier.

Photo by Cassidy James Blaede

This year we had seven people from the Canonical Ubuntu desktop team in attendance. Many other companies and projects had representatives (including Collabora, Elementary OS, Endless, Igalia, Purism, RedHat, SUSE and System76). I think this was the most positive GUADEC I've attended, with people from all these organizations actively leading discussions and a general consideration of each other as we try to maximise where we can collaborate.

Of course, the community is much bigger than a group of companies. In particular it was great to meet Carlo and Frederik from the Yaru theme project. They've been doing amazing work on a new theme for Ubuntu and it will be great to see it land in a future release.

In the annual report there was a nice surprise; I made the most merge requests this year! I think this is a reflection on the step change in productivity in GNOME since switching to GitLab. So now I have a challenge to maintain that for next year...


If you were unable to attend you can watch all the talks on YouTube. There are two talks I'd like to highlight. The first is by Britt Yazel from the Engagement team, in which he talks about Setting a Positive Voice for GNOME. He talked about how open source communities have a lot of passion - and that has good and bad points. The Internet being what it is, the trolls can take over, but we can counter that by highlighting positive messages and showing the people behind GNOME. One of the examples showed how Ubuntu and GNOME have been posting positive messages on their channels about each other, which is great!


The second talk was by Georges Basile Stavracas Neto, who talked About Maintainers and Contributors. In it he talked about the difficulties of being a maintainer and the impact of negative feedback. It resonated with Britt's talk in that we need to highlight that maintainers are people who are doing their best! As stated in the GNOME Code of Conduct - Assume people mean well (they really do!).


Georges and I are co-maintainers of Settings and we had a productive GUADEC and managed to go through and review all the open merge requests.

There were a number of discussions around Snaps in GNOME. There seemed a lot more interest in Snap technology compared to last GUADEC and it was great to be able to help people better understand them. Work included discussions about portals, better methods of getting the Freedesktop and GNOME stacks snapped, Snap integration in Settings and the GNOME publisher name in the Snap Store.

I hope to be back next year!
on September 04, 2019 11:34 PM

September 03, 2019

Monitoring Dorian

Stephen Michael Kellat

Currently the hurricane known as Dorian is pounding the daylights out of The Bahamas. The Hurricane Watch Net is up and the Hurricane VoIP Net is up. Presently members of the public can monitor audio from the Hurricane VoIP net by putting http://74.208.24.77:8000 into a suitable streaming media player such as VLC. Updates are generally on the hour. Members of Ubuntu Hams looking to follow matters on EchoLink should utilize the *WX5FWD* and *KC4QLP-C* conferences.

The storm is moving fairly slowly. This event is likely to continue for a while.

on September 03, 2019 02:29 AM

September 02, 2019

Suspending Patreon

Sam Hewitt

I originally wrote a version of this post on Patreon itself but suspending my page hides my posts on there. Oops.

There’s been a lot of change for me over the past year or two, in real life and as a member of the free software community (like my recent joining of Purism), that has shifted my focus away from why I originally launched a Patreon, so I felt it was time to deactivate my creator page.

The support I got on Patreon for my humble projects and community participation over the many months my page was active will always be much appreciated! Having a Patreon (or some other kind of small recurring financial support service) as a free software contributor fueled not only my ability to contribute but also my enthusiasm for free software. Support for small independent free software developers, designers, contributors and projects from folks in the community (not just through things like Patreon) goes a long way, and I look forward to shifting into a more supportive role myself.

I’m going forward with gratitude to the community, so much thanks to all the folks who were my patrons. Go forth and spread the love! ❤️

on September 02, 2019 06:00 PM

Ah, spring time at last. The last month I caught up a bit with my Debian packaging work after the Buster freeze, release and subsequent DebConf. Still a bit to catch up on (mostly kpmcore and partitionmanager that’s waiting on new kdelibs and a few bugs). Other than that I made two new videos, and I’m busy with renovations at home this week so my home office is packed up and in the garage. I’m hoping that it will be done towards the end of next week, until then I’ll have little screen time for anything that’s not work work.

2019-08-01: Review package hipercontracer (1.4.4-1) (mentors.debian.net request) (needs some work).

2019-08-01: Upload package bundlewrap (3.6.2-1) to debian unstable.

2019-08-01: Upload package gnome-shell-extension-dash-to-panel (20-1) to debian unstable.

2019-08-01: Accept MR!2 for gamemode, for new upstream version (1.4-1).

2019-08-02: Upload package gnome-shell-extension-workspaces-to-dock (51-1) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-hide-activities (0.00~git20131024.1.6574986-2) to debian unstable.

2019-08-02: Upload package gnome-shell-extension-trash (0.2.0-git20161122.ad29112-2) to debian unstable.

2019-08-04: Upload package toot (0.22.0-1) to debian unstable.

2019-08-05: Upload package gamemode (gamemode-1.4.1+git20190722.4ecac89-1) to debian unstable.

2019-08-05: Upload package calamares-settings-debian (10.0.24-2) to debian unstable.

2019-08-05: Upload package python3-flask-restful (0.3.7-3) to debian unstable.

2019-08-05: Upload package python3-aniso8601 (7.0.0-2) to debian unstable.

2019-08-06: Upload package gamemode (1.5~git20190722.4ecac89-1) to debian unstable.

2019-08-06: Sponsor package assaultcube (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-06: Sponsor package assaultcube-data (1.2.0.2.1-1) for debian unstable (mentors.debian.org request).

2019-08-07: Request more info on Debian bug #825185 (“Please which tasks should be installed at a default installation of the blend”).

2019-08-07: Close debian bug #689022 in desktop-base (“lxde: Debian wallpaper distorted on 4:3 monitor”).

2019-08-07: Close debian bug #680583 in desktop-base (“please demote librsvg2-common to Recommends”).

2019-08-07: Comment on debian bug #931875 in gnome-shell-extension-multi-monitors (“Error loading extension”) to temporarily avoid autorm.

2019-08-07: File bug (multimedia-devel)

2019-08-07: Upload package python3-grapefruit (0.1~a3+dfsg-7) to debian unstable (Closes: #926414).

2019-08-07: Comment on debian bug #933997 in gamemode (“gamemode isn’t automatically activated for rise of the tomb raider”).

2019-08-07: Sponsor package assaultcube-data (1.2.0.2.1-2) for debian unstable (e-mail request).

2019-08-08: Upload package calamares (3.2.12-1) to debian unstable.

2019-08-08: Close debian bug #32673 in aalib (“open /dev/vcsa* write-only”).

2019-08-08: Upload package tanglet (1.5.4-1) to debian unstable.

2019-08-08: Upload package tmux-theme-jimeh (0+git20190430-1b1b809-1) to debian unstable (Closes: #933222).

2019-08-08: Close debian bug #927219 (“amdgpu graphics fail to be configured”).

2019-08-08: Close debian bugs #861065 and #861067 (For creating nextstep task and live media).

2019-08-10: Sponsor package scons (3.1.1-1) for debian unstable (mentors.debian.org request) (Closes RFS: #932817).

2019-08-10: Sponsor package fractgen (2.1.7-1) for debian unstable (mentors.debian.net request).

2019-08-10: Sponsor package bitwise (0.33-1) for debian unstable (mentors.debian.net request). (Closes RFS: #934022).

2019-08-10: Review package python-pyspike (0.6.0-1) (mentors.debian.net request) (needs some additional work).

2019-08-10: Upload package connectagram (1.2.10-1) to debian unstable.

2019-08-11: Review package bitwise (0.40-1) (mentors.debian.net request) (need some further work).

2019-08-11: Sponsor package sane-backends (1.0.28-1~experimental1) to debian experimental (mentors.debian.net request).

2019-08-11: Review package hcloud-python (1.4.0-1) (mentors.debian.net).

2019-08-13: Review package bitwise (0.40-1) (e-mail request) (needs some further work).

2019-08-15: Sponsor package bitwise (0.40-1) for debian unstable (email request).

2019-08-19: Upload package calamares-settings-debian (10.0.20-1+deb10u1) to debian buster (CVE #2019-13179).

2019-08-19: Upload package gnome-shell-extension-dash-to-panel (21-1) to debian unstable.

2019-08-19: Upload package flask-restful (0.3.7-4) to debian unstable.

2019-08-20: Upload package python3-grapefruit (0.1~a3+dfsg-8) to debian unstable (Closes: #934599).

2019-08-20: Sponsor package runescape (0.6-1) for debian unstable (mentors.debian.net request).

2019-08-20: Review package ukui-menu (1.1.12-1) (needs some more work) (mentors.debian.net request).

2019-08-20: File ITP #935178 for bcachefs-tools.

2019-08-21: Fix two typos in bcachefs-tools (Github bcachefs-tools PR: #20).

2019-08-25: Published Debian Package of the Day video #60: 5 Fonts (highvoltage.tv / YouTube).

2019-08-26: Upload new upstream release of speedtest-cli (2.1.2-1) to debian unstable (Closes: #934768).

2019-08-26: Upload new package gnome-shell-extension-draw-on-your-screen to NEW for debian unstable. (ITP: #925518)

2019-08-27: File upstream bug for btfs so that the python2 dependency can be dropped from the Debian package (BTFS: #53).

2019-08-28: Published Debian Package Management #4: Maintainer Scripts (highvoltage.tv / YouTube).

2019-08-28: File upstream feature request in Calamares unpackfs module to help speed up installations (Calamares: #1229).

2019-08-28: File upstream request at smlinux/rtl8723de driver for license clarification (RTL8723DE: #49).

on September 02, 2019 11:35 AM

September 01, 2019

Media Operations Proposal

Stephen Michael Kellat

I have been working on a project but need to go back to the drawing board. A goal is to avoid recreating Fernwood 2 Night, but I think we can manage that. We’re not re-creating Live From Here either. After all, we are not trying to create a comedy but rather an actual news program.

As a digression, I will point out that with Hurricane Dorian still a threat the National Hurricane Center has reactivated their semi-experimental, still irregular podcast at https://www.nhc.noaa.gov/audio/ with a feed address of https://www.nhc.noaa.gov/audio/podcast.xml. A very good source for podcast discovery remains gpodder.net and I encourage its use. Unfortunately the discovery platform still needs a maintainer.

on September 01, 2019 11:09 AM

August 30, 2019

Example of website that only supports TLS v1.0, which is rejected by the client

Overview

TLS v1.3 is the latest standard for secure communication over the internet. It is widely supported by desktops, servers and mobile phones. Recently Ubuntu 18.04 LTS received an OpenSSL 1.1.1 update, bringing the ability to potentially establish TLS v1.3 connections on the latest Ubuntu LTS release. The Qualys SSL Labs Pulse report shows more than 15% adoption of TLS v1.3. It really is time to migrate from TLS v1.0 and TLS v1.1.

As announced on the 15th of October 2018, Apple, Google, and Microsoft will disable TLS v1.0 and TLS v1.1 support by default and thus require TLS v1.2 to be supported by all clients and servers. Similarly, Ubuntu 20.04 LTS will require TLS v1.2 as the minimum TLS version.

To prepare for the move to TLS v1.2, it is a good idea to disable TLS v1.0 and TLS v1.1 on your local systems and start observing and reporting any websites, systems and applications that do not support TLS v1.2.

How to disable TLS v1.0 and TLS v1.1 in Google Chrome on Ubuntu

  1. Create policy directory
    sudo mkdir -p /etc/opt/chrome/policies/managed
  2. Create /etc/opt/chrome/policies/managed/mintlsver.json with
    {
        "SSLVersionMin" : "tls1.2"
    }

How to disable TLS v1.0 and TLS v1.1 in Firefox on Ubuntu

  1. Navigate to about:config in the URL bar
  2. Search for security.tls.version.min setting
  3. Set it to 3, which stands for a minimum of TLS v1.2
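
If you prefer to preset this across profiles, the same preference can be sketched as a line in a user.js file in the Firefox profile directory (the profile path varies per installation):

    // minimum TLS version: the value 3 corresponds to TLS v1.2
    user_pref("security.tls.version.min", 3);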

How to disable TLS v1.0 and TLS v1.1 in OpenSSL

  1. Edit /etc/ssl/openssl.cnf
  2. After oid_section stanza add
    # System default
    openssl_conf = default_conf
  3. At the end of the file add
    [default_conf]
    ssl_conf = ssl_sect

    [ssl_sect]
    system_default = system_default_sect

    [system_default_sect]
    MinProtocol = TLSv1.2
    CipherString = DEFAULT@SECLEVEL=2
  4.  Save the file

How to disable TLS v1.0 and TLS v1.1 in GnuTLS

  1. Create config directory
    sudo mkdir -p /etc/gnutls/
  2. Create /etc/gnutls/default-priorities with
    SYSTEM=SECURE192:-VERS-ALL:+VERS-TLS1.3:+VERS-TLS1.2 
After performing the above tasks, most common applications will use TLS v1.2+.
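
To check how a given server behaves, here is a quick sketch using openssl s_client (example.com is a placeholder; run it from a machine without the stricter local defaults applied, otherwise the client itself will refuse the old protocols):

    # if this handshake succeeds, the server still accepts legacy TLS v1.0
    openssl s_client -connect example.com:443 -tls1 < /dev/null
    # this one should succeed on any server that is ready for the change
    openssl s_client -connect example.com:443 -tls1_2 < /dev/null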

I have set these defaults on my systems, and I occasionally hit websites that only support TLS v1.0 and I report them. Have you found any websites and systems you use that do not support TLS v1.2 yet?
on August 30, 2019 03:42 PM

cloud-init is a tool to help you customize cloud images. When you launch a cloud image, you can provide it with your cloud-init instructions, and the cloud image will execute them. In that way, you can start with a generic cloud image and, as soon as it has booted up, it will be configured to your liking.

In LXD, there are two main repositories of container images,

  1. the «ubuntu:» remote, a repository with Ubuntu container images
  2. the «images:» remote, a repository with container images for many distributions.

Until recently, only container images in the «ubuntu:» remote had support for cloud-init.

Now, container images in the «images:» remote have both a traditional version, and a cloud-init version.

Let’s have a look. We search for the Debian 10 container images. The format of the name of the non-cloud-init containers is debian/10. The cloud-init images have cloud appended to the name, for example, debian/10/cloud. These are the names for the default architecture, and in my case my host runs amd64. You will notice the rest of the supported architectures; these do not run (at least not out of the box) on your host because LXD’s system containers are not virtual machines (no hardware virtualization).

$ lxc image list images:debian/10
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
|              ALIAS               | FINGERPRINT  | PUBLIC |              DESCRIPTION               |  ARCH   |   SIZE   |          UPLOAD DATE          |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10 (7 more)               | b1da98aa0523 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 93.21MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/arm64 (3 more)         | 061bf8e54195 | yes    | Debian buster arm64 (20190829_05:24)   | aarch64 | 89.75MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/armel (3 more)         | f45b56483bcc | yes    | Debian buster armel (20190829_05:53)   | armv7l  | 87.75MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/armhf (3 more)         | 8b3223cb7c36 | yes    | Debian buster armhf (20190829_05:55)   | armv7l  | 88.35MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud (3 more)         | df912811b3c3 | yes    | Debian buster amd64 (20190829_05:24)   | x86_64  | 107.57MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/arm64 (1 more)   | c75bae6267e6 | yes    | Debian buster arm64 (20190829_05:29)   | aarch64 | 103.49MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/armel (1 more)   | a9939000f769 | yes    | Debian buster armel (20190829_06:33)   | armv7l  | 101.43MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/armhf (1 more)   | 8840418a2b4f | yes    | Debian buster armhf (20190829_05:53)   | armv7l  | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/i386 (1 more)    | 79ebaba3b386 | yes    | Debian buster i386 (20190829_05:24)    | i686    | 108.85MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/ppc64el (1 more) | dcbfee6585b3 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 109.43MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/cloud/s390x (1 more)   | f2d6a7310ae1 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 101.93MB | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/i386 (3 more)          | f0bc9e2c267d | yes    | Debian buster i386 (20190829_05:24)    | i686    | 94.41MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/ppc64el (3 more)       | fcf56d73d764 | yes    | Debian buster ppc64el (20190829_05:24) | ppc64le | 94.57MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+
| debian/10/s390x (3 more)         | 3481aeba0e06 | yes    | Debian buster s390x (20190829_05:24)   | s390x   | 88.02MB  | Aug 29, 2019 at 12:00am (UTC) |
+----------------------------------+--------------+--------+----------------------------------------+---------+----------+-------------------------------+

I have written a post about using cloud-init with LXD containers.

Another use of cloud-init is to statically set the IP address of the container.
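
As a minimal sketch of the workflow (the package list here is just an example; user.user-data is the standard LXD configuration key that cloud-init reads):

$ cat > user-data.yaml <<'EOF'
#cloud-config
package_update: true
packages:
  - htop
EOF
$ lxc launch images:debian/10/cloud mycontainer --config=user.user-data="$(cat user-data.yaml)"

The traditional (non-cloud) variant of the image would silently ignore these instructions, since it does not ship cloud-init.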

Summary

The container images in the images: remote now have support for cloud-init. Instead of adding cloud-init support to the existing images, there are new images with /cloud appended to their names that have cloud-init support.

on August 30, 2019 02:39 PM

August 28, 2019

Our first gold sponsor of this event is ANSOL (Associação Nacional para o Software Livre), the Portuguese national association for free and open source software.

ANSOL was officially founded in 2002 with the goal of promoting, sharing, developing and researching free software and its social, political, philosophical, cultural, technical and scientific impacts on society. They work closely with policy makers, companies and other free software promoters to ensure more people can learn about free and open source software.

Thanks to them, we have received significant support to sustain our event and our journey to give you one of the best open source experiences in Sintra.

Want to jump onboard as well?
Visit our Call for Sponsor post for more information.

on August 28, 2019 09:35 PM

August 27, 2019

One of the most important steps of the design process is “usability testing”: it gives designers the chance to put themselves in other people’s shoes by gathering direct feedback from people in real time to determine how usable an interface may be. This is just as important for the free and open source software development process as it is for any other.

Though free software projects often lack sufficient resources for other more extensive testing methods, there are some basic techniques that can be done by non-experts with just a bit of planning and time—anyone can do this!

Free Software Usability

Perhaps notoriously, free software interfaces have long been unapproachable; how many times have you heard: “this software is great…once you figure it out.” The steep learning curve of many free software applications is not representative of how usable or useful they are. More often than not it’s indicative of free software’s relative complexity, which can be attributed to the focus on baking features into a piece of software without regard for how usable they are.

A screenshot of the calibre e-book management app's poor UI

Free software developers are often making applications for themselves and their peers, and the step in development where you’d figure out how easy it is for other people to use—testing—gets skipped. In other words, as the author of the application you are of course familiar with how the user interface is laid out and how to access all the functionality: you wrote it. A new user would not be, and may need time or knowledge to discover the functionality. This is where usability testing can come in, to help you figure out how easy your software is to use.

What is “Usability Testing”?

For those unfamiliar with the concept, usability testing is a set of methods in user-centric design meant to evaluate a product or application’s capacity to meet its intended purpose. Careful observation of people while they use your product, to see if it matches what it was intended for, is the foundation of usability testing.

The great thing is that you don’t need years of experience to run some basic usability tests: you need only sit down with a small group of people, get them to use your software, and listen and observe.

What Usability Testing is Not

Gathering people’s opinions (solicited or otherwise) on a product is not usability testing; that’s market research. Usability testing isn’t about querying people’s already-formed thoughts on a product or design, it’s about determining whether they understand a given function of a product or its purpose by having them use said product while you gather feedback.

Usability is not subjective, it is concrete and measurable and therefore testable.

Preparing a Usability Test

To start, pick a series of tasks within the application that you want to test, ones you believe would be straightforward for the average person to complete. For example: “Set the desktop background” in a photos app, “Save a file with a new name” in a text editor, “Compose a new email” in an email client, etc. It is easiest to pick tasks that correspond to functions of your application that are (intended to be) evident in the user interface and not something more abstract. Remember: you are testing the user interface, not the participant’s own ability to do a task.

You should also pick tasks that you would expect to take no more than a few minutes each; if participants fail to complete a task in a timely manner, that is okay and is useful information.

Create Relatable Scenarios

To help would-be participants of your test, draft simple hypothetical scenarios or stories around these tasks which they can empathize with to make them more comfortable. It is very important in these scenarios that you do not use the same phrasing as present in the user interface or reference the interface as it would be too influential on the testers’ process. For instance, if you were testing whether an email client’s compose action was discoverable, you would not say:

Compose an email to your aunt using the new message button.

This gives too much away about the interface as it would prompt people to look for the button. The scenario should be more general and have aspects that everyone can relate to:

It’s your aunt’s birthday and you want to send her a well-wishes message. Please compose a new email wishing her a happy birthday.

These “relatable” aspects give the participant something to latch onto, and they make the goal of the task clearer by allowing them to insert themselves into the scenario.

Finding Participants

Speaking of participants, you need at least five people for your test. After five there are diminishing returns: the more people you add, the less you learn, as you’ll begin to see things repeat. This article goes into more detail, but to quote its summary:

Elaborate usability tests are a waste of resources. The best results come from testing no more than 5 users and running as many small tests as you can afford.

This is not to say that you stop after a single test with five individuals; it’s that repetitive tests with small groups allow you to uncover problems that you can address and retest efficiently, given limited resources.

Also, the more random the selection group is, the better the results of your test will be—“random” as in grabbing passers-by in the hallway or on the street. As a bonus, it’s also best to offer some sort of small gratuity for participating, to motivate people to sign up.

Warming Up Participants

It’s also important not to have the participants jump into a test cold. Give participants some background and context for the tests and brief them on what you are trying to accomplish. Make it absolutely clear that the goal is to test the interface, not them or their abilities; it is very important to stress to the participants that their completion of a task is not the purpose of the test but determining the usability of the product is. Inability to complete a task is a reflection of the design, not of their abilities.

Preliminary Data Gathering

Before testing, gather important demographic information from your participants, things like age, gender (how they identify), etc. and gauge their level of familiarity with or knowledge of the product category, such as: “how familiar are you with Linux/GNOME/free software on a scale from 1-5?” All this will be helpful as you break down the test results for analysis to see trends or patterns across test results.

Running the Test

Present the scenarios for each task one at a time and separately, so as not to overload the participants. Encourage participants to give vocal feedback as they do the test, and to be as frank and critical as possible to make the results more valuable, assuring them your feelings will not be hurt by doing so.

During the task you must be attentive and observe several things at once: the routes they take through your app, what they do or say during or about the process, their body language, and the problems they encounter—this is where extensive note-taking comes in.

No Hints!

Do not interfere in the task at hand by giving hints or directly helping the participant. While the correct action may be obvious or apparent to you, the value is in learning what isn’t obvious to other people.

If participants ask for help it is best to respond with guiding questions; if a participant gets stuck, prompt them to continue with questions such as “what do you think you should do?” or “where do you think you should click?” but if they choose not to finish or are unable to, that is okay.

Be Watchful

The vast majority of stumbling blocks are found by watching the body language of people during testing. Watch for signs of confusion or frustration—frowning, squinting, sighing, hunched shoulders, etc.—when a participant is testing your product and make note of it, but do not make assumptions about why they became frustrated or confused: ask them why.

It is perfectly alright to pause the test when you see signs of confusion or frustration and say:

I noticed you seemed confused/frustrated, care to tell me what was going through your mind when you were [the specific thing they were doing]?

It’s here where you will learn why someone got lost in your application and that insight is valuable.

Take Notes

For the love of GNU, pay close attention to the participants and take notes. Closely note how difficult a participant finds a task, what their body language is while they do the task, how long it takes them, and the problems and criticisms participants have. Having participants think aloud, or periodically asking them how they feel about aspects of the task, is extremely beneficial for your note-taking as well.

To supplement your later analysis, you may make use of screen and/or voice recording during testing, but only if your participants are comfortable with it and give informed consent. Do not rely on direct recording methods, as they can often be distracting or disconcerting; you want people to be relaxed during testing so they can focus and not be wary of the recording device.

Concluding the Test

When the tasks are all complete you can choose to debrief participants about the full purpose of the test and answer any outstanding questions they may have. If all goes well you will have some data that, after further analysis, can provide insight for the development of your application and for addressing design problems.

Collating Results

Usability testing data is extremely useful to user experience and interaction designers as it can inform our decision-making over interface layouts, interaction models, etc. and help us solve problems that get uncovered.

Regardless of whether we conducted the testing and research ourselves, it’s important that the data gathered is clearly presented. Graphs, charts and spreadsheets are incredibly useful in your write-up for communicating the breakdown of test results.

Heat Maps

It helps to visualize issues with tasks in a heat map: an illustration that captures the perceived difficulty of a given task for each participant by colour-coding the results in a table.

Example Heat Map

The above is a non-specific example that illustrates how the data can be represented: green for successful completion of the task, yellow for moderate difficulty, red for a lot of difficulty, and black for an incomplete. From this heat map, we can immediately see patterns that we can address by looking deeper into the results; we can see how “Task 1” and “Task 6” presented a lot of difficulty for most of the participants, and that requires further investigation.

More Usable Free Software

Conducting usability testing on free software shouldn’t be an afterthought of the development process; rather, it should be a deeply integrated component. However, the reality is that the resources of free software projects (including large ones like GNOME) are quite limited, so one of my goals with this post is to empower you to do more usability testing on your own—you don’t have to be an expert—and to help out and contribute to larger software projects to make up for the limited resources.

Usability Testing GNOME

Since I work on the design of GNOME, I would be more than happy to help you facilitate usability testing for GNOME applications and software. So do not hesitate to reach out if you would like me to review your plans for usability testing or to share results of any testing that you do.


Further Reading

If you’re interested in more resources or information about usability, I can recommend some additional reading:

on August 27, 2019 11:00 PM

polkit-qt-1 0.113.0 Released

Jonathan Riddell

Some 5 years after the previous release KDE has made a new release of polkit-qt-1, versioned 0.113.0.

Polkit (formerly PolicyKit) is a component for controlling system-wide privileges in Unix-like operating systems. It provides an organized way for non-privileged processes to communicate with privileged ones.   Polkit has an authorization API intended to be used by privileged programs (“MECHANISMS”) offering service to unprivileged programs (“CLIENTS”).

Polkit Qt provides Qt bindings and UI.

This release was done ahead of additions to KIO to support Polkit.

SHA-256:
5b866a2954ef10ffb66156e2fe8ad0321b5528a8df2e4a91b02f5041ce5563a7
GPG fingerprint:
D81C0CB38EB725EF6691C385BB463350D6EF31EF

Notable changes since 0.112.0
———————————————————
– Add support for passing details to polkit
– Remove support for Qt4

https://download.kde.org/stable/polkit-qt-1/

Thanks to Heiko Becker for his work on this release.

Full changelog

  •  Bump version for release
  •  Don’t set version numbers as INT cache entries
  •  Move cmake_minimum_required to the top of CMakeLists.txt
  •  Remove support for Qt4
  •  Remove unneded documentation
  •  authority: add support for passing details to polkit
    https://phabricator.kde.org/D18845
  •  Fix typo in comments
  •  polkitqtlistener.cpp – pedantic
  •  Fix build with -DBUILD_TEST=TRUE
  •  Allow compilation with older polkit versions
  •  Fix compilation with Qt5.6
  •  Drop use of deprecated Qt functions REVIEW: 126747
  •  Add wrapper for polkit_system_bus_name_get_user_sync
  •  Fix QDBusArgument assertion
  • do not use global static systembus instance

 

on August 27, 2019 07:09 PM

man-db 2.8.7

Colin Watson

I’ve released man-db 2.8.7 (announcement, NEWS), and uploaded it to Debian unstable.

There are a few things of note that I wanted to talk about here. Firstly, I made some further improvements to the seccomp sandbox originally introduced in 2.8.0. I do still think it’s correct to try to confine subprocesses this way as a defence against malicious documents, but it’s also been a pretty rough ride for some users, especially those who use various kinds of VPNs or antivirus programs that install themselves using /etc/ld.so.preload and cause other programs to perform additional system calls. As well as a few specific tweaks, a recent discussion on LWN reminded me that it would be better to make seccomp return EPERM rather than raising SIGSYS, since that’s easier to handle gracefully: in particular, it fixes an odd corner case related to glibc’s nscd handling.

Secondly, there was a build failure on macOS that took a while to figure out, not least because I don’t have a macOS test system myself. In 2.8.6 I tried to make life easier for people on this platform with a CFLAGS tweak, but I made it a bit too general and accidentally took away configure’s ability to detect undefined symbols properly, which caused very confusing failures. More importantly, I hadn’t really thought through why this change was necessary and whether it was a good idea. man-db uses private shared libraries to keep its executable size down, and it passes -no-undefined to libtool to declare that those shared libraries have no undefined symbols after linking, which is necessary to build shared libraries on some platforms. But the CFLAGS tweak above directly contradicts this! So, instead of playing core wars with my own build system, I did some refactoring so that the assertion that man-db’s shared libraries have no undefined symbols after linking is actually true: this involved moving decompression code out of libman, and arranging for the code in libmandb to take the database path as a parameter rather than as a global variable (something I’ve meant to fix for ages anyway; 252d7cbc23, 036aa910ea, a97d977b0b). Lesson: don’t make build system changes you don’t quite understand.

on August 27, 2019 05:55 AM

When find can't find

Santiago Zarate

If you happen to be using GNU find in the deadly combination with a directory that is a symlink (you just don’t know that yet), you will face the hard truth that running:

find /path/to/directory -type f

Will return zero results, nada, nichts, meiyou, which is annoying.

where is it!

this will make you question your life decisions, and your knowledge of the tools that you use daily, only to find out that the directory is actually a symlink :). With find’s default behaviour the starting-point symlink is not dereferenced, so there are no regular files under it to report.

So next time you find yourself using find and it returns nothing, but you are sure that your syntax is correct and you get no errors, try adding -follow or use the -L option:

find -L /path/to/directory/with/symlink -type f

This will do what you want :)
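
GNU find also has -H if you only want to dereference the symlinks given on the command line, and not those encountered during the traversal:

find -H /path/to/directory/with/symlink -type f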

Here it is!

on August 27, 2019 12:00 AM

August 23, 2019

First of all: thanks Dennis Marttinen and Lucas Käldström for helping write this up.

It’s been only a bit over a month since Weave Ignite was announced to the world (others talked about it as well). Time to catch up on what has happened in the meantime; the team around it has been busy.

If you’re new to Weave Ignite, it’s an open source VM manager with a container UX and built-in GitOps management (check out the docs). It’s built on top of Firecracker, which has proven able to run 4000 micro-VMs on the same host. Time to give it a go, right?
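
If you want to give it a spin, here is a minimal sketch along the lines of the project README (image name and sizing are examples; Ignite needs to run as root):

# boot a micro-VM with a container-like UX
ignite run weaveworks/ignite-ubuntu --cpus 2 --memory 1GB --ssh --name my-vm
# list the running VMs and log in
ignite ps
ignite ssh my-vm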

Since the initial announcement, 43 people have contributed to the project on GitHub, and 20 have had commits merged into the repo. Thanks to every one of you:

@BenTheElder, @DieterReuter, @PatrickLang, @Strum355, @akshaychhajed, @alex-leonhardt, @alexeldeib, @alexellis, @andrelop, @andrewrynhard, @aojea, @arun-gupta, @asaintsever, @chanwit, @curx, @danielcb, @dholbach, @hbokh, @jiangpengcheng, @junaid18183, @kim3z, @liwei, @luxas, @najeal, @neith00, @paavan98pm, @patrobinson, @pditommaso, @praseodym, @prologic, @robertojrojas, @rugwirobaker, @saiyam1814, @seeekr, @sftim, @silenceshell, @srinathgs, @stealthybox, @taqtiqa-mark, @twelho, @tyhal, @vielmetti, @webwurst

Since then the team got four releases out the door. Let’s go through the big changes one by one and why they matter:

  • Lots of bug fixes, enhanced stability, more tests and more and better docs (find them here)
  • Support for Persistent Storage, ARM64, manifest directories, the improved v1alpha2 API, both declarative and imperative VM management
  • More pre-built VM images (currently there are images based on Ubuntu, CentOS, Amazon Linux, Alpine and OpenSUSE + a kubeadm image)
  • ignited was introduced to move Ignite towards a client-server model and improve VM lifecycle management
  • The Docker-like UX has been further improved, now also featuring ‘ignite exec’
  • Read-write GitOps support, now status updates/changes (e.g. IP addresses) are pushed back to the repository

It’s impressive how such a young project got all of this together in such a short amount of time (only around a month).


We also have been busy growing our community. As mentioned above, documentation was an important part of this: API docs, a very solid CLI reference and short tutorials to get you started were the key.

We also started a mailing list and regular community Ignite developer meetings. These happen Mondays at 15:00 UTC (what’s UTC?) and are meant to get people together who are generally interested in Ignite and want to learn more and potentially help out as well. Project authors Lucas Käldström and Dennis Marttinen are always very approachable, but here especially they made a point of introducing everyone to the goals behind Ignite, its roadmap and the currently ongoing work.

We’ve recorded all of the meetings. Meeting Notes are available too (Please join weaveworks-ignite on Google Groups to get write access).

Here’s what we covered so far:

  • 1st meeting:
    • Team introductions
    • Demo of Ignite
    • Roadmap overview
    • Current work-in-progress
  • 2nd meeting:
    • What’s coming in v0.5.0?
    • Roadmap for v0.6.0
    • Integration with Kubernetes through Virtual Kubelet
    • How to contribute to Ignite
  • 3rd meeting
    • v0.5.0 and v0.5.1 released
    • GitOps Toolkit is being split out – what is it for?
    • Footloose integration – what is it about?
    • Coming up: containerd support
    • Discussion of application logging

And this is where you come in… our next meeting is Monday, 26th August 2019 15:00 UTC and we have an action-packed agenda as well:

  • containerd integration
  • CNI integration
  • The GitOps Toolkit
  • Code walk-through / project architecture
  • Discussion: what would you like to see in Ignite? What do/could you use it for?
  • Releasing v0.6.0
  • <you can still add your own agenda item here>

We are very excited to see the direction Ignite is taking, particularly because it contributes a lot to the ecosystem. How?

We realised that all the GitOps functionality of Ignite would be useful to the rest of the world, so we split it out into the GitOps Toolkit.

The team is also working on containerd integration, so that you don’t need Docker installed to run Ignite VMs. Why does Ignite require a container runtime to be present? Because Ignite integrates with the container world, so you can seamlessly run both VMs and containers next to each other. containerd is super lightweight, as is Firecracker, so pairing them with Ignite makes a lot of sense!

If the above sounds exciting to you and your project, please share the news and meet up with us on Monday. We look forward to seeing you there!

But that’s not all. This is just where we felt Ignite could make a difference. If you have your own ideas, own use-cases, issues or challenges, please let us know and become part of the team – even if it’s just by giving us feedback! If you’d like to get inspiration about what others are doing with Ignite, or add your own project, check out the awesome-ignite page.

If you are interested in helping out, that’s fantastic! The meeting should be interesting for you too. If you can’t wait, check out our contributors guide and check out our open issues too. If you are interested in writing docs, adding comments, testing, filing issues or getting your feet into the project, we’re all there and happy to help.

We’ll have more news on Ignite soon. But for today’s update we are signing off with a bitter-sweet announcement: Lucas and Dennis will from September step down as project maintainers in order to embark on a new adventure: Aalto University in Helsinki! They have started something very remarkable and we could not be happier for them. Watch this space for more news.

If you’d like to join the journey, you can do so here:

on August 23, 2019 05:49 PM

Description

Apache Tapestry uses HMACs to verify the integrity of objects stored on the client side. This was added to address the Java deserialization vulnerability disclosed in CVE-2014-1972. In the fix for the previous vulnerability, the HMACs were compared by string comparison, which is known to be vulnerable to timing attacks.

Affected Versions

  • Apache Tapestry 5.3.6 through current releases.

Mitigation

No new release of Tapestry has occurred since the issue was reported. Affected organizations may want to consider locally applying commit d3928ad44714b949d247af2652c84dae3c27e1b1.

Timeline

  • 2019-03-12: Issue discovered.
  • 2019-03-13: Issue reported to security@apache.org.
  • 2019-03-29: Pinged thread to ask for update.
  • 2019-04-19: Fix committed.
  • 2019-04-23: Asked about release timeline, response “in the upcoming months”
  • 2019-05-28: Pinging again about release.
  • 2019-06-24: Asked again, asked for CVE number assigned. No update on timeline.
  • 2019-08-22: Disclosure posted.

Credit

This vulnerability was discovered by David Tomaschik of the Google Security Team.

on August 23, 2019 07:00 AM

August 20, 2019

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor the Debian LTS project through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some people will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Click on the picture to see it full size

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.


on August 20, 2019 10:45 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2h extra hours June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August.).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).

Evolution of the situation

July was different than other months. First, some people have been on actual vacations, while 4 of the above 13 contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. Also a new promotional video about Debian LTS, aimed at potential sponsors, was shown there for the first time.

DebConf19 was also a success in respect to on-boarding of new contributors, we’ve found three potential new contributors, one of them is already in training.

The security tracker (now for oldoldstable as Buster has been released and thus Jessie became oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on August 20, 2019 09:38 AM

August 19, 2019

This iteration was the Web & design team’s first iteration of the second half of our roadmap cycle, after returning from the mid-cycle roadmap sprint in Toronto 2 weeks ago.

Priorities have moved around a bit since before the cycle, and we made a good start on the new priorities for the next 3 months. 

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

We launched three takeovers; “A guide to developing Android apps on Ubuntu”, “Build the data centre of the future” and “Creating accurate AI models with data”.

Ubuntu.com Vanilla conversion 

We’ve made good progress on converting ubuntu.com to version 2.3.0 of our Vanilla CSS framework.

EKS redesign

We’ve been working on a new design for our EKS images page.

Canonical.com design evolution

New designs and prototypes are coming along well for redesigned partners and careers sections on canonical.com.

Vanilla squad

The Vanilla squad works on constantly improving the code and design patterns in our Vanilla CSS framework, which we use across all our websites.

Ubuntu SSO refresh

The squad continues to make good progress on adding Vanilla styling to all pages on login.ubuntu.com.

Colour theming best practices

We investigated some best practices for the use of colours in themes.

Improvements to Vanilla documentation

We made a number of improvements to the documentation of Vanilla framework.

Base

The Base squad supports the other squads with shared modules, development tooling and hosting infrastructure across the board.

certification.ubuntu.com

We continued to progress with the back-end rebuild and re-hosting of certification.ubuntu.com, which should be released next iteration.

Blog improvements

We investigated ways to improve the performance of our blog implementations (most importantly ubuntu.com/blog). We will be releasing new versions of the blog module over the next few weeks which should bring significant improvements.

MAAS

The MAAS squad works on the browser-based UI for MAAS, as well as the maas.io website.

“Real world MAAS”

We’ve been working on a new section for the maas.io homepage about “Real world MAAS”, which will be released in the coming days. As MAAS is used at enterprises of various scales, we’re providing grouped, curated content for three of the main audiences.

UI settings updates

We’ve made a number of user experience updates to the settings page in the MAAS UI, including significant speed improvements to the Users page in conjunction with the work of moving the settings part of the application to React (from Django). We have completed the move of the General, Network, and Storage tabs, and have redesigned the experience for DHCP snippets and Scripts. 

Redesigned DHCP snippets tab

JAAS

The JAAS squad works on jaas.ai, the Juju GUI, and upcoming projects to support Juju.

This iteration we set up a bare-bones scaffold of our new JAAS Dashboard app using React and Redux.

Snaps

The Snap squad works on improvements to snapcraft.io.

Updating snapcraft.io to Vanilla 2.3.0

We continued work updating snapcraft.io to the latest version of Vanilla.

The post Design and Web team summary – 16 August 2019 appeared first on Ubuntu Blog.

on August 19, 2019 09:39 AM
  • Replicating Particle Collisions at CERN with Kubeflow – this post is interesting for a number of reasons. First, it shows how Kubeflow delivers on the promise of portability and why that matters to CERN. Second, it reiterates that using Kubeflow adds negligible performance overhead as compared to other methods for training. Finally, the post shows another example of how images and deep learning can replace more computationally expensive methods for modelling real-word behaviour. This is the future, today.
  • AI vs. Machine Learning: The Devil Is in the Details – Need a refresh on what the difference is between artificial intelligence, machine learning and deep learning? Canonical has done a webinar on this very topic, but sometimes a different set of words are useful, so read this article for a refresh. You’ll also learn about a different set of use cases for how AI is changing the world – from Netflix to Amazon to video surveillance and traffic analysis and predictions.
  • Making Deep Learning User-Friendly, Possible? – The world has changed a lot in the 18 months since this article was published. One of the key takeaways from this article is a list of features to compare several standalone deep learning tools. The exciting news? The output of these tools can be used with Kubeflow to accelerate Model Training. There are several broader questions as well – How can companies leverage the advancements being made within the AI community? Are better tools the right answer? Finding a partner may be the right answer.
  • Interview spotlight: One of the fathers of AI is worried about its future – Yoshua Bengio is famous for championing deep learning, one of the most powerful technologies in AI. Read this transcript to understand some of his concerns with the direction of AI, as well as the exciting developments in AI. Research that is extending deep learning into things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

The post Issue #2019.08.19 – Kubeflow at CERN appeared first on Ubuntu Blog.

on August 19, 2019 08:00 AM

August 15, 2019

APT Patterns

Julian Andres Klode

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you to find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach. I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into a APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of config-files, or ~m for automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as introducing an argument list, no matter whether ?foo supports arguments or not.

apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge; patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit to them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns matches.

?true

Selects all objects.
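
As a quick sketch of how these combine (using ?installed and ?automatic from the next section, and assuming an apt new enough to contain this merge), the following would select everything that is installed but not marked automatic:

apt list '?and(?installed,?not(?automatic))'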

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples

apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left
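
apt list '?upgradable'

List all packages that have a newer candidate version - an extra example of mine, assuming apt list accepts patterns like the commands above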

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): currently, if apt cannot find a package with the specified name using the exact name or the regex, it falls back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fall back to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or add some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

on August 15, 2019 01:55 PM

August 13, 2019

Whenever a process accesses a virtual address where there isn't currently a physical page mapped into its address space, a page fault occurs. This causes an interrupt so that the kernel can handle the page fault.

A minor page fault occurs when the kernel can successfully map a physically resident page for the faulted user-space virtual address (for example, accessing a memory resident page that is already shared by other processes).   Major page faults occur when accessing a page that has been swapped out or accessing a file backed memory mapped page that is not resident in memory.

Page faults incur latency in the running of a program, major faults especially so because of the delay of loading pages in from a storage device.
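
As a rough illustration, GNU time's verbose mode reports both counters for a single command (assuming /usr/bin/time is GNU time rather than a shell builtin):

/usr/bin/time -v ls /

The "Major (requiring I/O) page faults" and "Minor (reclaiming a frame) page faults" lines in its output show the counts for that run.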

The faultstat tool allows one to easily monitor page fault activity and find the most active page faulting processes. Running faultstat with no options will dump the page fault statistics of all processes sorted in major+minor page fault order.

Faultstat also has a "top"-like mode; invoking it with the -T option will display the top page faulting processes, again in major+minor page fault order.
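
For example, a one-shot dump of all processes, followed by the interactive "top" mode:

faultstat
faultstat -T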


The Major and Minor columns show the respective major and minor page faults. The +Major and +Minor columns show the recent increase in page faults. The Swap column shows the swap size of the process in pages.

Pressing the 's' key will cycle through the sort orders. Pressing the 'a' key will add an arrow annotation showing page fault growth. The 't' key will toggle between the cumulative major/minor page fault totals and the current change in major/minor faults.

The faultstat tool has just landed in Ubuntu Eoan and can also be installed as a snap. The source is available on GitHub.
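
Assuming the deb package and the snap both use the tool's name (an assumption on my part), installation would look like:

sudo apt install faultstat
sudo snap install faultstat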

on August 13, 2019 11:14 AM

August 09, 2019

As you may have been made aware by some news articles, blogs, and social media posts, a vulnerability in the KDE Plasma desktop was recently disclosed publicly. This occurred without the KDE developers/security team or distributions being informed of the discovered vulnerability, or being given any advance notice of the disclosure.

KDE have responded quickly and responsibly and have now issued an advisory with a ‘fix’ [1].

Kubuntu is now working on applying this fix to our packages.

Packages in the Ubuntu main archive are having updates prepared [2], which will require a period of review before being released.

Consequently, if users wish to get fixed packages sooner, packages with the patches applied have been made available in our PPAs.

Users of Xenial (out of support, but we have provided a patched package anyway), Bionic and Disco can get the updates as follows:

If you have our backports PPA [3] enabled:

The fixed packages are now in that PPA, so all that is required is to update your system by your normal preferred method.
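
For example, with apt:

sudo apt update
sudo apt full-upgrade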

If you do NOT have our backports PPA enabled:

The fixed packages are provided in our UPDATES PPA [4].

sudo add-apt-repository ppa:kubuntu-ppa/ppa
sudo apt update
sudo apt full-upgrade

As a precaution, to ensure that the update is picked up by all KDE processes, users should at the very least log out and back in again after updating, to restart their entire desktop session.

Regards

Kubuntu Team

[1] – https://kde.org/info/security/advisory-20190807-1.txt
[2] – https://bugs.launchpad.net/ubuntu/+source/kconfig/+bug/1839432
[3] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports
[4] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/ppa

on August 09, 2019 03:29 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.3 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]
on August 09, 2019 12:20 AM

August 08, 2019

Ubuntu 18.04.3 LTS has just been released. As usual with LTS point releases, the main changes are a refreshed hardware enablement stack (newer versions of the kernel, xorg & drivers) and a number of bug and security fixes.

For the Desktop, newer stable versions of GNOME components have been included, as well as a new feature: Livepatch desktop integration.

For those who aren’t familiar, Livepatch is a service which applies critical kernel patches without rebooting. The service is available as part of an Ubuntu Advantage subscription, but is also made available for free to Ubuntu users (up to 3 machines). Fixes are downloaded and applied to your machine automatically to help reduce downtime and keep your Ubuntu LTS systems secure and compliant. Livepatch is available for your servers and your desktops.

Andrea Azzarone worked on desktop integration for the service, and his work has finally landed in 18.04 LTS.

To enable Livepatch you just need an Ubuntu One account. The setup is part of the first login, or can be done later from the corresponding software-properties tab.

Here is a simple walkthrough showing the steps and the result:

The wizard displayed during the first login includes a Livepatch step that will help you get signed in to Ubuntu One and enable Livepatch:

Clicking the ‘Set Up’ button invites you to enter your Ubuntu One information (or to create an account), and that’s all that is needed.

The new desktop integration includes an indicator showing the current status and notifications telling when fixes have been applied.

You can also get more details on the corresponding CVEs from the Livepatch configuration UI.

You can always hide the indicator using the toggle if you prefer to keep your top panel clean and simple.
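
If you prefer the command line (on servers, for example), the same service can be enabled with the canonical-livepatch snap, using a token tied to your Ubuntu One account (commands assumed from the Livepatch documentation, with <token> as a placeholder):

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <token>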

Enjoy the increased security in between reboots!

on August 08, 2019 07:32 PM

August 07, 2019

We’ve been hard at work optimizing Xfce’s screensaver to give users the best possible lock and screensaver experience in Xfce. With 0.1.6 and 0.1.7, we’ve dropped even more legacy code while implementing a long-requested feature: per-screensaver configuration!

What’s New?

New Features

  • Added support for on-screen keyboards. This option adds a button to the login window to show and hide the keyboard at the bottom of the screen.
  • Added per-screensaver configuration. The available options are pulled from the xscreensaver theme file and are stored via Xfconf.
  • Improved background drawing when using 2x scaling.

Bug Fixes

  • Fixed flickering within the password dialog (0.1.6)
  • Fixed various display issues with the password dialog, all themes should now render xfce4-screensaver identically to lightdm-gtk-greeter (0.1.6)
  • Fixed confusion between screensaver and lock timeouts (Xfce #15726)
  • Removed reference to pkg-config (.pc) file (0.1.6) (Xfce #15597)

Code Cleanup

  • Cleaned up kbd-indicator logic (0.1.6)
  • Consolidated debug function calls (0.1.6)
  • Dropped libXxf86 dependency (MATE Screensaver #199)
  • Dropped lots of unused or unneeded code, significantly streamlining the codebase
  • Migrated xfce4-screensaver-command to GDBus
  • Moved job theme processing out of gs-manager (0.1.6)
  • Removed full-screen window shaking on failed login
  • Simplified handling of user preferences (0.1.6)
  • Simplified lock screen and screensaver activation code

Translation Updates

Armenian (Armenia), Belarusian, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Czech, Danish, Dutch, Finnish, French, Galician, German, Hebrew, Hungarian, Italian, Lithuanian, Malay, Norwegian Bokmål, Polish, Portuguese, Portuguese (Brazil), Russian, Spanish, Turkish

Downloads

Source tarball (md5, sha1, sha256)

Xfce Screensaver is included in Xubuntu 19.10 “Eoan Ermine”, installed with the xfce4-screensaver package.
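
On an existing Xubuntu or Xfce system it can be installed directly:

sudo apt install xfce4-screensaver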

on August 07, 2019 01:51 AM

August 06, 2019

Here’s a brief changelog of what we’ve been up to since our last general update.

Bugs

  • Add basic GitLab bug linking (#1603679)
  • Expect the upstream bug ID in the “number” field of GitHub issue objects, not the “id” field (#1824728)
  • Include metadata-only bug changes in Person:+commentedbugs

Build farm

  • Filter ASCII NUL characters out of build logtails (#1831500)
  • Encode non-bytes subprocess arguments on Python 2 to avoid crashing on non-ASCII file names under LC_CTYPE=C (#1832072)

Code

  • Don’t preload recipe data when deleting recipes associated with branches or repositories, and add some more job indexes (#1793266, #1828062)
  • Fix crash if checkRefPermissions finds that the repository is nonexistent
  • Add a rescan button to branch merge proposals for failed branch or repository scans
  • Land parts of the work required for Git HTTPS push tokens, though this is not yet complete (#1824399)
  • Refactor code import authorisation to be clearer and safer
  • Set line-height on <pre> elements in Bazaar file views
  • Work in progress to redeploy Launchpad’s Git backend on more scalable infrastructure

Infrastructure

  • Upgrade to PostgreSQL 10
  • Fix make-lp-user, broken by the fix for #1576142
  • Use our own GPG key retrieval implementation when verifying signatures rather than relying on auto-key-retrieve
  • Give urlfetch a default timeout, fixing a regression in process-mail (#1820552)
  • Make test suite pass on Ubuntu 18.04
  • Retry webhook deliveries that respond with 4xx for an hour rather than a day
  • Merge up to a current version of Storm
  • Upgrade to Celery 4.1.1
  • Move development sites from .dev to .test
  • Upgrade to Twisted 19.2.1
  • Upgrade to requests 2.22.0
  • Use defusedxml to parse untrusted XML
  • Improve caching of several delegated authorization checks (#1834625)

Registry

  • Fix redaction in pillar listings of projects for which the user only has LimitedView (#1650430)
  • Tighten up the permitted pattern for newly-chosen usernames

Snappy

  • Landed parts of the work required to support private snap builds, though this is not yet complete (#1639975)
  • Generalise snap channel handling slightly, allowing channel selection for core16 and core18
  • Add build-aux/snap/snapcraft.yaml to the list of possible snapcraft.yaml paths (#1805219)
  • Add build-request-id and build-request-timestamp to SNAPCRAFT_IMAGE_INFO
  • Allow selecting source snap channels when requesting manual snap builds (#1791265)
  • Push build start timestamps to the store, and use release intents so that builds are more reliably released to channels in the proper sequence (#1684529)
  • Try to manually resolve symlinks in remote Git repositories when fetching snapcraft.yaml (#1797366)
  • Consistently commit transactions in SnapStoreUploadJob (#1833424)
  • Use build request jobs for all snap build requests in the web UI
  • Honour “base: bare” and “build-base” when requesting snap builds (#1819196)

Soyuz (package management)

  • Add command-not-found metadata in the archive to the Release file
  • Check the .deb format using dpkg-deb rather than ar
  • Add s390x Secure Initial Program Load signing support (#1829749)
  • Add u-boot Flat Image Tree signing support (#1831942)
  • Use timeout(1) to limit debdiff rather than using alarm(3) ourselves
  • Allow configuring the binary file retention period of a LiveFS (#1832477)
  • Import source packages from Debian bullseye
on August 06, 2019 06:16 PM