July 17, 2018

Useful Metasploit Reminders

David Tomaschik

This isn’t an intro to Metasploit; it’s more a reminder to myself of things that are useful to know but maybe not used all the time (or are relatively new).

Meterpreter

on July 17, 2018 03:52 AM

Synonyms in x86 Assembly

David Tomaschik

I recently had an opportunity to handcraft shellcode with unusual restrictions, and appreciated that there are a number of ways to accomplish any goal in an ISA as flexible as x86. (Most of these techniques will apply to x86-64 as well, but the work I was doing happened to be 32-bit, so that’s what I will use as an example.) Obviously, this won’t be comprehensive; it’s just a reminder of different ways you can do something. If you ever think it’s impossible, remember to try harder.

Most of my examples will use eax, unless a special circumstance applies (e.g., dealing with esp, eip, etc.). They’re not all going to be strictly synonyms, because many of them will have varying side effects (flags, etc.). I will try to note any that have potentially undesirable side effects like clobbering other registers, leaving the stack modified, etc.

Zero Out a Register
mov eax, 0

Straight out zero: has the disadvantage of sticking a bunch of NULL bytes in your output, which is a problem for many use cases.

xor eax, eax

Because a^a=0, xoring a register with itself clears it. Nice and short (2 bytes) and no NULL bytes.

shl eax, 16
shl eax, 16

Shifts zero-fill, so shifting all 32 bits out clears the register. Note that x86 masks shift counts to 5 bits for 32-bit operands, so a single shl eax, 32 is actually a no-op; two 16-bit shifts do the job.

mov eax, -1
not eax

This sets eax to 0xFFFFFFFF, then inverts it.

mov eax, -1
inc eax

Similar to the above, but it increments eax rather than inverting it: 0xFFFFFFFF + 1 wraps around to 0.
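
One more NULL-free idiom in the same spirit: subtracting a register from itself also yields zero, because a-a=0.

sub eax, eax

Like xor eax, eax, this is two bytes and clobbers only the flags.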

on July 17, 2018 03:52 AM

July 16, 2018

Welcome to the Ubuntu Weekly Newsletter, Issue 536 for the week of July 8 – 14, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on July 16, 2018 08:58 PM
The low-latency kernel offering with Ubuntu provides a kernel tuned for low-latency environments using low-latency kernel configuration options.  The x86 kernels by default run with the intel_pstate CPU frequency scaling driver set to the powersave scaling governor, biased towards power efficiency.

While power efficiency is fine for most use-cases, it can introduce latencies: the CPU may be running at a low frequency to save power, and waking from a deep C-state when idle to service an event adds exit latency as well.

In a somewhat contrived experiment, I rigged up an i7-3770 to collect latency timings of clock_nanosleep() wake-ups with timer event coalescing disabled (timer_slack set to zero) over 60 seconds, across a range of CPU frequency driver and governor settings on a 4.15 low-latency kernel.  This can be achieved using stress-ng, for example:

 sudo stress-ng --cyclic 1 --cyclic-dist 100 --cyclic-sleep=10000 --cpu 1 -l 0 -v \
--cyclic-policy rr --cyclic-method clock_ns --cpu 0 -t 60 --timer-slack 0

The above runs a cyclic measurement collecting latency counts in 100ns buckets, with a clock_nanosleep() wakeup interval of 10,000 nanoseconds, a zero-load CPU stressor, and timer slack set to 0 nanoseconds.  This dumps latency distribution stats that can be plotted to see where the modal latency points occur and what the latency characteristics look like under each governor.

I also used powerstat to measure the power consumed by the CPU package over a 60 second interval.  Measurements for the intel_pstate driver [performance, powersave] and the ACPI cpufreq driver (booted with intel_pstate=disable) [performance, powersave, conservative and ondemand] were taken for 1,000,000 down to 10,000 nanosecond timer delays.
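
For reference, the power measurement side can be reproduced with something along these lines (a sketch; the -R option samples the processor's RAPL power interface, here once per second for 60 samples):

 sudo powerstat -R 1 60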

1,000,000 nanosecond timer delays (1 millisecond)

Strangely, intel_pstate in powersave mode is using the most power (not what I expected).

The ACPI cpufreq driver in performance mode has the best latency distribution, followed by the intel_pstate driver, also in performance mode.

100,000 nanosecond timer delays (100 microseconds)

Note that intel_pstate in performance mode consumes the most power...
...and also has the most responsive low-latency distribution.

10,000 nanosecond timer delays (10 microseconds)

In this scenario, the ACPI cpufreq driver in performance mode was consuming the most power and had the best latency distribution.

It is clear that the best latency responses occur when the CPU is running with the performance governor, and that this consumes a little more power than the other governor modes.  However, it is not clear which driver (intel_pstate or ACPI cpufreq) is best in specific use-cases.

The conclusion is rather obvious, but needs to be stated: for the best low-latency response, set the CPU governor to performance mode at the cost of higher power consumption.  Depending on the use-case, the extra power cost is probably worth the improved latency response.
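
For reference, the scaling driver and governor can be inspected and switched at run time through sysfs (standard cpufreq paths; the change takes effect immediately but does not persist across reboots):

 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
 cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
 echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor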

As mentioned earlier, this is a somewhat contrived experiment; only one CPU was being exercised, with a predictable timer wakeup.  A more interesting test would involve data handling, such as incoming packet handling over ethernet at different rates; I will probably experiment with that if and when I get more time.  Since this was a synthetic test using stress-ng, it does not represent real-world low-latency scenarios; however, it may be worth exploring CPU governor settings to tune a low-latency configuration rather than relying on the defaults.
on July 16, 2018 12:22 PM
Here is the seventh issue of This Week in Lubuntu Development. You can read the last issue here. Changes General This week was focused on polishing the installer experience and the desktop in general. Here are the changes made, with links to the full details. Lubuntu Artwork Rename sddm-theme-lubuntu-chooser to sddm-theme-lubuntu. Since Ubuntu's sddm is […]
on July 16, 2018 05:40 AM

GUADEC 2018 Almería

Robert Ancell

I recently attended the GNOME Users and Developers European Conference (GUADEC) in Almería, Spain. This was my fifth GUADEC and, as always, I was able to attend thanks to my employer Canonical paying for me to be there. This year we had seven members of the Ubuntu desktop team present. Almería was a beautiful location for the conference and a good trade for the winter weather I left on the opposite side of the world in New Zealand.


This was the second GUADEC since the Ubuntu desktop switched back to shipping GNOME, and it’s been great to be back. I was really impressed by how positive and co-operative everyone was; the community seems to be in really healthy shape. The icing on the cake is the anonymous million dollar donation the foundation has received, which they announced will be used to hire some staff.


The first talk of the week was from my teammates Ken VanDine, Didier Roche and Marco Treviño, who talked about how we’d done the transition from Unity to GNOME in Ubuntu desktop. I was successful in getting an open talk slot and did a short talk about the state of Snap integration in GNOME. I talked about the work I’d done on snapd-glib and the Snap plugin in GNOME Software. I also touched on some of the work James Henstridge has been doing to make Snaps work with portals. It was quite fun to see James be a bit of a celebrity after a long period of not being at a GUADEC - he is the JH in JHBuild!


After the first three days of talks, the remaining three days are set aside for Birds of a Feather sessions, where we get together in groups around a particular topic to discuss and hack on it. I organised a session on settings which turned out to be surprisingly popular! It was great to see everyone that I work with online in person, and it allowed us to better understand each other. In particular I caught up with Georges Stavracas, who has been very patient in reviewing the many patches I have been working on in GNOME Control Center.


I hope to see everyone again next year!
on July 16, 2018 02:41 AM

July 12, 2018

S11E18 – Eighteen Summers - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we’ve been using GameMaker 1.4 on Windows to patch Spelunky for Linux. We interview some of the Ubuntu Communitheme team, round up the community news and go over your feedback.

It’s Season 11 Episode 18 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 12, 2018 02:00 PM

July 10, 2018

Recently I gave a keynote at DevXCon in San Francisco. So, what better place to deliver a presentation that is entirely non-technical and non-specific to developer relations?

“You are bonkers, Bacon”, I hear you say.

Well, hold your horses. Effective leadership, how we identify quality leaders, and how we foster great leadership at scale is critical to all communities. As such, I took a crack at this topic in my keynote. Fortunately, it seemed well-received by the folks there.

Now it is your turn to decide. Here is the video:

Can’t see the video? Click here.

I would love to hear your ideas about what great leadership consists of and how you have approached this. Share your thoughts in the comments!

The post Video: Building Community Leaders: A Guide appeared first on Jono Bacon.

on July 10, 2018 11:35 PM

The Kubuntu Community is pleased to announce that KDE Plasma 5.12.6, the latest bugfix release for Plasma 5.12, has been made available for Kubuntu 18.04 LTS (the Bionic Beaver) users via normal updates.

The full changelog for 5.12.6 contains scores of fixes, including fixes and polish for Discover and the desktop.

These fixes should be immediately available through normal updates.

The Kubuntu team wishes users a happy experience with the excellent 5.12 LTS desktop, and thanks the KDE/Plasma team for such a wonderful desktop to package.

on July 10, 2018 02:47 PM

Switching Things Up

Stephen Michael Kellat

A further update to this has been posted at https://identi.ca/alpacaherder/note/DoCu7vECRvKuu9xLxd3T-Q

What Happened?

Things are getting a bit troublesome at work. I have been getting sick a bit more frequently. We've been having more people showing up with "pulse-ox" meters to see if they're even getting enough oxygen to breathe. The air isn't that great in my section. With the increasing pace of retirements being announced, I suppose I need to recognize the handwriting on the wall.

Besides, have you seen the federal cabinet ministers getting harassed while simply trying to eat dinner? Secretaries of Homeland Security and Transportation, the now-former EPA Administrator, and the Assistant to the President/Press Secretary have all run into a spot of bother. This is not behavior to encourage.

Now What?

A project proposal being passed around at church is for me to stand up an adult education/pre-college unit. Our local school district doesn't always have the best graduation outcomes. In this instance I would be standing up an experimental operation during the fall to offer some remedial education for local students as well as "pre-college preparatory experiences".

In short, doing tutoring as well as some lecturing on how to ensure 2+2=4, plus working on reading. I do have a degree in library science that is barely used, so working on the literature aspect would be fairly simple. Chalkboards and chalk are already there, as are tables and chairs. Facilities would be put to use during downtime between services. Dragging in resources from Saylor Academy would also be useful to ensure students could actually pick up credits from an external provider through things like The Alternative Credit Project. For those operating from a Christian viewpoint, Acts 8:26-40 is the scriptural example we're working with, one that also potentially includes students working with MOOCs. History doesn't repeat but it sure can look awfully similar.

The big problem is that this would be incompatible with at least 1 of the 3 sets of ethics rules I have to comply with as a federal civil servant. To do this, I would have to leave the federal civil service. Then again, for the sake of my own personal safety this is probably the best time ever to do so.

To maintain subsistence on half pay for the remainder of calendar year 2018, I would have to raise USD$10,000. Thankfully this is not something that would be dealt with via Patreon or the like, which would otherwise require me to wade through the IRS Sharing Economy Tax Center for guidance. The church is a 501(c)(3) entity and would be the fiscal agent.

Since the congregation is older than ARPANet and still has a halting embrace of technology, there's very little computer technology in use there. There is no website. There is no e-mail address. Checks/cheques are still good and still useful for making donations that are tax-deductible in the United States. There is a full document known as Publication 526 from the IRS that details how that works if somebody wanted to make a donation to help further this work.

I thought logistically about PayPal and all sorts of other platforms. They do great for what they're intended to do although they end up putting money in my lap in a way that isn't tax-deductible. It doesn't go directly to the church first and winds up running things through the IRS Sharing Economy Tax Center again. They actually overcomplicate things as well as introduce arbitrage fees and other unnecessary weirdnesses. From the Publication 526 perspective, all that needs to happen is that a church get a check or other instrument like a money order to deposit so it can then turn around and send an acknowledgement letter which can then be used as a potential tax write-off.

Are There Timeframes For This?

Yes, of course. The goal is to be able to walk away from my current job no later than July 20th. I would then be spending 20 hours per week working on this matter for the church, in addition to the limited missionary-preacher work I do twice per month. Getting at least USD$3,000 raised by then would ensure at least two months' funds were on hand to keep me operating at subsistence. That is to say, keeping a roof over my head and food on the table.

The goal would be to have students start trickling in before Labor Day. Locally, the K-12 students will already be back in session around August 20th, so we can start promoting available services then. We'd take it from there and expect to wrap up with any participating students no later than December 14th.

To discuss how to donate and to coordinate any funding participation, feel free to send an e-mail to ashtabulaeducationproject@gmail.com.

With luck I may be able to do some good in my local community. I've been fueling the engine of the nation-state long enough. Change seems to be in the winds.

Creative Commons Licence
Switching Things Up by Stephen Michael Kellat is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
Based on a work at https://identi.ca/alpacaherder/note/8h47hl0DTR2tfkJ1oBqdXg.

on July 10, 2018 02:43 AM

July 09, 2018

Running Linux terminals on Windows takes just a few clicks since we can install Ubuntu, Debian and other distributions right from the Store as apps, without the old days’ hassle of dual-booting or starting virtual machines. It just works, and it works even in enterprise environments where installation policies are tightly controlled.

If you check the Linux distribution apps based on the Windows Subsystem for Linux technology, you may notice that there is not only one Ubuntu app but already three: Ubuntu, Ubuntu 16.04 and Ubuntu 18.04. This is no accident. It matches the traditional Ubuntu release offering, where the LTS releases are supported for long periods and there is always a recommended LTS release for production:

  • Ubuntu 16.04 (code name: Xenial) was the first release really rocking on WSL and it will be updated in the Store until 16.04’s EOL, April, 2021.
  • Ubuntu 18.04 (code name: Bionic) is the current LTS release (also rocking :-)) and the first one supporting even ARM64 systems on Windows. It will be updated in the Store until 18.04’s EOL, April, 2023.
  • Ubuntu (without the release version) always follows the recommended release, switching over to the next one when it gets the first point release. Right now it installs Ubuntu 16.04 and will switch to 18.04.1, on 26th July, 2018.

The apps in the Store are like installation kits. Each app creates a separate root file system in which Ubuntu terminals are opened but app updates don’t change the root file system afterwards. Installing a different app in parallel creates a different root file system allowing you to have both Ubuntu LTS releases installed and running in case you need it for keeping compatibility with other external systems. You can also upgrade your Ubuntu 16.04 to 18.04 by running ‘do-release-upgrade’ and have three different systems running in parallel, separating production and sandboxes for experiments.
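
For example, upgrading the Ubuntu 16.04 app’s root file system to 18.04 in place looks roughly like this (a sketch; do-release-upgrade is interactive, and it only offers the new LTS once the 18.04.1 point release is out):

lsb_release -rd
sudo do-release-upgrade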

What amazes me in the WSL technology is not only that Linux programs running directly on Windows perform surprisingly well (benchmarks), but the coverage of programs you can run unmodified without any issues and without the large memory overhead of virtual machines.

I hope you will enjoy the power of Linux terminals on Windows at least as much as we enjoyed building the apps at Canonical, working closely with Microsoft to make it awesome!

on July 09, 2018 07:50 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 535 for the week of July 1 – 7, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on July 09, 2018 07:37 PM

July 07, 2018

This time, Francisco Javier Teruelo and Marcos Costales interview Alberto Larraz, a teacher and organizer of the event that took place a month ago in Barcelona: Obrim el Codi.

Second extra episode of the second season.

The podcast is available to listen to at:
on July 07, 2018 08:51 AM

July 06, 2018

Here is the sixth issue of This Week in Lubuntu Development. You can read the last issue here. Changes Lubuntu 17.10 reaches End of Life on July 19, 2018 Following the announcement from Adam Conrad, we are announcing that Lubuntu 17.10 reaches End of Life on July 19, 2018. After July 19, Lubuntu 17.10 will […]
on July 06, 2018 01:28 AM

Launchpad news, June 2018

Launchpad News

Here’s a brief changelog for this month.

Bugs

  • Handle Bugzilla.time() changes in Bugzilla 5.1.1 (#1774838)
  • Cope with the comment author field being renamed to creator in recent Bugzilla versions (#1774838)

Build farm

  • Set the hostname and FQDN of LXD containers to match the host system, though with an IP address pointing to the container (#1747015)
  • If the extra build arguments include fast_cleanup: True, then skip the final cleanup steps of the build; this can be used when building in a VM that is guaranteed to be torn down after the build
  • Allow checking out a git tag rather than a branch (#1687078, forum post)
  • Add a local unauthenticated proxy on port 8222, which proxies through to the remote authenticated proxy; this should allow running a wider range of network clients, since some of them apparently don’t support authenticated proxies very well (#1690834, #1753340, forum post)
  • Run tar with correct working directory when building source tarballs for snaps

Code

  • Port the loggerhead (Bazaar code browser) integration to gunicorn, allowing it to be used as an internal API as well
  • Optimise BuildableDistroSeries.findSeries (#1778732)
  • Proxy loggerhead branch diffs through the webapp, allowing AJAX MP revision diffs to work for private branches (#904070)

Infrastructure

  • Convert most code to use explicit proxy configuration settings rather than picking up a proxy from the environment, making the effective production settings easier to understand

Registry

  • Fix crash while adding an ssh key with unknown type (#1777507)

Miscellaneous

  • Improve documentation of what deactivating an account does (#993153)
on July 06, 2018 12:18 AM

July 05, 2018

S11E17 – At Seventeen - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

This week we make a snap of Xonotic, interview Daniel Foré from elementary OS about the Beta release of “Juno” and round up the news.

It’s Season 11 Episode 17 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on July 05, 2018 02:00 PM

July 04, 2018

Hi Folks,

I’m writing these lines while on the flight to Almería, where this year’s GNOME Users And Developers European Conference will take place, typing with my Thinkpad Bluetooth keyboard on my mobile phone (I have to admit that using a physical keyboard with Android is getting awesome, allowing proper WM actions :)), as the battery of my T460p already ran out after the flight from Florence to Madrid, during which I fixed some more shell JS errors.

This will be my first GUADEC ever, and as a fresh Foundation member, I’m quite excited to finally join it.

I’m not coming alone, of course, as this year the ubuntu-desktop team will be quite crowded: Ken VanDine, Sébastien Bacher, Didier Roche, Iain Lane, James Henstridge, Robert Ancell and I will be part of the conference, giving input and helping to make GNOME even better.

So, looking forward to meeting you all very soon (almost landed - or better - trying to, in the meantime)!

As always, I have to thank Canonical for allowing me and the desktop crew to be part of this great community reunion, and also for being one of the silver sponsors of the event.

These are the events that really matter in order to get things done.

on July 04, 2018 05:28 PM

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

I merged a branch adding appstream related data (thanks to Matthias Klumpp). I merged multiple small contributions from a new contributor: Lev Lazinskiy submitted a fix to have the correct version string in the documentation and ensured that we could not use the login page if we are already identified (see MR 36).

Arthur Del Esposte continued his summer of code project and submitted multiple merge requests that I reviewed multiple times before they were ready to be merged. He implemented a team search feature, created a small framework to display an overview of all packages of a team.

On a more administrative level, I had to deal with many subscriptions that became immediately invalid when alioth.debian.org shut down. So I tried to replace all email subscriptions using *@users.alioth.debian.org with alternate emails linked to the same account. When no fallback was possible, I simply deleted the subscription.

pkg-security work

I sponsored cewl 5.4.3-1 (new upstream release), wfuzz_2.2.11-1.dsc (new upstream release), masscan 1.0.5+ds1-1 (taken over by the team, new upstream release) and wafw00f 0.9.5-1 (new upstream release). I sponsored wifite2, made the unit tests run during the build, and added some autopkgtests. I submitted a pull request to skip tests when some tools are unavailable.

I filed #901595 on reaver to get a fixed watch file.

Misc Debian work

I reviewed multiple merge requests on live-build (about its handling of archive keys and the associated documentation). I uploaded a new version of live-boot (20180603) with the pending changes.

I sponsored pylint-django 0.11-1 for Joseph Herlant, xlwt 1.3.0-2 (bug fix) and python-num2words_0.5.6-1~bpo9+1 (backport requested by a user).

I uploaded a new version of ftplib fixing a release critical bug (#901224: ftplib FTCBFS: uses the build architecture compiler).

I submitted two patches to git (fixing French l10n in git bisect and marking two strings for translation).

I reviewed multiple merge requests on debootstrap: make --unpack-tarball no longer downloads anything, --components not carried over with --foreign/--second-stage and enabling --merged-usr by default.

Thanks

See you next month for a new summary of my activities.


on July 04, 2018 01:35 PM

This week I began a new chapter in my career by joining the Linux Foundation as a developer advocate and community manager for the EdgeX Foundry, an open platform for IoT edge computing.

I started using open source before I even knew what it was. Perl was my first programming language, and so installing libraries from CPAN became a routine task (as well as a routine challenge on SunOS). I posted my first open source code on SourceForge soon after, still thinking of it as a way for hobbyists to share their hobby, but not as something serious developers or companies would do. I still remember the feeling I had when Netscape announced that the next version of their browser, Netscape Navigator 5, would be released as open source. As a web developer in the late 90's, Netscape was the killer app, the king of the hill, the virtual monopoly that was leaps and bounds ahead of IE4. For them to release their source code in a way that let other people see it, copy it, even improve on it, was revolutionary. And it changed forever the way I thought about open source.


Of course, anybody else who lived through those turbulent times knows how that Netscape 5 story actually turned out, not because it was open source but because of business decisions and buyouts (thanks AOL!) that kept pulling the development one way and then the other. But my own journey into open source was much more straightforward. I dove in completely, releasing everything I could under an open license, using as much openly licensed software as possible. I bought (yes, bought) my first copy of Linux from Best Buy in 1999, and switched my desktop permanently in 2006 when Canonical mailed me a free CD of Dapper Drake. Five years later I would join Canonical myself, and eventually land on the Community Team, where I was building new communities and growing existing ones around Ubuntu and all its upstreams and downstreams. Last year I was doing the same at Endless Computers, bringing the benefits of open technology to users in some of the most remote and disconnected parts of the world.

Dinner in Yogyakarta

So having the opportunity to join the Linux Foundation is a dream come true for me. I've seen first hand how collaboration on common technology leads to more and better innovation across the board, and that is the core idea behind the Linux Foundation. I'm excited to be joining the EdgeX Foundry, which will play a crucial role in developing the way the rapidly expanding number of IoT devices connect and communicate with the already massive cloud ecosystem. I will be working to improve the way new developers get started using and contributing to EdgeX Foundry, as well as teaching new organizations about the benefits of working together to solve this difficult but shared problem. I look forward to bringing my past experience in desktop, mobile and cloud developer communities into the IoT space, and working with developers across the world to build a vibrant and welcoming community at the network edge.

on July 04, 2018 12:56 PM

July 03, 2018

Dear Siôn,

Thank you for your comments on Twitter welcoming my feedback on the EU’s proposed copyright reform. I’d like to discuss in particular Article 13, “Use of protected content by information society service providers storing and giving access to large amounts of works and other subject-matter uploaded …

on July 03, 2018 08:58 AM

July 02, 2018

Another LTS is here and the upgrade prompts are coming to a desktop near you in just a couple of weeks. But Ubuntu development never stops, and creative persons come together to collaborate concurrently with cautious users scrutinizing new releases.

Every Ubuntu release contemplates a question. A carefully chosen codename piques the curiosity of keen, eager fans. Ubuntu 18.10 isn’t excluded from this cunning course of continuing curios.

Ubuntu 18.10 is codenamed Cosmic Cuttlefish. Christened after a cute mollusc of the class Cephalopoda, these clever creatures have made the cut since the early Cretaceous. Careful consideration will expose an extraordinary quirk: chromatic changes facilitate a unique mechanism for communication. They change the color of their skin to send communiqués. This codename should encourage wacky and eccentric, but unique and colorful images we can ship in October!

For the Ubuntu Free Culture Showcase for 18.10, we’re requesting accomplished and consummate photographers and artists to submit their Creative Commons-licensed photos and artwork to the Free Culture Showcase, a contest that determines which wallpapers we’ll include with Ubuntu 18.10 as extra content for choosy consumers of desktop Ubuntu.

The contest will conclude on August 13th. Check the subsequent conditions for acceptance, and consider sharing your creation with the cosmic community across the globe.

All content must be released under a Creative Commons Attribution-Sharealike or Creative Commons Attribution license. (The Creative Commons Zero waiver is okay, too!) Each entrant must only submit content they have created themselves, and all submissions must adhere to the Ubuntu Code of Conduct.

The winning collection will be included in the Ubuntu 18.10 release on October 18th, 2018!

There’s a cornucopia of other considerations, so please consult the Ubuntu Free Culture Showcase wiki page for details. Good luck!

on July 02, 2018 07:00 AM

July 01, 2018

Our work over June has brought a few new updates for July including lots of Xfce 4.13 updates, bug fixes, and a few migrations which round out a nice month of development.

Xfce 4.13, A Preview of What’s To Come

Xfce has a fairly standard versioning scheme. Even version numbers (4.10, 4.12, 4.14) represent stable, supported releases. Odd version numbers (4.11, 4.13) represent development versions. Xfce 4.14 (the GTK 3 release) has been in development for a few years now, and several components have had 4.13 releases as their ports are completed and bugs are fixed.

At this point, with the Xubuntu LTS release behind us and Xfce 4.14 likely releasing sometime in the next year, we’re ready to start rolling out more of these development releases for our users. There are not a lot of new features, but with the upgraded toolkit, there’s better support for newer technologies, theming capabilities, and … an increased likelihood of bugs (we’ll fix them, we promise).

As of this morning (July 1), we have a lot more Xfce 4.13 available in Xubuntu 18.10 “Cosmic Cuttlefish”. This is thanks to the hard work of Unit 193 — our Debian liaison, package maintainer, council member, and all-around awesome person. The following components are now available.

These components are using GTK 3… can you tell?
  • Thunar 1.8 (and plugins)
  • Xfce Desktop 4.13
  • Xfce Panel 4.13 (and plugins)
  • Xfce Screenshooter 1.9
  • Xfce Settings 4.13
  • Updated libraries (libxfce4panel, libxfce4util, xfconf)

New Releases

June was a nice month for Xfce development. 11 projects were updated, and most of these updates have already landed in Xubuntu. The other updates should land in the coming days.

Upcoming Releases

Catfish 1.4.6

We’re in the middle of migrating Catfish to be an official part of the Xfce family. After migrating bugs from Launchpad to Xfce it was only rational to fix a few. So with those bug fixes, we’re right around the corner from another release. I’ll have more information on the Xfce transition with the 1.4.6 release announcement later this month.

Xubuntu 18.04.1

The first point release for 18.04 is expected this month, on July 26th. After 18.04.1 is released, 16.04 LTS users will begin to receive upgrade notifications. Before we get there, we have a few fixes and updates we want to deliver.

What to Expect in July?

Xubuntu 18.04.1 Testing

With each point release, we need some testing to make sure everything’s still working as expected. With 18.04.1, we’ll likely be looking for more release upgrade tests. Look forward to some calls for testing this month to test the upcoming milestone, or join us on #xubuntu-devel to learn how you can start testing now.

Xfce Migrations

We’ve already started moving Catfish to Xfce, but there is still more work to be done. We’ll also be moving Xfce Panel Switch this month to join the rest of its friends and family.

More Blueprint Activity

Keep an eye on the Xubuntu development tracker to see what the team is up to this month.

on July 01, 2018 11:53 PM

June 28, 2018

One thing I see clients reach for the Pepto about over and over again is how to manage work effectively. They often struggle to (a) gather and communicate requirements (and not a Christmas list), (b) understand these needs and set expectations, and (c) manage how this work is actually delivered. When this isn’t smooth, it is a royal pain in the behind.

As part of my Open Org series, here is my new video, which covers precisely this:

Can’t see it? See it here.

Don’t forget to see my first part of the series too, which covered how to communicate effectively across different parts of an organization. This is really important, particularly if you are working with executive teams. Check it out:

Can’t see it? See it here.

Feedback is always welcome!

The post Video: How to Manage Requirements, Expectations, and Project Delivery (Without Sucking) appeared first on Jono Bacon.

on June 28, 2018 03:30 PM

You can write instructions in a language like English, just like I do now with this blog post. If you want to write instructions for a computer, you are likely to use some sort of intermediate language that will then be converted into raw instructions suitable for the computer. Those intermediate languages are the programming languages. The software that you install on your computer is made of such raw instructions.

You can learn to make sense of these raw computer instructions; however, if there are too many of them you get overwhelmed and are unable to make sense of what is going on. There are cases where you want to make sense of them, and there are tools that help you do so. radare2 is one such tool.

radare2 is a reversing framework that helps you make sense of raw computer instructions. You can also modify these instructions and make the software do things that it was not intended to perform.

In the following, we are going to see

  1. How to install radare2 as a snap package.
  2. How to prepare a small HelloWorld-style example
  3. How to solve the HelloWorld example using radare2

Installing radare2 as a snap package

Run the following command

$ snap install radare2-simosx
radare2-simosx 2.6.0 from 'simosx' installed

This is an unofficial snap package of radare2. Quite soon there will be an official package and you can use that instead. This post will be updated when the official radare2 snap package is released.

You can run the program as radare2-simosx.radare2. When the official snap package is released and installed, you will be able to run it as just radare2.
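
If the long name gets tedious, snapd also lets you define a manual alias yourself (an optional convenience; not required for anything that follows):

$ sudo snap alias radare2-simosx.radare2 radare2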

Preparing a HelloWorld for radare2

Here is the source code (in the C programming language) of the HelloWorld for radare2. That’s the intermediate language mentioned in the introduction. You can make sense of it with a bit of effort.

$ cat helloworld-radare2.c 
#include <stdio.h>

int main()                            // @main function, execution always starts here.
{
  int i = 0;                          // declare an integer variable, set it to value of 0.

  if (i != 0)                         // If that variable is not 0, then
    puts("Hello, world!\n");          //   print to the screen "Hello, World!"
  else                                // Else
    puts("Go away!\n");               //   print to the screen "Go away!"

  return 0;                           // Finish up.
}

We can see that this program will never print the Hello message, because the variable i always has the value 0. There is nothing in the code that can change the value of i. Let’s verify by compiling the source code and creating the computer instructions file (the helloworld-radare2 file).

$ gcc helloworld-radare2.c -o helloworld-radare2

$ ./helloworld-radare2 
Go away!

The file helloworld-radare2 has the raw computer instructions. In the next section we are going to use only helloworld-radare2 to change the functionality of the program with radare2.

Understanding HelloWorld with radare2

We are going to run radare2 with the name of the file helloworld-radare2 as argument.

$ radare2-simosx.radare2 helloworld-radare2
[0x00400430]>

We are in radare2! If you want to quit at any time, you can type q and press Enter.

The next step is to get radare2 to auto-analyse the raw computer instructions for us. That would be the a command, and in the current version of radare2 you can get the full available analysis automation by typing up to three a’s (aaa). Please do not try with seven a’s.

[0x00400430]> aaa
[x] Analyze all flags starting with sym. and entry0 (aa)
[x] Analyze function calls (aac)
[x] Analyze len bytes of instructions for references (aar)
[x] Use -AA or aaaa to perform additional experimental analysis.
[x] Constructing a function name for fcn.* and sym.func.* functions (aan)
[0x00400430]>

Now that radare2 has made sense of the raw computer instructions, we can ask it to show the instructions of the main function, the standard entry point of a C program. Execution of the program’s own code always starts at main, and we use the mnemonic @main.

We use the command pdf @main. pdf stands for Print Disassembly of Function, and we then specify the function (@main).

You can see three columns of information. The middle column contains the raw computer instructions, shown as pairs of hexadecimal digits. The right column is an interpretation of those raw instructions (called assembly language). The first column is the memory address of each instruction, in hexadecimal. Even if you do not know much about what is shown, you can match the control flow in this output to the initial source code in the C programming language. Note the arrows in the screenshot which show the control flow (the if/else statement in the C source code).

Let’s use a different visual view of the structure of the program. There is a VV command in radare2. Type VV @main and press Enter. Here is how it looks. To Quit back to the radare2 command line, press q twice.

The top box has instructions that do a CoMParison (cmp, cyan color), comparing with 0. Then, it Jumps if Equal (je, green color) to one of the two boxes below. The box on the left is for Hello, world!. The box on the right is for Go away!. The green line towards the box on the right is followed when the variable is equal to 0. Wait: in the source code the condition is whether i is NOT equal to 0. Why was it turned into a test for equality? The compiler that generates the instructions has the freedom to optimize them as required. Obviously, by inverting the comparison, the compiler also swapped the two branches. In the end the program works exactly as intended.

Both boxes have a MOVe instruction that moves the address of the message to print into the edi register, and then they call an IMPorted function called puts to PUT the String of characters (taken from edi) on the screen. Both boxes then lead to the bottom box, which is the termination of the program. Note that the Hello, world box ends with a JuMP (jmp) instruction, which jumps to the bottom box. The reason is that, sequentially, the Hello, world box comes first and then the Go away box. Without the jmp, execution would have fallen through from the Hello, world box into the Go away box, and both messages would have been printed (which would have been wrong).

The bottom box does a LEAVE, and then a RETurn back to the system.

We understand more or less how the raw instructions are supposed to work. In the next section we modify those raw instructions in order to make the application do something other than what was intended.

Modifying (patching) the HelloWorld with radare2

Let’s look again at the disassembly of the main function. radare2 allows us to change anything, and the easiest way to manipulate the program is to overwrite some instructions with something else. One such instruction is No OPeration (nop), which does nothing. We could replace some instructions with nops and in that way affect the execution. Which instruction should we nop out?

We can change the je 0x400547 instruction into nops. If we do that, then the conditional jump will not happen and execution will fall through into the Hello, world box. Then, the jmp instruction will jump to the end of the program, and the program will terminate naturally.

The raw bytes of je 0x400547 are 74 0c (middle column), and the address is 0x00400539 (first column). We should seek to this address, and then overwrite those raw bytes with as many nops as required.

For radare2 to be able to modify a file, it needs to run with the -w switch (w for writing). Therefore, exit radare2 and start it again as follows:

$ radare2-simosx.radare2 -w helloworld-radare2
[0x00400430]> aaaa
[x] Analyze all flags starting with sym. and entry0 (aa)
[x] Analyze function calls (aac)
[x] Analyze len bytes of instructions for references (aar)
[x] Emulate code to find computed references (aae)
[x] Analyze consecutive function (aat)
[x] Constructing a function name for fcn.* and sym.func.* functions (aan)
[x] Type matching analysis for all functions (afta)
[0x00400430]> pdf @main
/ (fcn) main 50
|   main ();
|           ; var int local_4h @ rbp-0x4
|           ; DATA XREF from 0x0040044d (entry0)
|           0x00400526   55              push rbp
|           0x00400527   4889e5          mov rbp, rsp
|           0x0040052a   4883ec10        sub rsp, 0x10
|           0x0040052e   c745fc000000.   mov dword [local_4h], 0
|           0x00400535   837dfc00        cmp dword [local_4h], 0
|       ,=< 0x00400539   740c            je 0x400547
|       |   0x0040053b   bfe4054000      mov edi, str.Hello__world ; 0x4005e4 ; "Hello, world!\n"
|       |   0x00400540   e8bbfeffff      call sym.imp.puts
|      ,==< 0x00400545   eb0a            jmp 0x400551
|      ||   ; CODE XREF from 0x00400539 (main)
|      |`-> 0x00400547   bff3054000      mov edi, str.Go_away ; 0x4005f3 ; "Go away!\n"
|      |    0x0040054c   e8affeffff      call sym.imp.puts
|      |    ; CODE XREF from 0x00400545 (main)
|      `--> 0x00400551   b800000000      mov eax, 0
|           0x00400556   c9              leave
\           0x00400557   c3              ret
[0x00400430]>

In the screenshot below, we Seek to the address 0x00400539 (s 0x00400539). Any write we do will be at that location. A command to write raw instructions (also called operation codes, or opcodes) is wao. There are a few ways to use wao, and the one we choose is wao nop: modify the current opcode, replacing it with the nop opcode (raw instruction).
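
Since the screenshots are not reproduced here, the patching session looks roughly like this (addresses taken from the pdf @main listing above):

[0x00400430]> s 0x00400539
[0x00400539]> wao nop
[0x00400539]> pdf @main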

Here is another glorious screenshot showing the before and after. The freshly added opcodes appear in blue. The instruction je 0x400547 is two bytes long (740c), and has been replaced by two nops (90 90).

Do we need to save the file now? No need, our change has already been written to the file.

Press q and then Enter to exit radare2. Finally, run the program again.

$ ./helloworld-radare2 
Hello, world!

Success, Hello, world!

What next?

You can read the radare2 book online (also available as a PDF) and learn more about radare2.

There is also a GUI tool for radare2 called Cutter. I have tried to create a snap package for Cutter but have not succeeded yet. The reason is that currently snaps are based on the Ubuntu 16.04 Core image, and Cutter uses some Qt libraries that have not been packaged for 16.04. It is doable to package them, but ain’t nobody got time for that. So, we will compile Cutter from source in a LXD GUI container. Finally, we will do the HelloWorld task again using Cutter.

Installing Cutter in a LXD GUI container

Visit the following post in order to set up LXD on your Linux box and prepare it for GUI LXD containers.

How to easily run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

Then, follow these commands. First, launch a GUI container, copy helloworld-radare2 over to the container, and prepare the environment to compile Cutter. The command to copy the file over is lxc file push, followed by the filename to copy and the destination: cutter (the name of the container) plus /home/ubuntu/ (the directory).

$ lxc launch --profile default --profile gui ubuntu:18.04 cutter
Creating cutter
Starting cutter

$ lxc file push helloworld-radare2 cutter/home/ubuntu/

$ lxc exec cutter -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@cutter:~$ git clone --recurse-submodules https://github.com/radareorg/cutter
Cloning into 'cutter'...
remote: Counting objects: 7296, done.
remote: Compressing objects: 100% (64/64), done.
remote: Total 7296 (delta 47), reused 60 (delta 35), pack-reused 7197
Receiving objects: 100% (7296/7296), 7.85 MiB | 2.06 MiB/s, done.
Resolving deltas: 100% (5568/5568), done.
Submodule 'radare2' (https://github.com/radare/radare2) registered for path 'radare2'
Cloning into '/home/ubuntu/cutter/radare2'...
remote: Counting objects: 163731, done. 
remote: Compressing objects: 100% (54/54), done. 
remote: Total 163731 (delta 37), reused 44 (delta 26), pack-reused 163651 
Receiving objects: 100% (163731/163731), 84.29 MiB | 1.68 MiB/s, done.
Resolving deltas: 100% (122529/122529), done.
Submodule path 'radare2': checked out '7743169a9b7343d08e5153158a65176ef44d63f0'
ubuntu@cutter:~$

Next, install the necessary development packages.

ubuntu@cutter:~$ cd cutter

ubuntu@cutter:~/cutter$ sudo apt install build-essential pkg-config python3-dev python3-pip qt5-default qtwebengine5-dev libqt5svg5-dev
ubuntu@cutter:~/cutter$ pip3 install notebook jupyter_client

Finally, start compiling cutter.

ubuntu@cutter:~/cutter$ ./build.sh 
A (new?) version of radare2 will be installed. Do you agree? [Y/n] Y
...
Build complete. Binary available at: build/Cutter
ubuntu@cutter:~/cutter$

Cutter has been compiled, and the binary is found at build/Cutter.

Time to run Cutter!

ubuntu@cutter:~/cutter$ build/Cutter

We click on the Select button to select the file helloworld-radare2.

Next is the dialog for the load options. We enable the -w flag (Load in write mode), which means that any changes we make will be applied directly to the file. Note that if you later make a mistake when editing the file, then compile it again from scratch and load it here again.

We keep the Auto-Analysis level at aaa (three a’s). In my tests, if I increase the level to aaaa (four a’s, Include Experimental Auto-Analysis), then Cutter would hang. Most likely an issue with Cutter, because radare2 is OK with aaaa.

Click on OK to continue.

Here is the interface of Cutter. There are many things to see. For this post, we focus on two of the windows, the Functions and the Disassembly windows. The Disassembly window shows the disassembly of the file starting from the beginning, while we are only interested in the function main.

We need to select the main function from the Functions window in order to have the instructions of @main appear in the Disassembly window. @main is listed as sym.main in the Functions window.

Just double-click on sym.main in the Functions window.

The output in the Disassembly window is quite familiar. It is the same output as in radare2. In fact, Cutter is a GUI interface that makes use of radare2. When we compiled Cutter, it asked us to compile radare2 as well.

We place the mouse pointer over the je 0x400547 instruction and right-click. Then, click on Edit, then Nop Instruction. That replaces the je 0x400547 instruction with as many nops as needed.

Everything went well, and we can see the two nop instructions. We do not even need to save; the changes were already written to the file. We just need to click on File → Quit. It asks us if we want to save the project. We do not need to, so we select No and exit Cutter.

The final step is to run the file helloworld-radare2 again and see whether we Go Away or Hello, world.

ubuntu@cutter:~/cutter$ /home/ubuntu/helloworld-radare2 
Hello, world!

Hello, world! It worked.

Discussion

There are many amazing free and open-source projects. Radare2 and Cutter are just two of them. It is good to have at least some understanding of how they work, and to give them our support so that they become even better.

on June 28, 2018 02:52 PM

I have recently become aware of a fraudulent investment scam which falsely states that I have launched new software known as a QProfit System promoted by Jerry Douglas. I’ve seen some phishing sites like http://www.bbc-tech.news and http://pipeline-stats.club, and pop up ads on Facebook like this one:

I can’t comment on whether or not Jerry Douglas promotes a QProfit system and whether or not it’s fraud. But I can tell you categorically that there are many scams like this, and that this investment has absolutely nothing to do with me. I haven’t developed this software and I have no desire to defraud the South African government or anyone else. I’m doing what I can to get the fraudulent sites taken down. But please take heed and don’t fall for these scams.

on June 28, 2018 10:00 AM

June 26, 2018

In 1965, Bruce Tuckman proposed a “developmental sequence in small groups.” According to his influential theory, most successful groups go through four stages with rhyming names:

  1. Forming: Group members get to know each other and define their task.
  2. Storming: Through argument and disagreement, power dynamics emerge and are negotiated.
  3. Norming: After conflict, groups seek to avoid conflict and focus on cooperation and setting norms for acceptable behavior.
  4. Performing: There is both cooperation and productive dissent as the team performs the task at a high level.

Fortunately for organizational science, 1965 was hardly the last stage of development for Tuckman’s theory!

Twelve years later, Tuckman suggested that adjourning or mourning could form a potential fifth stage (Tuckman and Jensen 1977). Since then, other organizational researchers have suggested further stages, including transforming and reforming (White 2009), re-norming (Biggs), and outperforming (Rickards and Moger 2002).

What does the future hold for this line of research?

To help answer this question, we wrote a regular expression to identify candidate words and placed the full list at this page in the Community Data Science Collective wiki.
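
In that spirit, a simple one-liner gets you most of the way there (a sketch; the exact pattern we used is on the wiki page, and /usr/share/dict/words is the standard dictionary file on most Linux systems):

grep -E '(orming|orning|ourning)$' /usr/share/dict/words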

The good news is that despite the active stream of research producing new stages that end or rhyme with -orming, there are tons of great words left!

For example, stages in a group’s development might include:

  • Scorning: In this stage, group members begin mocking each other!
  • Misinforming: Groups that reach this stage start producing fake news.
  • Shoehorning: These groups try to make their products fit into ridiculous constraints.
  • Chloroforming: Groups become languid and fatigued?

One benefit of keeping our list in the wiki is that the organizational research community can use it to coordinate! If you are planning to use one of these terms—or if you know of a paper that has—feel free to edit the page in our wiki to “claim” it!


Also posted on the Community Data Science Collective blog. Although credit for this post goes primarily to Jeremy Foote and Benjamin Mako Hill, the other Community Data Science Collective members can’t really be called blameless in the matter either.

on June 26, 2018 02:21 AM

June 25, 2018

Note: This post is about LXD containers. These are system containers, which means they are similar to Docker containers but behave somewhat like virtual machines. When you start a LXD (lex-dee) container, you are starting a new system on your computer, with its own IP address and all. You can get LXD as a snap package. Go to https://docs.snapcraft.io/core/install to install snap support, then run sudo snap install lxd. See also Getting Started with LXD.

In that older post, we saw how to manually setup a LXD container in order to run GUI apps from there, and have them appear on our X11 desktop.

How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

In this post, we are going to see how to easily set up our LXD installation in order to be able to launch, on demand, containers in which we can run GUI apps. First, we will see the instructions and how to use them. Then, we explain these instructions in detail. And finally, we go through some common troubleshooting issues.

Prerequisites

The following have been tested with

  • the host runs either Ubuntu 18.04 or Ubuntu 16.04
  • the containers run either Ubuntu 18.04 or Ubuntu 16.04 or Ubuntu 14.04 or Ubuntu 12.04
  • LXD version 3.0 or newer (probably works fine with LXD 2.0.8+ as well)
  • works fine with either the LXD deb package or the LXD snap package
    To verify whether you run the deb package or the snap package, run the command which lxd

    $ which lxd
    /usr/bin/lxd              # NOTE: you have the deb package of LXD
         or
    /snap/bin/lxd             # NOTE: you have the snap package of LXD
    

These instructions should work with other distributions as well. Read further below on the detailed explanation of the instructions in order to adapt to your favorite distribution.

In the following, we see the two steps to set up our system so that we can then create GUI containers on demand. Step 1 is only required if you run the deb package of LXD. In subsequent sections, we see an explanation of the instructions so that you can easily port them to your favorite distribution. At the end, have a look at the Troubleshooting section to see common issues and how to solve them.

Step 1 Mapping the user ID of the host to the container

This step is only required if you run the deb package of LXD. If instead you have the snap package of LXD, skip to Step 2.

Run on the host (only once) the following command (source). (Note: in bash, $UID is the user ID of the current user; if you do not use the bash shell, replace $UID with $(id -u).)

$ echo "root:$UID:1" | sudo tee -a /etc/subuid /etc/subgid
[sudo] password for myusername: 
root:1000:1
$

The command appends a new entry in both the /etc/subuid and /etc/subgid subordinate UID/GID files. It allows the LXD service (that runs as root) to remap our user’s ID ($UID, from the host) as requested.
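
To double-check that the entries are in place, grep for them (a quick sanity check; other root entries, such as LXD's own default range, may legitimately appear as well):

$ grep root /etc/subuid /etc/subgid
/etc/subuid:root:1000:1
/etc/subgid:root:1000:1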

Step 2 Creating the gui LXD profile

You will be creating a LXD profile with settings relevant to launching GUI applications. All the configuration that you did manually in the old How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop post is now included in a single LXD profile.

Download the file lxdguiprofile.txt and save it locally.

Then, create an empty LXD profile with the name gui. Finally, put the downloaded profile configuration into the newly created gui profile.

$ lxc profile create gui
Profile gui created

$ cat lxdguiprofile.txt | lxc profile edit gui
$

Verify that the profile has been created.

$ lxc profile list
+---------------+---------+
| NAME          | USED BY |
+---------------+---------+
| default       | 10      |
+---------------+---------+
| gui           | 0       |
+---------------+---------+

You can view the contents of the profile gui by running lxc profile show gui. A discussion on the profile contents is found two sections below.

Launching gui containers in LXD

Let’s launch some GUI containers in LXD. The gui LXD profile only has configuration related to running GUI applications. Because of this, you first need to specify another profile that provides the disk and networking configuration. The default LXD profile is suitable for this. You may use a bridge profile or a macvlan profile instead.

$ lxc launch --profile default --profile gui ubuntu:18.04 gui1804
Creating gui1804
Starting gui1804

$ lxc launch --profile default --profile gui ubuntu:16.04 gui1604
Creating gui1604
Starting gui1604

You have launched two containers, with Ubuntu 18.04 and Ubuntu 16.04 respectively. You have specified two LXD profiles, default and gui. This means that the new container gets configuration from default, then from gui.
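
If you want to inspect the merged result of the two profiles for a specific container, you can ask LXD for its expanded configuration:

$ lxc config show gui1804 --expanded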

Next, make sure that the containers are up and running. The LXD profile contains instructions to install additional packages automatically for us, and that takes time. Here is how we check: we get a shell as the non-root account ubuntu in the container, and tail the end of the cloud-init log file. It says that there were 0 failures, that it took (in this case) about 22 seconds to complete, and that the startup was successful.

$ lxc exec gui1804 -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@gui1804:~$ tail -6 /var/log/cloud-init.log 
2018-06-25 13:11:54,175 - main.py[DEBUG]: Ran 20 modules with 0 failures
2018-06-25 13:11:54,176 - util.py[DEBUG]: Creating symbolic link from '/run/cloud-init/result.json' => '../../var/lib/cloud/data/result.json'
2018-06-25 13:11:54,176 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2018-06-25 13:11:54,177 - util.py[DEBUG]: Read 12 bytes from /proc/uptime
2018-06-25 13:11:54,177 - util.py[DEBUG]: cloud-init mode 'modules' took 21.822 seconds (22.00)
2018-06-25 13:11:54,177 - handlers.py[DEBUG]: finish: modules-final: SUCCESS: running modules for final
ubuntu@gui1804:~$
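
On Ubuntu 18.04 containers, a shorter check should also be possible with the cloud-init status subcommand, which can block until cloud-init has finished (this assumes a cloud-init version recent enough to provide the subcommand):

ubuntu@gui1804:~$ cloud-init status --wait
status: done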

Subsequently, run glxgears to test graphics hardware acceleration. You may also try glxinfo.

ubuntu@gui1804:~$ glxgears 
366 frames in 5.0 seconds = 73.161 FPS
300 frames in 5.0 seconds = 59.999 FPS
300 frames in 5.0 seconds = 60.000 FPS

XIO: fatal IO error 11 (Resource temporarily unavailable) on X server ":0"
after 1047 requests (42 known processed) with 0 events remaining.
ubuntu@gui1804:~$
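
To confirm that you are getting hardware acceleration rather than software rendering, you can also grep the renderer string out of glxinfo. The exact string depends on your GPU; on an Intel system it looks something along these lines:

ubuntu@gui1804:~$ glxinfo | grep -i "opengl renderer"
OpenGL renderer string: Mesa DRI Intel(R) HD Graphics (Skylake GT2)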

Finally, test the audio and whether Pulseaudio works.

ubuntu@gui1804:~$ pactl info
Server String: unix:/tmp/.pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
Is Local: yes
Client Index: 12
Tile Size: 65472
User Name: myusername
Host Name: mycomputer
Server Name: pulseaudio
Server Version: 8.0
Default Sample Specification: s16le 2ch 44100Hz
Default Channel Map: front-left,front-right
Default Sink: noechosink
Default Source: noechosource
Cookie: 4a83:ba9b
ubuntu@gui1804:~$

Audio works fine as well. If there were an error, the pactl info command would have shown it here.
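
To hear an actual sound rather than just query the server, you can use paplay, the PulseAudio sample player that ships with the same client packages as pactl. This is a sketch, assuming you install a package that provides a sample file:

ubuntu@gui1804:~$ sudo apt install sound-theme-freedesktop
ubuntu@gui1804:~$ paplay /usr/share/sounds/freedesktop/stereo/bell.oga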

Now, you can install deb packages of GUI programs in these containers, such as Firefox, Chromium browser, Chrome, Steam and so on. Installing snap packages inside the containers and having them appear on your desktop is not supported yet. That would require LXD 3.2 and a few modifications to the profile (not covered in this post).

In the following subsections, we see some useful examples.

Running a separate instance of a program

We are creating a GUI container in order to run Firefox from it. It will be a separate, independent instance of Firefox, distinct from our desktop browser.

$ lxc launch --profile default --profile gui ubuntu:18.04 firefox
Creating firefox
Starting firefox

$ lxc exec firefox -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@firefox:~$ sudo apt install firefox
ubuntu@firefox:~$ firefox

Running old programs in old versions of Ubuntu

redet is a Tcl/Tk program that does not run easily on Ubuntu 18.04, because it needs some extra packaging effort for newer versions of Ubuntu. One option would have been to install Ubuntu 12.04 in VirtualBox. Here is the LXD alternative: we launch an Ubuntu 12.04 container, install redet, and finally run it. It took around 40 seconds from launch to GUI.

$ lxc launch --profile default --profile gui ubuntu:12.04 redet
Creating redet
Starting redet

$ lxc exec redet -- sudo su -l ubuntu
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@redet:~$ sudo apt-get install redet
...
ubuntu@redet:~$ redet
Redet 8.26
Copyright (C) 2003-2008 William J. Poser.
This program is free software; you can redistribute it
and/or modify it under the terms of version 3 of the GNU General
Public License as published by the Free Software Foundation.

Running Windows programs with Wine

When you need to run a particular Windows program with Wine, you would prefer not to install all the dependencies on your desktop Ubuntu but rather have them confined into a container. Here is how to do this. We launch a new GUI container called wine, then install Wine (package wine-stable, Wine version 3.0) according to the official instructions, and finally install a Windows program. We can reuse the same container to install more Windows programs.

$ lxc launch --profile default --profile gui ubuntu:18.04 wine
Creating wine
Starting wine

$ lxc exec wine -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.

ubuntu@wine:~$ sudo dpkg --add-architecture i386 
ubuntu@wine:~$ wget -nc https://dl.winehq.org/wine-builds/Release.key
ubuntu@wine:~$ sudo apt-key add Release.key
OK
ubuntu@wine:~$ sudo apt-add-repository https://dl.winehq.org/wine-builds/ubuntu/
ubuntu@wine:~$ sudo apt install wine-stable
...

Then, you can set up the environment to run winetricks in order to easily install Windows programs.

ubuntu@wine:~$ echo export PATH=\"/opt/wine-stable/bin:\$PATH\" >> ~/.profile 
ubuntu@wine:~$ source ~/.profile 
ubuntu@wine:~$ which wine
/opt/wine-stable/bin/wine
ubuntu@wine:~$ sudo apt install zenity unzip cabextract
ubuntu@wine:~$ wget  https://raw.githubusercontent.com/Winetricks/winetricks/master/src/winetricks
ubuntu@wine:~$ chmod +x winetricks 
ubuntu@wine:~$ ./winetricks
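
For example, Internet Explorer 8 should be installable with the ie8 verb. Winetricks verbs can change between releases, so if this one is not available, list the current verbs with ./winetricks list-all first:

ubuntu@wine:~$ ./winetricks ie8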

We have installed Internet Explorer through winetricks and here it is,

ubuntu@wine:~$ wine .wine/drive_c/Program\ Files/Internet\ Explorer/iexplore.exe

A closer look into the gui LXD profile

Let’s have a closer look at the gui LXD profile contents.

$ lxc profile show gui
config:
  environment.DISPLAY: :0
  raw.idmap: both 1000 1000
  user.user-data: |
    #cloud-config
    runcmd:
      - 'sed -i "s/; enable-shm = yes/enable-shm = no/g" /etc/pulse/client.conf'
      - 'echo export PULSE_SERVER=unix:/tmp/.pulse-native | tee --append /home/ubuntu/.profile'
    packages:
      - x11-apps
      - mesa-utils
      - pulseaudio
description: GUI LXD profile
devices:
  PASocket:
    path: /tmp/.pulse-native
    source: /run/user/1000/pulse/native
    type: disk
  X0:
    path: /tmp/.X11-unix/X0
    source: /tmp/.X11-unix/X0
    type: disk
  mygpu:
    type: gpu
    name: gui
used_by:
- /1.0/containers/gui1804

The config node

First, there is environment.DISPLAY, with the default value :0. This is an environment variable that holds the default display of the host’s X11 server. You may have to change this to :1 if you have more displays (for example, multiple graphics cards). Here is how to set it to :1,

$ lxc profile set gui environment.DISPLAY :1

The raw.idmap value refers to the $UID of the non-root user on the host. By default on Ubuntu it is 1000 (both user ID and group ID). It is necessary for the bind-mounting of the host’s sockets into the container. If you need to change it, here is how,

$ lxc profile set gui raw.idmap "both 1001 1001"

The user.user-data section holds instructions for cloud-init. The LXD container images from the ubuntu: repository support cloud-init, and we use it to pass configuration to the newly created container.

In cloud-init, we use runcmd to run two commands. First, we disable shm in PulseAudio so that it uses an alternative transport that works in LXD. Second, we set the PULSE_SERVER environment variable to the Unix socket that we have bind-mounted in the devices node.

In packages, we get cloud-init to install for us the minimal packages to get X11 libraries, Mesa libraries and the PulseAudio client libraries. On top of that, we get cloud-init to run apt update for us so that when we get into the container, we can install packages straight away.

The description node

This node has the description text of the LXD profile.

The devices node

The devices node has two Unix sockets, one for PulseAudio and one for X11.

It also gives access to the gpu device.

The used_by node

We do not edit this node; it lists the existing containers that use this profile.

Creating shortcuts to the gui container applications

If you want to run Internet Explorer from the container, you can simply run the following from a terminal window,

$ lxc exec wine -- sudo --login --user ubuntu /opt/wine-stable/bin/wine /home/ubuntu/.wine/drive_c/Program\ Files/Internet\ Explorer/iexplore.exe

and that’s it.

To make a shortcut, create the following .desktop file on the host and then use desktop-file-install to install it into /usr/share/applications/.

$ cat > lxd-iexplore.desktop
[Desktop Entry]
Version=1.0
Name=Internet Explorer in LXD
Comment=Access the Internet with Wine Internet Explorer through a LXD container
Exec=lxc exec wine -- sudo --login --user ubuntu /opt/wine-stable/bin/wine /home/ubuntu/.wine/drive_c/Program\ Files/Internet\ Explorer/iexplore.exe %U
Icon=/usr/share/icons/HighContrast/scalable/apps-extra/firefox-icon.svg
Type=Application
Categories=Network;WebBrowser;
^D
$ sudo desktop-file-install lxd-iexplore.desktop

This is how the (randomly selected) icon looks in a file manager.

Here is the icon on the Launcher. Simply drag it from the file manager and drop it on the Launcher in order to get the application at your fingertips.

Troubleshooting

Error sudo: unknown user: ubuntu and unable to initialize policy plugin

You get this error when you create a container and then very quickly try to connect to it with a shell. Here is how it looks.

$ lxc launch --profile default --profile gui ubuntu:18.04 gui1804
Creating gui1804
Starting gui1804

$ lxc exec gui1804 -- sudo --user ubuntu --login
sudo: unknown user: ubuntu
sudo: unable to initialize policy plugin

The Ubuntu container images come with cloud-init instructions that, among other things, create the non-root account ubuntu. When you launch a container, it takes several seconds to start the runtime and then execute the cloud-init instructions. You get this error when you try to connect too soon, before the ubuntu account has been created. You can simply retry after a few seconds, until the account has been created.
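
If you are scripting the creation of containers, you can poll from the host until the account exists instead of retrying by hand. A minimal sketch:

$ lxc exec gui1804 -- sh -c 'until id -u ubuntu >/dev/null 2>&1; do sleep 1; done'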

Error Pulseaudio, Connection failure: Connection refused

You got a shell in the newly created container, but when you try to use the audio, you get Connection refused. Here is how it looks,

ubuntu@gui1804:~$ pactl info
Connection failure: Connection refused
ubuntu@gui1804:~$

The cloud-init instructions in the gui LXD profile have commands to install packages and commands to set up the PulseAudio environment variable. The sequence is to install the packages first, and then add PULSE_SERVER to ~/.profile. This means that if you got a shell in the container before cloud-init completed, you missed the addition of PULSE_SERVER to ~/.profile. As a solution, you can log out and then connect again. Or, do

ubuntu@gui1804:~$ source ~/.profile
ubuntu@gui1804:~$ pactl info
Server String: unix:/tmp/.pulse-native
Library Protocol Version: 32
Server Protocol Version: 32
...

I have an existing container, can I make it a gui container?

Yes, you can. You can assign profiles to a container, then restart the container. Here is how,

$ lxc profile assign oldcontainer default,gui
Profiles default,gui applied to oldcontainer
$ lxc restart oldcontainer

I have a gui container, can I remove the gui profile from it?

Yes, by assigning the default or any other profile. Then, restart the container.

$ lxc profile assign gui1804 default
Profiles default applied to gui1804
$ lxc restart gui1804

More errors

Report in the comments any issues that you encounter and I will add them here.

I tested this on both Intel and AMD GPUs and they worked fine for me. For NVidia there might be some additional issues, so I would rather investigate anew than copy from the old post.

Discussion

A year ago, I wrote the first version of the post on how to run GUI applications in a LXD system container. I had put together older sources from the Internet while writing that post. In this post, I used the comments and feedback from last year’s post to automate the process and make it less error-prone.

Up to now, we have seen how to reuse the existing display of our desktop for any GUI apps running in a container. The downside is that a malicious application in a container could attack the desktop, because of the way X11 works. One solution is to use Xephyr instead of our desktop’s DISPLAY (:0). It is straightforward to adapt this post to use Xephyr. However, in terms of usability, it would be ideal to create some sort of VirtualBox clone that uses LXD containers instead of VMs to launch Linux distributions. In such a VirtualBox clone, it would be easy to select whether we want the output in a window on the desktop’s DISPLAY or in a Xephyr window. Moreover, in a Xephyr window we can launch a window manager, and therefore have a proper Linux desktop environment in a window.

on June 25, 2018 07:37 PM

Any group of humans needs some form of governance. It’s a set of rules the group follows in order to address issues and make clear decisions. Even the absence of rules (anarchy) is a form of governance! At the opposite end of the spectrum is dictatorship, where all decisions are made by one person. Open source projects are groups of humans, and they are no exception to this. They can opt for various governance models, which I detailed in a previous article four years ago (how time flies!).

That article compared various overall models in terms of which one would best ensure the long-term survival of the community, avoiding revolutions (or forks). It advocated for a representative democracy model, and since then I've been asked several times for the best recipe to implement it. However, there are numerous trade-offs in the exercise of building governance, and the "best" depends a lot on the specifics of each project's situation. So, rather than detail a perfect one-size-fits-all governance recipe, in this article I'll propose a framework of three basic rules to keep in mind when implementing it.

This simple 3-rule model can be used to create just enough governance, a lightweight model that should be sustainable over the long run, while avoiding extra layers of useless bureaucracy.

Rule #1: Contributor-driven bodies

Governance bodies for an open source project should be selected by the contributors to the project. I'm not talking about governance bodies for open source Foundations (which generally benefit from having some representation of their corporate sponsors chiming in on how their money shall be spent). I'm talking about the upstream open source project itself, and how the technical choices end up being made in a community of contributors.

This rule is critical: it ensures that the people contributing code, documentation, usage experience, mentoring time or any other form of contribution to the project are aligned with the leadership of the project. When this rule is not met, the leadership and the contributors gradually drift apart, to the point where the contributors no longer feel like their leadership represents them. This situation generally ends with contributors making the disruptive decision to fork the project under a new, contributor-aligned governance, leaving the old governance body with a trademark and an empty shell to govern.

One corollary of that first rule is that the governance system must regularly allow replacement of current leaders. Nobody should be appointed for life, and the contributors should regularly be consulted, especially in fast-moving communities.

Rule #2: Aligned with their constituencies

This is another corollary of the first rule. In larger projects, you need enough governance bodies to ensure that each is aligned with its own constituency. In particular, if your community is made of disjoint groups with little to no overlap in membership, and those groups each need decisions to be made, they probably need to each have their own governance body at that level.

The risk we are trying to avoid here is dominance of the larger group over smaller groups. If you use a single governance body for two (or more) disjoint groups, chances are that the larger group will dominate the representative governance body, and therefore will end up making decisions for the smaller group. This is generally OK for global decisions that affect every contributor equally, but matters that are solely relevant to the smaller group should be decided at the smaller group level, otherwise that group might be tempted to fork to regain final call authority over their own things.

Rule #3: Only where decisions are needed

Strict application of rule #2 tends to result in the creation of a large number of governance bodies, that's why you need to balance it with rule #3: only create governance bodies where decisions are actually needed. The art of lightweight governance is, of course, to find the best balance between rule #2 and rule #3.

This rule has two practical consequences. The first one is obvious: you should not create vanity governance bodies, just to give people or organizations a cool title or badge. Numerous communities fall in the trap of creating "advisory" boards with appointed seats, to thank long-standing community members, or give organizations the illusion of control. Those bodies create extra bureaucracy while not being able to make a single call, or worse, trying desperately to assert authority to justify their existence.

The second consequence is that, before creating a governance body at a certain level in the project organization, you should question whether decisions are really needed at that level. If the group needs no final call, or can trust an upper decision body to make the call if need be, maybe that governance body is not needed. If two governance bodies need to cooperate to ensure things work well between them, do you really need to create a governance body above them, or just encourage discussion and collaboration? This trade-off is more subtle, but generally boils down to how badly you need final decisions to be made, vs. letting independently-made decisions live alongside each other.

That is all there is to it! As I said in the introduction, those three rules are not really a magic recipe, but more of a basic framework to help you, in the specific situation of your community, build healthy communities with just enough governance. Let me know if you find it useful!

on June 25, 2018 11:45 AM

June 24, 2018

On this occasion, Francisco Molinero, Francisco Javier Teruelo and Marcos Costales chat about our personal experience with Ubuntu 18.04.

Episode 8 of the second season.

The podcast is available to listen to at:
on June 24, 2018 02:32 PM

June 22, 2018

I’m a maker, baby

Benjamin Mako Hill

 

What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

on June 22, 2018 11:34 PM

June 20, 2018

Plans for DebCamp18

Jonathan Carter

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: Current live weekly images have Calamares installed. It’s just a test, and there’s no indication yet whether it will be available on the beta or final release images; we’ll have to do a good assessment of all the consequences and weigh up what will work out best. I want to put together an initial report with the live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… there are no blockers on this, I just haven’t had enough quiet time to do it (and thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A linux clone of notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!
on June 20, 2018 08:32 AM

June 19, 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


on June 19, 2018 08:27 AM

June 15, 2018

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.


Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately, I will seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf, so if you have any concerns, please reply.

Your fellowship representative

on June 15, 2018 07:28 AM

June 14, 2018

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.
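
If you want to spot VLAs in your own code, the -Wvla option mentioned above can be used directly; GCC will then warn on every variable length array declaration. A quick illustration (example.c is a hypothetical file, and the exact warning text may vary by GCC version):

$ gcc -Wvla -c example.c
example.c:4:3: warning: ISO C90 forbids variable length array ‘buf’ [-Wvla]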

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on June 14, 2018 11:23 PM

Active Searching

Stephen Michael Kellat

I generally am not trying to shoot for terse blog posts. That being said, my position at work is becoming increasingly untenable, since we are physically unable to accomplish our mission goals before funding runs out at 11:59:59 PM Eastern Time on September 30th. Conflicting imperatives were set, and frankly we're starting to hit the point where neither is getting accomplished regardless of how many warm bodies we throw at the problem. It isn't a good sign either when my co-workers with any military experience are sounding out KBR, Academi, and Perspecta.

I'm actively seeking new opportunities. In lieu of a fancy resume in LaTeX, I put forward the relevant details at https://www.linkedin.com/in/stephenkellat/. I can handle LaTeX, though, as seen by the example here that has some copyright-restricted content stripped from it: http://erielookingproductions.info/saybrook-example.pdf.

Ideas for things I could do:

  • Return to being a librarian
  • Work in an Emergency Operations Center (I am Incident Command System trained plus ran through the FEMA EOC basics training)
  • Work as a dispatcher (General class licensed ham radio operator)
  • Teach, since I already do "point of need" education over the phone, such as spending 30 minutes or more explaining to people how the "Estimated Tax Penalty" in the Internal Revenue Code works
  • Work in a journalistic endeavor as I previously worked as a print news reporter and helmed an audio podcast for 6 years
  • Help coordinate interactions between programmers and regulators (Would you want to be in the uncomfortable position Mr. Zuckerberg was in front of the US Congress without support?)

If your project/work/organization/endeavor/skunkworks is looking for a new team player I may prove a worthwhile addition. You more than likely could pay me more than my current employer does.

on June 14, 2018 02:00 AM

June 13, 2018

It’s been quite a while since the last post about Mesa backports, so here’s a quick update on where we are now.

Ubuntu 18.04 was released with Mesa 18.0.0 which was built against libglvnd. This complicates things a bit when it comes to backporting Mesa to 16.04, because the packaging has changed a bit due to libglvnd and would break LTS->LTS upgrades without certain package updates.

So we first need to make sure 18.04 gets Mesa 18.0.5 (the last of the series, so no version bumps are expected until the backport from 18.10) along with an updated libglvnd that bumps the Breaks/Replaces on old package versions. This ensures that the xenial -> bionic upgrade will go smoothly once 18.0.5 is backported to xenial, which will in fact be in -proposed soon.

What this also means is that the only release getting new Mesa backports via the x-updates PPA from now on is 18.04. And I’ve pushed Mesa 18.1.1 there today, enjoy!
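
For completeness, enabling the PPA on an 18.04 machine should look roughly like the following. I am assuming the usual x-updates location under the ubuntu-x-swat team here, so double-check the exact path on Launchpad before adding it:

$ sudo add-apt-repository ppa:ubuntu-x-swat/x-updates
$ sudo apt update
$ sudo apt full-upgrade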

on June 13, 2018 01:08 PM

June 12, 2018

This last weekend I was at FOSS Talk Live 2018. It was fun. And it led me into various thoughts of how I’d like there to be more of this sort of fun in and around the tech community, and how my feelings on success have changed a bit …

on June 12, 2018 09:07 AM

June 07, 2018

KDE Slimbook 2 Review

KDE Slimbook 2  Outside

The kind folks at Slimbook recently sent me the latest generation of their ultrabook-style laptop line for review, the KDE Slimbook 2. You can hear my thoughts on the latest episode of the Ubuntu Podcast, released on June 7th 2018.

Slimbook are a small laptop vendor based in Spain. All the laptops ship with KDE Neon as the default operating system. In addition to their hardware, they also contribute to and facilitate local Free Software events in their area. I was sent the laptop only for review purposes. There's no other incentive provided, and Slimbook didn't see this blog post before I published it.

Being a small vendor, they don't have the same buying power with OEM vendors as other big name laptop suppliers. This is reflected in the price you pay. You're supporting a company who are themselves supporting Free Software developers and communities.

If you're after the cheapest possible laptop, and don't care about its origin or the people behind the device, then maybe this laptop isn't for you. However, if you like to vote with your wallet, then the KDE Slimbook should absolutely be on your list to seriously consider.

Specs

The device I was sent has the following technical specifications.

  • Core i5-7200U @ 2.5GHz CPU
  • Integrated Intel HD 620 GPU
  • 16GB DDR4 RAM
  • 500GB Samsung 960 EVO SSD
  • Intel 7265 Wireless chipset
  • Bluetooth chipset
  • 1080p matte-finish display
  • Full size SD card
  • Headphone socket and built-in mic
  • 720p webcam
  • 1 x USB 3.0 (USB3.1 Gen 1) (Type A), 1 x USB 3.0 (USB3.1 Gen 1) (Type C), 1 x USB 2.0 (Type A)
  • Spanish 'chiclet' style keyboard with power button in top right
  • 3-level keyboard backlight
  • Elan Synaptics touch pad
  • 46Wh battery, TPS S10
  • Power adapter with right-angle plug
  • USB-C dongle

As shipped, mine came in at around ~1098EUR / 956GBP / 1267USD. Much of this can be tweaked, including the keyboard layout, although doing so may extend the lead time on receiving the device. There are plenty of options to tweak, and the site gives a running total as you adjust to taste. There's an i7 version, and I'm told it will soon be possible to order one with a black case, rather than the silver I was shipped. The laptop shipped with one drive, but has capacity for both an M.2 and traditional form factor drive too. So, fully loaded you could order this with 2x1TB SSDs if you're after extra disk space.

Notable is the lack of an Ethernet port, which for some is a dealbreaker, even in these days of ubiquitous, reliable wifi. The solution Slimbook went with is to provide two optional 'dongles'. One connects to USB 3 Type A and presents an Ethernet port. The other connects to the USB C port and provides 3 more traditional USB 3 ports and an Ethernet socket. Slimbook shipped me the latter, which was super useful for connecting more USB devices, and a LAN cable.

The cable on the dongle is relatively short, but it feels solid, and I had no problems with it in my (admittedly infrequent) daily use. One omission on the dongle is the lack of a pass-through USB C port. Once the dongle is attached to the laptop, you've used your only Type C connector. This might not be a problem if you're a luddite like me who has very few USB-C devices, but I imagine that'll be more of an issue going forward. This is an optional dongle though, and you could certainly choose not to get it, but purchase a different one to suit your requirements.

Software

KDE Slimbook  2 Inside

Default install - KDE Neon

The laptop shipped with KDE Neon. It's no secret to listeners of the Ubuntu Podcast that I've been a bit of a KDE fanboy since I began testing Neon a few months back and stuck with it on my ThinkPad T450. So I am a little biased in favour of this particular Linux distribution, and I felt very much at home on the Slimbook with KDE.

On other computers I've tweaked the desktop in various ways - it's the KDE raison d'être to expose settings for everything, and I usually tweak a fair number. However on the Slimbook I wanted to try out the default experience. I found the default applications easy to use, well integrated and reliable. I'm writing this blog post in Kwrite, and have noticed features that I would have not expected here, such as the zoomed out code view and popup spelling completion.

I'm pleasantly surprised by the choices made on the software build here. KDE performs well; it starts up and wakes from suspend quickly. Everything works out of the box, and the selection of applications is small, but wisely chosen. Unsurprisingly, I've augmented the default apps with a few applications I use on a daily basis elsewhere, and they all fit in perfectly. None of the applications I use stood out as alien or non-KDE originals; the theme and app integration is spot on. If I were a Slimbook customer, I'd happily leave the default install pretty much as-is and thoroughly enjoy the experience.

The software is delivered by the usual Ubuntu 16.04 (Xenial) archives, with the KDE Neon archive delivering updates to the KDE Plasma desktop and suite of applications. In addition two PPAs are enabled. One for TLP and another for screenfetch. Personally on a shipping laptop I'd be inclined not to enable 3rd party PPAs, but perhaps supply documentation which details how the user can enable them if required. PPAs are not stable, and can deliver unexpected updates and experiences to users.

I should also mention in the pack was a tri-fold leaflet titled "Plasma Desktop & You". It details a little about KDE, the community and invites new users to not only enjoy the software, but get involved. It's a nice touch.

Alternative options

Slimbook don't appear to offer other Linux distributions - and given the lid of the laptop has a giant KDE logo engraved on it, that wouldn't make a ton of sense anyway.

However, I tested a couple of distros on it via live USB sticks. With Ubuntu 18.04 everything worked, including the USB C Ethernet dongle. For fun I also tried out Trisquel, which also appeared to mostly work, including wired network via the dongle, but wifi didn't function. I didn't attempt any other distros, but given how well KDE Neon (based on Ubuntu 16.04) and Ubuntu 18.04 worked, I figure any distro-hoppers would have no hardware compatibility issues.

Hardware

Display & Graphics

The 1080p matte-finish panel is great. I found it plenty bright and clear enough at maximum brightness. There are over 20 levels of brightness, and I found myself using a balanced setting near the middle most of the time, only needing full brightness occasionally when outside. The viewing angles are fine for a single person using it, but don't lend themselves well to having a bunch of people crouched round the laptop.

I ran a few games to see how the integrated GPU performed, and it was surprisingly okay. My usual tests involve running Mini Metro, which got 50fps; Goat Simulator at 720p, which got me 25fps; and Talos Principle at 1080p, which also clocked in at 25fps. This isn't a gaming laptop, but if you want to play a few casual games or even run some emulators between work, it's more than up to the task.

Performance

I use a bunch of fairly chunky applications on a daily basis, including common Electron apps and tools. I also frequently build software locally using various compilers. The Slimbook 2 was a super effective workstation for these tasks. It rarely broke into a sweat, with very few occasions where the fan spun up. Indeed, I can't really tell you how loud the fan is, because I so rarely heard it.

It boots quickly, the session starts promptly and application startup isn't a problem. Overall as a workstation, it's fine for any of the tasks I do daily.

Keyboard

KDE Slimbook  2 Keyboard

The keyboard is a common 'chiclet' affair, with a full row of function keys that double as media, wifi, touchpad, brightness hardware control buttons. The arrow cluster is bottom right with home/end/pgup/pgdown as secondary functions on those keys. The up/down arrows are vertically half-size to save space, which I quite like.

The "Super" (Windows) key sports a natty little Tux with the Slimbook logo beneath. Nice touch :)

Touchpad

The touchpad is a decent size and works with single and double touch for click/drag and scrolling. I did find the palm rejection wasn't perfect in KDE. I sometimes found myself nuking chunks of a document while typing as my fat thumbs hit the touchpad, selecting text and overtyping it.

I tried fiddling with the palm rejection options in KDE but didn't quite hit the sweet-spot. I've never been a fan of touchpads at all, and would likely just turn off the device (via Fn-F1) if this continued to annoy me, which it didn't especially.

Audio

As with most ultrabook style laptops the audio is okay, but not great. I played my usual test songs and the audio reproduction via speakers lacked volume, was a bit tinny and lacked bass.

With headphones plugged in, it was fine. I rarely use laptop speakers personally, but tend to use a pair of headphones. Nobody wants to hear what I'm listening to :). It's fine for the odd video conference though.

Battery

The model I had was supplied with a 46Wh battery, a small & lightweight ~40W charger and euro power cable & right angled barrel connector to the laptop. Under normal circumstances with medium workload I would get around 7 hours, sometimes more.

Leaving the laptop on, connected to wifi, with KDE power management switched off and brightness at 30% the system lasted around 8 hours 40 mins. I'd anticipate with a variable workload, with KDE power management switched on, you'd get similar times.

I also tried leaving the laptop playing a YouTube video at 1080p, full screen, with wifi switched on and power management suppressed by the browser. The battery gave out after around 5 hours.

The battery takes around 4 hours to re-charge while the laptop is on. This is probably faster if you're not using the laptop at the time, but I didn't test that.

Overall impressions

I've been really happy using the KDE Slimbook 2. The software choices are sensible, and being based on Ubuntu 16.04 meant I could install whatever else I needed outside the KDE ecosystem. The laptop is quiet, feels well built and was a pleasure to use. I'm a little sad to give it back, because I've got used to the form-factor now.

I have only a couple of very minor niggles. The chassis is a little sharp around the edges, much like the MacBook Air it takes design cues from. Secondly, when suspended, the power LED is on the inside of the laptop, above the keyboard. So if, like me, you suspend your laptop by closing the lid, you won't know whether it suspended properly by looking at the slow blink of the power LED. It's a minor thing, but having been burned (literally) in the past by a laptop which unexpectedly didn't suspend, it's something I'm aware of.

Other than that, it's a cracking machine. I'd be happy to use this on a daily basis. If you're in the market for a new laptop, and want to support a Linux vendor, this device should totally be on your list. Thanks so much to Slimbook for shipping the device over and letting me have plenty of time to play with it!

on June 07, 2018 04:28 PM

June 06, 2018

Imagine that you have a package to build. Sometimes it takes minutes. Another takes hours. And then you run htop and see that your machine is idle during such a build… You may ask “Why?”, and the answer is simple: multiple cpu cores.

On x86-64, developers usually have from two to four cpu cores. That can be doubled thanks to HyperThreading. And that’s all. So for some weird reason they go for using make -jX where X is half of their cores, or completely forget to enable parallel builds.

And then I came along with an ARM64 system, with 8 or 24 or 32 or 48 or even 96 cpu cores, and I have to wait and wait and wait for a package to build…

So the next step is usually similar: editing the debian/rules file and adding the --parallel argument to the dh call, or removing the --max-parallel option. And then the build makes use of all those shiny cpu cores. And it goes quickly…
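
As a concrete sketch for the pre-compat-10 case, a minimal dh-style debian/rules with parallel builds enabled looks something like this (note that the recipe line must start with a tab, as in any makefile):

#!/usr/bin/make -f

%:
	dh $@ --parallel

You can then control the level of parallelism at build time, for example with DEB_BUILD_OPTIONS=parallel=48 dpkg-buildpackage -us -uc.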

UPDATE: Riku Voipio told me that debhelper 10 does parallel builds by default, if you set the ‘debian/compat’ value to at least ‘10’.

on June 06, 2018 10:46 AM

June 05, 2018

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations too?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24; votes: 0 for, 21 against, 2 abstentions).

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better having a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

on June 05, 2018 08:40 PM