November 21, 2019

Simplicity is the magic ingredient in any product design. For members of the KDE community, snap development has become that much simpler, thanks to the recent introduction of the KDE neon extension.

Last year, we talked about the KDE build and content snaps, which can greatly speed up the build of KDE application snaps and save disk space. The extension takes this effort one step further, and allows for faster, smoother integration of snaps into the Linux desktop. While there are no shortcuts in life, you can rely on a passionate community of skilled techies to make the journey easier.

Let’s have a look at the magic behind the scenes.

KDE neon extension details

Currently, the KDE neon extension is available as a preview feature, so you will need the edge release of snapcraft to be able to test and use it. This also means it might not be suitable for immediate use in production environments where you expect complete stability and predictability. However, it’s a great way for early adopters and tinkerers to get a feel of the possibilities available in the latest snapcraft release.

snap refresh snapcraft --channel edge

Once you have the edge release installed, you can test the extension. The quickest way to explore the functionality is to download the Kcalc example as part of the Qt5 and KDE frameworks demonstration.

git clone https://github.com/galgalesh/kcalc.git

The snapcraft.yaml file contained in the cloned repo has some important declarations. In the app section, we can see the use of the extension, as well as the common-id field, used to link the AppStream metadata to this application.

apps:
  kcalc:
    common-id: org.kde.kcalc.desktop
    command: kcalc
    extensions:
      - kde-neon
    plugs:
      - home
      - opengl
      - network
      - network-bind
      - pulseaudio

The parts section includes the KDE build snaps, as well as several additional build and runtime dependencies that are not part of the common bundle. This means developers may need to tweak their YAML files for specific requirements, but in most cases, the bulk of dependencies will already be satisfied by the common components.

build-snaps:
      - kde-frameworks-5-core18-sdk
      - kde-frameworks-5-core18
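
If you are curious what these snaps provide or which channels they track, you can query the store with snap info (a quick check, assuming a machine with snapd installed):

snap info kde-frameworks-5-core18-sdk
snap info kde-frameworks-5-core18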

That’s pretty much it! Run snapcraft to build the snap, and then test it.
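
For example, something like the following should work (assuming the build drops a kcalc_*.snap in the working directory; the --dangerous flag is required for locally built, unsigned snaps):

snapcraft
sudo snap install kcalc_*.snap --dangerous
kcalc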

In the background

Once you declare the KDE neon extension in your snapcraft.yaml and build the snap, the following will happen:

  • You will have the latest Qt5 and KDE Frameworks libraries available to your application at runtime. You won’t need to manually satisfy the runtime requirement, and this will also save a fair deal of disk space through the reuse of common components.
  • The extension initialises Qt5 and the desktop environment before the application starts, so functionality like fonts, cursor themes and a11y work correctly. This means you and the users of your snap should enjoy a smoother desktop integration and more consistent looks and behavior. Specifically, your snap will connect to the following content snaps at run time:
    • gtk-common-themes for common icon, cursor and sound themes. This snap will ensure the applications have a consistent look & feel in all desktop environments, as some of the elements may be missing in the system.
    • kde-frameworks-5-core18 for the Qt5 and KDE Frameworks runtime libraries.
  • The extension also configures each application entry with additional plugs: desktop, desktop-legacy, wayland, and x11.

Summary

The KDE neon extension is a handy “extension” to the snapcraft tool, offering a more streamlined build experience. It is also a result of feedback from the community; we invested a lot of effort in making sure developers have a pleasant, seamless work environment so that, in turn, their users can enjoy the software in a consistent, transparent manner.

If you’re an early adopter, you should grab the edge release of snapcraft now and test the extension. Finally, you may be asking, what about GNOME? Worry not, a similar extension is in the works, and we will blog about it in the near future.

If you have any comments, please join our forum for a discussion.

Photo by Artem Maltsev on Unsplash.

on November 21, 2019 03:40 PM

Vanilla Framework is a living design system for our products that will grow along with our organisation.

Vanilla’s component library is used by many internal and external websites, along with the cloud applications JAAS dashboard and MAAS UI. We release updates approximately every 2 weeks for bug fixes, improvements or new components. All members of the team can make changes once they are discussed and agreed upon. We then review and QA carefully before pushing the code to our master branch.

This in turn makes the component lifecycle a very complex thing to manage.

Pitch proposal

Components are reusable parts of the user interface intended to support a variety of websites and applications throughout Canonical.

Individual components can be used in a variety of different patterns and contexts. For example, the text input component can be used to ask for an email address, card number or password.

With any design system, there needs to be an approval process to stop bespoke components that belong in just one project from being introduced into our suite of products. In order to formalise the introduction of new components, we introduced a bi-weekly Vanilla working group for designers, developers and anyone else with an interest in the framework to:

  • Discuss new and existing Vanilla components
  • Look at proposals in the GitHub project
  • Discuss any other issues, questions, etc
Vanilla framework repo proposals.

This meeting has proven a success, allowing every member of the Web and Design team to be involved with the framework, and have sight of new and upcoming features. If you’d like to propose a new component or amend an existing one, you can do this via GitHub using our proposal templates.

Design specification

Once a component is validated in the working group meeting, we add it to our backlog for design and/or build, depending on whether it’s an update or a new component.

Once the new component has been through design exploration, accessibility testing, and final sign-off, it’s ready to hand over to development to build. For any new or existing component we create individual specs, which consist of:

  1. Visual 
  2. Markdown file

As illustrated below, the visual shows what the component should look like, and the markdown file defines the fonts, color, paddings, margins, etc.

Heading icon component design spec.

Improve or remove

Modify

When a component is needed in a specific section of a website or application, it might need some adjustments and modifications. As a team, we try to find the right balance between flexibility and consistency, keeping the principles of the framework at the forefront of our minds as we make these decisions.

Workflow of modifying a component in Vanilla.

In the example illustrated above, we have an existing contextual menu on the left, but in two separate applications we have two different styles: added icons, different positioning, paddings, etc.

To provide flexibility and consistency, we modified the component to allow icons to be positioned left or right, and added a class to change a button from our default neutral style to any button style you desire.

Deprecating components

When deprecating components in the framework, we’re careful that removing unneeded pieces doesn’t cause project regressions. We’re actively reviewing components to ask ourselves:

  • Are we actively using the component?
  • Could we build it from existing components?
  • Has there been an impact on the file size?
Step-by-step process of deprecating a component.

When deprecating components in Vanilla, we follow a process for removing them and marking them as no longer to be used in projects. We make the team aware of this in our release notes and newsletter when upgrading Vanilla to a new version.

Status

When we add, significantly update, or deprecate a component, we update its status so that it’s clear what’s available to use. We’ve created a component status page that documents the component, its status and provides notes with high-level information.

An example of our component status table.

Better consistency and collaboration

A component-based design system breeds visual and functional consistency. In Vanilla, we keep components lightweight using base style elements, on top of which we add classes to define patterns. Having this control enables us to keep styles composable and reusable while keeping a focus on consistency. 

Components encourage better collaboration between design and development, allowing your design language to evolve over time. Ideally, what we see inside our Sketch library is what we build with HTML/CSS and React. And that’s what we continue to do in our workflow.

Consistency and collaboration is everything!


on November 21, 2019 01:51 PM

While much of the work on kernel Control Flow Integrity (CFI) is focused on arm64 (since kernel CFI is available on Android), a significant portion is in the core kernel itself (and especially the build system). Recently I got a sane build and boot on x86 with everything enabled, and I’ve been picking through some of the remaining pieces. I figured now would be a good time to document everything I do to get a build working in case other people want to play with it and find stuff that needs fixing.

First, everything is based on Sami Tolvanen’s upstream port of Clang’s forward-edge CFI, which includes his Link Time Optimization (LTO) work, which CFI requires. This tree also includes his backward-edge CFI work on arm64 with Clang’s Shadow Call Stack (SCS).

On top of that, I’ve got a few x86-specific patches that get me far enough to boot a kernel without warnings pouring across the console. Along with that are general linker script cleanups, CFI cast fixes, and x86 crypto fixes, all in various states of getting upstreamed. The resulting tree is here.

On the compiler side, you need a very recent Clang and LLD (i.e. “Clang 10”, or what I do is build from the latest git). For example, here’s how to get started. First, checkout, configure, and build Clang (and include a RISC-V target just for fun):

# Check out latest LLVM
mkdir -p $HOME/src
cd $HOME/src
git clone https://github.com/llvm/llvm-project.git
mkdir llvm-build
cd llvm-build
# Configure
cmake -DCMAKE_BUILD_TYPE=Release \
      -DLLVM_ENABLE_PROJECTS='clang;lld' \
      -DLLVM_EXPERIMENTAL_TARGETS_TO_BUILD="RISCV" \
      ../llvm-project/llvm
# Build!
make -j$(getconf _NPROCESSORS_ONLN)
# Install cfi blacklist template (why is this missing from "make" above?)
mkdir -p $(echo lib/clang/*)/share
cp ../llvm-project/compiler-rt/lib/cfi/cfi_blacklist.txt lib/clang/*/share/cfi_blacklist.txt

Then checkout, configure, and build the CFI tree. (This assumes you’ve already got a checkout of Linus’s tree.)

# Check out my branch
cd ../linux
git remote add kees https://git.kernel.org/pub/scm/linux/kernel/git/kees/linux.git
git fetch kees
git checkout kees/kspp/cfi/x86 -b test/cfi
# Configure (this uses "defconfig" but you could use "menuconfig"), but you must
# include CC and LD in the make args or your .config won't know about Clang.
make defconfig \
     CC=$HOME/src/llvm-build/bin/clang LD=$HOME/src/llvm-build/bin/ld.lld
# Enable LTO and CFI.
scripts/config \
     -e CONFIG_LTO \
     -e CONFIG_THINLTO \
     -d CONFIG_LTO_NONE \
     -e CONFIG_LTO_CLANG \
     -e CONFIG_CFI_CLANG \
     -e CONFIG_CFI_PERMISSIVE \
     -e CONFIG_CFI_CLANG_SHADOW
# Enable LKDTM if you want runtime fault testing:
scripts/config -e CONFIG_LKDTM
# Build!
make -j$(getconf _NPROCESSORS_ONLN) \
     CC=$HOME/src/llvm-build/bin/clang LD=$HOME/src/llvm-build/bin/ld.lld

Do not be alarmed by various warnings, such as:

ld.lld: warning: cannot find entry symbol _start; defaulting to 0x1000
llvm-ar: error: unable to load 'arch/x86/kernel/head_64.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/head64.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/ebda.o': file too small to be an archive
llvm-ar: error: unable to load 'arch/x86/kernel/platform-quirks.o': file too small to be an archive
WARNING: EXPORT symbol "page_offset_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: EXPORT symbol "vmalloc_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: EXPORT symbol "vmemmap_base" [vmlinux] version generation failed, symbol will not be versioned.
WARNING: "__memcat_p" [vmlinux] is a static (unknown)
no symbols
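
At this point you should have a bzImage. If you want a quick boot smoke-test in a VM before installing, something like this QEMU invocation is one way to do it (a sketch; rootfs.img stands in for whatever root filesystem image you have around):

qemu-system-x86_64 -enable-kvm -m 2048 -nographic \
     -kernel arch/x86/boot/bzImage \
     -append "root=/dev/vda console=ttyS0" \
     -drive file=rootfs.img,format=raw,if=virtio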

Adjust your .config as you want (but, again, make sure the CC and LD args are pointed at Clang and LLD respectively). This should(!) result in a happy bootable x86 CFI-enabled kernel. If you want to see what a CFI failure looks like, you can poke LKDTM:

# Log into the booted system as root, then:
cat <(echo CFI_FORWARD_PROTO) >/sys/kernel/debug/provoke-crash/DIRECT
dmesg

Here’s the CFI splat I see on the console:

[   16.288372] lkdtm: Performing direct entry CFI_FORWARD_PROTO
[   16.290563] lkdtm: Calling matched prototype ...
[   16.292367] lkdtm: Calling mismatched prototype ...
[   16.293696] ------------[ cut here ]------------
[   16.294581] CFI failure (target: lkdtm_increment_int$53641d38e2dc4a151b75cbe816cbb86b.cfi_jt+0x0/0x10):
[   16.296288] WARNING: CPU: 3 PID: 2612 at kernel/cfi.c:29 __cfi_check_fail+0x38/0x40
...
[   16.346873] ---[ end trace 386b3874d294d2f7 ]---
[   16.347669] lkdtm: Fail: survived mismatched prototype function call!

The claim of “Fail: survived …” is due to CONFIG_CFI_PERMISSIVE=y. This allows the kernel to warn but continue with the bad call anyway. This is handy for debugging. In a production kernel that would be removed and the offending kernel thread would be killed. If you run this again with the config disabled, there will be no continuation from LKDTM. :)

Enjoy! And if you can figure out before me why there is still CFI instrumentation in the KPTI entry handler, please let me know and help us fix it. ;)

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on November 21, 2019 05:09 AM

Streaming Television -- A New Hope?

Stephen Michael Kellat

There is a somewhat cheeky interactive quiz posted by The New York Times to help people determine which streaming services they should subscribe to. As you might imagine, it is a newspaper based in the United States of America, so the results of the quiz are biased towards here. Caveat lector.

There has been chatter in the Ubuntu Podcast Telegram group about the usability of such streaming services. There hasn’t been discussion on the podcast yet but you are encouraged to tune in via Spotify, via iTunes, or on Android.

As for me, my consumer broadband has been complaining off and on, acting as if things like Reddit, Twitter, and Facebook simply didn’t exist today. I’m pretty sure I didn’t order any blocking, so I’m questioning what broke where. Sticking to conventional over-the-air broadcasts as well as direct broadcast satellite television service remains the best option for me at the present time in my strange little corner of Ohio.

on November 21, 2019 03:49 AM

November 20, 2019

Today, November 20th, is Trans Day of Remembrance. It is about remembering victims of hate crimes who aren't amongst us anymore. Last year we learned of at least 331 murdered trans people; as always, the real number is higher. And, as always, it mostly affects trans women of color, who are the target of multiple discriminatory patterns.

What is also a pattern is that Brazil accounts for a fair chunk of those murders. Unfortunately the country has been at the top of that statistic for a while, and the election last year of a right-wing, outspokenly queer-hating person as president gave those who feel that hate a sense of legitimacy, which obviously makes it harder to survive these days. My thoughts thus are specifically with the people of Brazil who fight for their survival.

Right-wing parties are on the rise all around the globe, spreading hate. As our Debian Free Software Guidelines say in #5, "No Discrimination Against Persons or Groups", and this is something that we can't limit only to software licenses but also have to extend to the way we work as a community.

If you ask what you can do: support your local community spaces and support groups. I had the pleasure of meeting Grupo Dignidade during my stay in Curitiba for DebConf 19, and was very thankful that a representative of that group joined my Debian Diversity BoF. Thanks again, Ananda, it was lovely having you!

Meu Corpo é Político - my body is political.


on November 20, 2019 01:33 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 605 for the week of November 10 – 16, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on November 20, 2019 02:57 AM

November 19, 2019

Managing dynamic inventory in private subnets using bastion jump box
The title of this post is quite a mouthful, but it describes something I ran into over the last few weeks. I had a VPC in AWS, creating some number of instances in a private network, and it was quite complex to manage these instances using static inventory files. So I will explain how to manage this problem with Ansible.
Before continuing, I want to say that these articles are really good and can help you with these issues.
So you may be asking: if these articles are so good, why write about this again? Easy: I’m doing this in GitLab CI, and I suppose other CI systems will encounter similar issues. It’s not possible to connect to the instances using only the instructions above.

First Step

We get our inventory in a dynamic way. For this we will use the inventory scripts.
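To get the script and its configuration file, something like the following should work (these paths are from the Ansible repository as of late 2019 and may have moved since):

wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.py
wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini
chmod +x ec2.py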
We need to modify the ec2.ini file, uncommenting vpc_destination_variable and setting the value to private_ip_address.
An example:
# For server inside a VPC, using DNS names may not make sense. When an instance
# has 'subnet_id' set, this variable is used. If the subnet is public, setting
# this to 'ip_address' will return the public IP address. For instances in a
# private subnet, this should be set to 'private_ip_address', and Ansible must
# be run from within EC2. The key of an EC2 tag may optionally be used; however
# the boto instance variables hold precedence in the event of a collision.
# WARNING: - instances that are in the private vpc, _without_ public ip address
# will not be listed in the inventory until You set:
vpc_destination_variable = private_ip_address
#vpc_destination_variable = ip_address
Be sure to have an ansible.cfg with the following line:
host_key_checking = False
This is needed because, as we’re running this in a CI, we can’t hit Enter to accept the connection in the terminal.
Then we begin working with our YAML file. As I’m running this in a container, I need to create the .ssh directory and the config file. Here it’s important to add StrictHostKeyChecking=no; if we don’t do this, it will fail in our CI, as we can’t hit Enter. (If you don’t include it and run locally, it will work.)
---
- name: Creates ssh directory
  file:
    path: ~/.ssh/
    state: directory

- name: Create ssh config file in local computer
  copy:
    dest: ~/.ssh/config
    content: |
      Host 10.*.*.*
        User ubuntu
        IdentityFile XXXXX.pem
        StrictHostKeyChecking=no
        ProxyCommand ssh -q -W %h:%p {{ lookup('env', 'IP') }}
      Host {{ lookup('env', 'IP') }}
        User ubuntu
        StrictHostKeyChecking=no
        IdentityFile XXXXX.pem
        ForwardAgent yes

And finally we test it by running the ping module.
---
- name: test connection
  ping:
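
From there, the playbook can be run against the dynamic inventory; a sketch of the CI invocation (ec2.py is the inventory script from the first step, and playbook.yml stands in for whatever file holds the tasks above):

ansible-playbook -i ec2.py playbook.yml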

In case you need the code: https://github.com/DiegoTc/bastionansible
on November 19, 2019 04:09 PM

November 18, 2019

Linux Applications Summit

Jonathan Riddell

I had the pleasure of going to the Linux Applications Summit last week in Barcelona.  A week of talks and discussion about getting Linux apps onto people’s computers.  It’s the third of these summits, but the first ones started out with a smaller scope (and were located in the US), being more focused on Gnome tech, while this renamed summit was true cross-project collaboration.


Oor Aleix here opening the conference (Gnome had a rep there too of course).


It was great to meet with Heather here from Canonical’s desktop team who does Gnome Snaps, catching up with Alan and Igor from Canonical too was good to do.


Here is oor Paul giving his talk about the language used.  I had been minded to use “apps” for the stuff we make but he made the point that most people don’t associate that word with the desktop and maybe good old “programs” is better.


Oor Frank gave a keynote asking why can’t we work better together?  Why can’t we merge the Gnome and KDE foundations, for example?  Well, there are lots of reasons why not, but I can’t help thinking that if we could overcome those reasons we’d all be more than the sum of our parts.

I got to chat with Ti Lim from Pine64 who had just shipped some developer models of his Pine Phone (meaning he didn’t have any with him).

Purism were also there talking about the work they’ve done using Gnomey tech for their Librem 5 phone.  No word on why they couldn’t just use Plasma Mobile where the work was already largely done.

This conference does confirm to me that we were right to make it a goal of KDE to be All About the Apps; the new technologies and stores we now have to distribute our programs mean we can finally get our stuff out to the users directly and quickly.

Barcelona was of course beautiful too, here’s the cathedral in moonlight.


on November 18, 2019 03:04 PM

November 15, 2019


Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, 214.50 work hours have been dispatched among 15 paid contributors. Their reports are available:

  • Abhijith PA did 8.0h (out of 14h assigned) and gave the remaining 6h back to the pool.
  • Adrian Bunk didn’t get any hours assigned as he had been carrying 26h from September, of which he gave 8h back, thus carrying over 18h to November.
  • Ben Hutchings did 22.25h (out of 22.75h assigned), thus carrying over 0.5h to November.
  • Brian May did 10h (out of 10h assigned).
  • Chris Lamb did 18h (out of 18h assigned).
  • Emilio Pozuelo Monfort did 46.25h (out of 21.75h assigned at the beginning of the month and 24.5h assigned at the end of the month).
  • Hugo Lefeuvre did 46.5h (out of 22.75h assigned and 23.75h from September).
  • Jonas Meurer didn’t get any hours assigned and gave back the 14.5h he was carrying from September as he did nothing.
  • Markus Koschany did 22.75h (out of 22.75h assigned).
  • Mike Gabriel did 11.75h (out of 10h assigned and 1.75h from September).
  • Ola Lundqvist did 8.5h (out of 8h assigned and 14h from September), thus carrying over 13.5h to November.
  • Roberto C. Sánchez did 8h (out of 8h assigned).
  • Sylvain Beucler did 22.75h (out of 22.75h assigned).
  • Thorsten Alteholz did 22.75h (out of 22.75h assigned).
  • Utkarsh Gupta did 10.0h (out of 10h assigned).

Evolution of the situation

In October Emilio spent many hours bringing firefox-esr 68 to jessie and stretch, thus expanding the impact from Debian LTS to stable security support. For jessie firefox-esr needed these packages to be backported: llvm-toolchain, gcc-mozilla, cmake-mozilla, nasm-mozilla, nodejs-mozilla, cargo, rustc and rust-cbindgen.
October was also the month when we saw the first paid contributions from Utkarsh Gupta, who was a trainee in September.

Starting in November we also have a new trainee, Dylan Aïssi. Welcome to the team, Dylan!

We currently have 59 LTS sponsors sponsoring 212h per month. Still, as always we are welcoming new LTS sponsors!

The security tracker currently lists 35 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on November 15, 2019 02:26 PM

S12E32 – Dungeon Keeper

Ubuntu Podcast from the UK LoCo

This week we’ve become addicted to Sedna SSD to PCIe controller cards. We discuss why distro hoppers are the worst, bring you some GUI love and round up our listener feedback.

It’s Season 12 Episode 32 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on November 15, 2019 02:05 AM

Previously: v5.2.

Linux kernel v5.3 was released! I let this blog post get away from me, but it’s up now! :) Here are some security-related things I found interesting:

heap variable initialization
In the continuing work to remove “uninitialized” variables from the kernel, Alexander Potapenko added new “init_on_alloc” and “init_on_free” boot parameters (with associated Kconfig defaults) to perform zeroing of heap memory either at allocation time (i.e. all kmalloc()s effectively become kzalloc()s), at free time (i.e. all kfree()s effectively become kzfree()s), or both. The performance impact of the former under most workloads appears to be under 1%, if it’s measurable at all. The “init_on_free” option, however, is more costly but adds the benefit of reducing the lifetime of heap contents after they have been freed (which might be useful for some use-after-free attacks or side-channel attacks). Everyone should enable CONFIG_INIT_ON_ALLOC_DEFAULT_ON=1 (or boot with “init_on_alloc=1”), and the more paranoid system builders should add CONFIG_INIT_ON_FREE_DEFAULT_ON=1 (or “init_on_free=1” at boot). As workloads are found that cause performance concerns, tweaks to the initialization coverage can be added.
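
If you build your own kernels, both Kconfig defaults can be flipped with scripts/config in the source tree; a sketch:

# Zero heap memory at allocation time, and (for the paranoid) at free time too
scripts/config -e CONFIG_INIT_ON_ALLOC_DEFAULT_ON
scripts/config -e CONFIG_INIT_ON_FREE_DEFAULT_ON
# Or, without rebuilding, boot with: init_on_alloc=1 init_on_free=1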

pidfd_open() added
Christian Brauner has continued his pidfd work by creating the next needed syscall: pidfd_open(), which takes a pid and returns a pidfd. This is useful for cases where process creation isn’t yet using CLONE_PIDFD, and where /proc may not be mounted.

-Wimplicit-fallthrough enabled globally
Gustavo A.R. Silva landed the last handful of implicit fallthrough fixes left in the kernel, which allows for -Wimplicit-fallthrough to be globally enabled for all kernel builds. This will keep any new instances of this bad code pattern from entering the kernel again. With several hundred implicit fallthroughs identified and fixed, something like 1 in 10 were missing breaks, which is way higher than I was expecting, making this work even more well justified.

x86 CR4 & CR0 pinning
In recent exploits, one of the steps for making the attacker’s life easier is to disable CPU protections like Supervisor Mode Access (and Execute) Prevention (SMAP and SMEP) by finding a way to write to CPU control registers to disable these features. For example, CR4 controls SMAP and SMEP, where disabling those would let an attacker access and execute userspace memory from kernel code again, opening up the attack to much greater flexibility. CR0 controls Write Protect (WP), which when disabled would allow an attacker to write to read-only memory like the kernel code itself. Attacks have been using the kernel’s CR4 and CR0 writing functions to make these changes (since it’s easier to gain that level of execute control), but now the kernel will attempt to “pin” sensitive bits in CR4 and CR0 to avoid them getting disabled. This forces attacks to do more work to enact such register changes going forward. (I’d like to see KVM enforce this too, which would actually protect guest kernels from all attempts to change protected register bits.)

additional kfree() sanity checking
In order to avoid corrupted pointers doing crazy things when they’re freed (as seen in recent exploits), I added additional sanity checks to verify kmem cache membership and to make sure that objects actually belong to the kernel slab heap. As a reminder, everyone should be building with CONFIG_SLAB_FREELIST_HARDENED=1.
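
As with the heap zeroing options above, this can be enabled in a kernel tree with scripts/config; a sketch:

scripts/config -e CONFIG_SLAB_FREELIST_HARDENED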

KASLR enabled by default on arm64
Just as Kernel Address Space Layout Randomization (KASLR) was enabled by default on x86, now KASLR has been enabled by default on arm64 too. It’s worth noting, though, that in order to benefit from this setting, the bootloader used for such arm64 systems needs to either support the UEFI RNG function or provide entropy via the “/chosen/kaslr-seed” Device Tree property.

hardware security embargo documentation
As there continues to be a long tail of hardware flaws that need to be reported to the Linux kernel community under embargo, a well-defined process has been documented. This will let vendors unfamiliar with how to handle things follow the established best practices for interacting with the Linux kernel community in a way that lets mitigations get developed before embargoes are lifted. The latest (and HTML rendered) version of this process should always be available here.

Those are the things I had on my radar. Please let me know if there are other things I should add! Linux v5.4 is almost here…

© 2019, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on November 15, 2019 01:36 AM

November 14, 2019

Ep 64 – Castanhas e água-pé

Podcast Ubuntu Portugal

On a dark day for the world of podcasts in general, and for Ubuntu podcasting in particular, on this cold and windy night we deliver another episode of the fantastic Podcast Ubuntu Portugal, with the usual Diogo Constantino and Tiago Carrondo.

  • https://podes.pt/
  • https://ubuntu.com/blog/roadmap-for-official-support-for-the-raspberry-pi-4
  • https://ubuntu.com/blog/ua-services-deployed-from-the-command-line-with-ua-client
  • https://radiozero.pt/
  • https://www.leffest.com/en/events/international-symposium-resistances
  • https://ubuntu.com/advantage

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (the same way as in the suggested link) and you will also be supporting us.

Attribution and licenses

Photo credit: The Bone Collector II on Visualhunt / CC BY-NC-ND

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

on November 14, 2019 10:48 PM

November 13, 2019

When talking to various people at conferences or elsewhere over the last year, a recurring topic was that they believed the GTK Rust bindings are not ready for use yet.

I don’t know where that perception comes from but if it was true, there wouldn’t have been applications like Fractal, Podcasts or Shortwave using GTK from Rust, or I wouldn’t be able to do a workshop about desktop application development in Rust with GTK and GStreamer at the Linux Application Summit in Barcelona this Friday (code can be found here already) or earlier this year at GUADEC.

One reason I sometimes hear is that there is no support for creating subclasses of GTK types in Rust yet. While that was true in the past, it is not anymore. But even more important: unless you want to create your own special widgets, you don’t need it. Many examples and tutorials in other languages make use of inheritance/subclassing for the applications’ architecture, but that’s because it is the idiomatic pattern in those languages. In Rust, however, other patterns are more idiomatic, and even for those examples and tutorials in other languages subclassing wouldn’t be the one and only way to design applications.

Almost everything is included in the bindings at this point, so seriously consider writing your next GTK UI application in Rust. While some minor features are still missing from the bindings, none of those should prevent you from successfully writing your application.

And if something is actually missing for your use-case or something is not working as expected, please let us know. We’d be happy to make your life easier!

P.S.

Some people are already experimenting with new UI development patterns on top of the GTK Rust bindings. So if you want to try developing a UI application but want to try something different than the usual signal/callback spaghetti code, also take a look at those.

on November 13, 2019 03:02 PM

November 12, 2019

Full Circle Weekly News #153

Full Circle Magazine


The Debian Project stands with the GNOME Foundation
https://bits.debian.org/2019/10/gnome-foundation-defense-patent-troll.html
Samsung Discontinues Linux On DeX Starting With Android 10
https://fossbytes.com/samsung-discontinues-linux-on-dex-android-10/
Credits:
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on November 12, 2019 07:12 PM

November 11, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 604 for the week of November 3 – 9, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on November 11, 2019 09:26 PM

November 10, 2019

In LXD, you can add multiple settings in a single command line. For example, to limit both the memory to 2GB and the CPUs to a single core, you would run the following in a single line. Obviously, you can set these separately as well.

lxc config set mycontainer limits.memory=2GB limits.cpu=1

See the LXD key/value configuration options for more settings. In most cases, the type of the value is either string or integer. However, there are cases where the type is blob. Currently, there are four cases of blob:

raw.apparmor
raw.idmap
raw.lxc
raw.seccomp

Blob is a special type, and it means that LXD takes the value verbatim and does not perform any processing by itself. This means that if you want to set a multi-line blob, the following will not work, because raw.lxc will keep only one of the values (the last one).

$ lxc config set mycontainer raw.lxc="lxc.cgroup.devices.allow = c 10 237" raw.lxc="lxc.cgroup.devices.allow = b 7 *"
$ lxc config show mycontainer
...
raw.lxc: lxc.cgroup.devices.allow = b 7 *
...
$ 

In addition, the following will not work either, because of parsing errors (multiple = characters).

lxc config set mycontainer raw.lxc='lxc.cgroup.devices.allow = c 10 237\nlxc.cgroup.devices.allow = b 7 *'

Moreover, the following will not work either, because the shell passes the \n through as two literal characters rather than a newline, so LXD stores a single-line string.

$ echo 'lxc.cgroup.devices.allow=c 10 237\nlxc.cgroup.devices.allow=b 7 *' | lxc config set mycontainer raw.lxc -
$ lxc config show mycontainer
...
raw.lxc: |
     lxc.cgroup.devices.allow=c 10 237\nlxc.cgroup.devices.allow=b 7 *
...
$ # But why? Because
$ echo "lxc.cgroup.devices.allow=c 10 237\nlxc.cgroup.devices.allow=b 7 *"
lxc.cgroup.devices.allow=c 10 237\nlxc.cgroup.devices.allow=b 7 *

echo does not interpret the escape characters by default, unless you run shopt -s xpg_echo first, or add the -e flag to the echo command to enable the interpretation of backslash escapes, as Uli reminds us in the comments.

How, then, do we add multi-line blobs to LXD?

We can use printf (or echo -e), as in the following.

$ printf 'lxc.cgroup.devices.allow = c 10 237\nlxc.cgroup.devices.allow = b 7 *' | lxc config set mycontainer raw.lxc -

The configuration now looks like the following.

$ lxc config show mycontainer
...
raw.lxc: |-
   lxc.cgroup.devices.allow = c 10 237
   lxc.cgroup.devices.allow = b 7 *
...
$ 
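
A quoted heredoc should work just as well, since the shell passes the newlines through literally (an untested sketch of the same idea):

$ lxc config set mycontainer raw.lxc - <<'EOF'
lxc.cgroup.devices.allow = c 10 237
lxc.cgroup.devices.allow = b 7 *
EOF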

Alternatively, if you are comfortable using text-mode editors, you can try lxc config edit mycontainer. Make sure to preserve the spaces and avoid adding tabs to the configuration. When you save and exit the text editor, the configuration is parsed and saved.

on November 10, 2019 03:35 PM

November 07, 2019

Ep 63 – Vivinho da Silva

Podcast Ubuntu Portugal

Straight from the copper and the optical fibres to the ether, here is one more episode of Podcast Ubuntu Portugal, this time also for the Rádio Zero audience. The fantastic work of the UBports Foundation is back on the agenda, along with a journey through Diogo’s travels.

  • https://ubports.com/pt_PT/blog/ubports-blog-1/post/ubuntu-touch-ota-10-release-239
  • https://www.youtube.com/watch?v=S-FSyMgkYQY
  • https://ubports.com/pt_PT/blog/ubports-blog-1/post/ubuntu-touch-ota-11-release-252
  • https://ubports.com/pt_PT/blog/ubports-blog-1/post/october-2019-status-of-ubuntu-touch-on-librem-5-smartphone-250
  • https://www.youtube.com/channel/UCLCZ80HI7OJaMEGTTsEDDpA/
  • https://www.humblebundle.com/books/linux-bsd-bookshelf-2019-books?partner=PUP

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

You can support the podcast by using the Humble Bundle affiliate links, because when you use those links to make a purchase, part of what you pay goes to Podcast Ubuntu Portugal.
You can get all of this for 15 dollars, or different parts of it depending on whether you pay 1, or 8.
We think this is worth well more than 15 dollars, so if you can, pay a little more, since you have the option to pay as much as you want.

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (the same way as in the suggested link) and you will also be supporting us.

Attribution and licenses

Photo credit: The Bone Collector II on VisualHunt / CC BY-NC-ND

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

on November 07, 2019 10:30 PM

S12E31 – Ikari Warriors

Ubuntu Podcast from the UK LoCo

This week we’ve been moonlighting on all the podcasts and live streaming to share Ubuntu bug reporting skills. We discuss Ubuntu on Raspberry Pi 4, Debian’s new homepage, elementary updates and Fedora 31. We also round up some events and our picks from the tech news.

It’s Season 12 Episode 31 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on November 07, 2019 03:00 PM

November 06, 2019

Full Circle Weekly News #152

Full Circle Magazine


Project Trident Ditches BSD for Linux
https://itsfoss.com/bsd-project-trident-linux/
Hyperbola GNU / Linux-libre releases “Milky Way” v0.3
https://www.hyperbola.info/news/milky-way-v03-release/
Credits:
Ubuntu “Complete” sound: Canonical

Theme Music: From The Dust – Stardust
https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on November 06, 2019 07:23 PM

November 05, 2019

Early F-Cycle Adventuring

Stephen Michael Kellat

This blog does recount my misadventures in using computers. I had not intended to get back into testing so quickly. After several frustrating failures in trying to upgrade to 19.10 that left me with a system that refused to boot, I chose to take a risk.

After multiple failed upgrade attempts as well as a failed attempt to install something completely different, I was about to settle for just using the Windows Subsystem for Linux under Windows 10 1903. The problem is that Windows 10 just feels so utterly slow to me compared to Xubuntu or even Ubuntu MATE. This may come from having to use very unmaintained computers for almost six years in a government job that ran very old versions of Microsoft Windows that were very behind the rest of the world.

Considering all that I decided to push forward. I got Focal Fossa installed on my laptop and it is working for the time being.

I recorded on the ISO Tracker what are effectively my test results. Sadly we don’t have enough people testing ISO images. You don’t have to do installs to bare metal hardware like I did, as you can test in a virtual machine too. Of course I’ll be filing bugs when software breaks as I use my laptop from here on out.

Testing is not glamorous. It isn’t a realm of heroes compared to building some great new framework for building static websites. However, it is important work that helps spare people the extreme frustration of not being able to get an upgrade or fresh install to work. If we want to build an ever larger community of users in the Ubuntu kingdoms then we do need to ensure barriers to entry are not insurmountable.

Focal Fossa is a Long Term Support release. With a 3-5 year support window, depending upon the flavor, there will be a great need to get it right. According to my last look at the schedule it appears we’ll be targeting April 23rd, 2020. That’s just under 171 days away, with a variety of holidays cropping up during that time.

Are you ready to join in any facet of this great adventure?

on November 05, 2019 03:12 AM

I am excited to be back for another Reddit Ask Me Anything on Wed 20th November 2019 at 8.30am Pacific / 11.30am Eastern.

For those unfamiliar with Reddit AMAs, it is essentially a way in which people can ask questions that someone will respond to. You simply add your questions (serious or fun) and I will respond to as many as I can. It has been a while since my last AMA, so I am looking forward to this one!

Feel free to ask any questions you like. Here is some food for thought:

  • The value of building communities, what works, and what doesn’t
  • The methods and approaches to community management, leadership, and best practice.
  • My new book, ‘People Powered: How communities can supercharge your business, brand, and teams‘, what is in it, and what it covers.
  • Recommended tools, techniques, and tricks to build communities and get people involved.
  • Working at Canonical, GitHub, XPRIZE, and elsewhere.
  • The open source industry, how it has changed, and what the future looks like.
  • Remote working and online collaboration, and what the future looks like
  • The projects I have been involved in such as Ubuntu, GNOME, KDE, and others.
  • The driving forces behind people and groups, behavioral economics, etc.
  • My other things such as my music, conferences, writing etc.
  • Anything else – politics, movies, news, tech…ask away!

If you want to ask about something else though, go ahead! 🙂

How to Join

Joining the AMA is simple. Just follow these steps:

  • Be sure to have a Reddit account. If you don’t have one, head over here and sign up.
  • On Wednesday 20th November 2019 at 8.30am Pacific / 11.30am Eastern (see other time zone times here) I will share the link to my AMA on Twitter (I am not allowed to share it until we run the AMA). You can look for this tweet by clicking here.

  • Click the link in my tweet to go to the AMA and then click the text box to add your question(s).
  • Now just wait until I respond. Feel free to follow up, challenge my response, and otherwise have fun!

I hope to see you all there!


on November 05, 2019 01:30 AM

November 04, 2019

Netdata does real-time health monitoring and performance troubleshooting for systems and applications. It helps you instantly diagnose slowdowns and anomalies in your infrastructure with thousands of metrics, interactive visualizations, and insightful health alarms.

When you set it up on your system, Netdata sets up a Web page where you can view real-time information, including CPU load, network traffic and lots more. It looks like the following.

Netdata running on a system. On top it shows the dashboard. On the right, the system elements that are monitored.

Looks good, let’s install it! Normally, you would install Netdata on the host so that you can have visibility of the whole host. In addition, it makes sense to install on the host because since 2016, Netdata actually understands LXC/LXD containers.

However, in this post we install Netdata in a LXD container. An unprivileged LXD container. The purpose of this exercise is to get to know Netdata before installing it on the host, because your host is too important to install software on before testing it first in a LXD container.

In addition to testing the software before installing it on the host, we get to see in practice how well a container is isolated from the host. That is, can the container reveal to us any significant information about the host? Obviously, it is possible to deduce whether the host is under a lot of load, but let’s see this play out in front of us.

Installing Netdata in a container

We launch a container, get a shell and install Netdata. We choose the installation command that compiles Netdata for us; the other download option would fetch pre-compiled static amd64 packages instead. The installation is uneventful; we need to press Enter a few times until the installation completes.

$ lxc launch ubuntu:18.04 netdata
 Creating netdata
 Starting netdata
$ lxc exec netdata -- sudo --user ubuntu --login
To run a command as administrator (user "root"), use "sudo ".
 See "man sudo_root" for details.

ubuntu@netdata:~$ bash <(curl -Ss https://my-netdata.io/kickstart.sh)
...
We detected these:
 Distribution    : ubuntu
 Version         : 18.04
 Codename        : 18.04.3 LTS (Bionic Beaver)
 Package Manager : install_apt_get
 Packages Tree   : debian
 Detection Method: /etc/os-release
 Default Python v: 2 
...
        IMPORTANT << 
         Please make sure your system is up to date
         by running:   apt-get update      
 apt-get install autoconf 
 apt-get install autoconf-archive 
 apt-get install automake 
 apt-get install gcc 
 apt-get install libjudy-dev 
 apt-get install liblz4-dev 
 apt-get install libmnl-dev 
 apt-get install libssl-dev 
 apt-get install libuv1-dev 
 apt-get install make 
 apt-get install pkg-config 
 apt-get install python 
 apt-get install uuid-dev 
 apt-get install zlib1g-dev 
 Press ENTER to run it > 

Netdata explains that it is going to install a set of packages. It does not pass -y to the apt commands, which means we need to press Enter during each apt install. Ideally, all installations could be merged into a single command. Below is the second time you need to press Enter. It is good to take note of the locations of the files. When you press Enter, Netdata will start the compilation and then the installation.

...
^
|.-.   .-.   .-.   .-.   .  netdata                                        
|   '-'   '-'   '-'   '-'   real-time performance monitoring, done right!  
+----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+--->

You are about to build and install netdata to your system.

It will be installed at these locations:

the daemon     at /usr/sbin/netdata
config files   in /etc/netdata
web files      in /usr/share/netdata
plugins        in /usr/libexec/netdata
cache files    in /var/cache/netdata
db files       in /var/lib/netdata
log files      in /var/log/netdata
pid file       at /var/run/netdata.pid
logrotate file at /etc/logrotate.d/netdata 

This installer allows you to change the installation path.
Press Control-C and run the same command with --help for help.

Press ENTER to build and install netdata to your system > 

Here is Netdata having completed the installation. It listens on port 19999 on all interfaces, so we can access it from the host using the IP address of the container. It is already running, and we can see the commands to stop and start it. There is a script to uninstall and a script to update Netdata. A cron job automatically checks for updates and will email us if an update fails. Since there is no MTA in the container, we will not get any such emails.

netdata by default listens on all IPs on port 19999,
so you can access it with:

  http://this.machine.ip:19999/

To stop netdata run:

  systemctl stop netdata

To start netdata run:

  systemctl start netdata

Uninstall script copied to: /usr/libexec/netdata/netdata-uninstaller.sh

--- Install (but not enable) netdata updater tool --- 
 Update script is located at /usr/libexec/netdata/netdata-updater.sh

--- Check if we must enable/disable the netdata updater tool --- 
Adding to cron
Auto-updating has been enabled. Updater script linked to: /etc/cron.daily/netdata-update

netdata-updater.sh works from cron. It will trigger an email from cron
only if it fails (it should not print anything when it can update netdata).

--- Wrap up environment set up --- 
Preparing .environment file
Setting netdata.tarball.checksum to 'new_installation'

--- We are done! --- 

^
   |.-.   .-.   .-.   .-.   .-.   .  netdata                          .-.   .-
   |   '-'   '-'   '-'   '-'   '-'   is installed and running now!  -'   '-'  
   +----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+-----+--->

enjoy real-time performance and health monitoring…

OK   

ubuntu@netdata:~$ 

We are ready to start using Netdata.

Using Netdata

The IP address of the Netdata container in my case is 10.10.10.199. Therefore, I use the browser on my host to access the URL http://10.10.10.199:19999/

Here is what the first screen looks like.

View of Netdata running in a LXD container. In my case, the URL was http://10.10.10.199:19999/

Before doing more testing, click on Settings and make the following change. We change the When to refresh the charts option to Always. By doing so, the charts are refreshed even if the browser does not have the focus. This is handy when we are testing something in a terminal window and we want immediate feedback from Netdata.

The Settings dialog in Netdata. We see the Performance tab and changed to refresh the charts to Always.

We can now view all the charts and try to affect them by, for example, creating load or traffic in the container and on the host. In this way, we can sense what information may get leaked from the host to the container, if any.

Interpreting the Netdata charts

Let’s see again the screenshot of Netdata, of the first screen.

Netdata charts, in a LXD system container.

First, we can see that the actual use of the swap partition is shown in the container. Should this be shown, should it not? Does it make a practical difference? This could be hidden through lxcfs.

Second, there is significant inbound and outbound network traffic. The container is idle. Could that be traffic from the host appearing (as traffic count) in the container? Let’s tcpdump it. This is actually the Netdata traffic between the host and the container. The container cannot see traffic that belongs to the host (or other containers), nor packet counts from others.

Third, the used RAM is the percentage of the container’s used RAM over the total available RAM to the container. That is, if we did not set a total memory limit to the container, the total RAM is the RAM of the host. LXD does not reveal how much RAM is used on the host or other containers. If you encounter software that checks how much RAM is reported as available and uses as much as possible, then this software will have a bad time. I think the mysql package was doing this a few years back, and ended up not being able to start.
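
If you would rather have the container report a bounded total, you can give it a memory limit, for example:

$ lxc config set netdata limits.memory 2GB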

Affecting the charts

We can create CPU load, we can cause disk reads and disk writes, and so on. Here is a way to cause CPU load and disk reads. This command reads files and uses the CPU to compute their SHA-1 sums.

ubuntu@netdata:~$ sha1sum /usr/bin/*
Disk reads and CPU load when running sha1sum /usr/bin/*.

If you run the command again, and your system has enough RAM, you will probably not notice any more disk reads, just CPU load, because the files get cached in the disk cache. To test the allocation of RAM, we could write a program to malloc() big amounts of memory; a quick improvised version of that is sketched below.
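
This one-liner allocates a zeroed 1GB buffer and holds on to it until you press Enter (a rough sketch; watch the RAM chart while it runs):

ubuntu@netdata:~$ python3 -c 'x = bytearray(1024**3); input()'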

But why do all this when we can go pro? Let’s install stress-ng, both on the host and in the netdata container.

ubuntu@netdata:~$ sudo apt install stress-ng

Stress memory and CPU

Run the following to allocate 1GB of RAM. By default, stress-ng uses workers to stress the resource you are allocating.

ubuntu@netdata:~$ stress-ng --timeout 3s --vm 1G
stress-ng: info:  [4388] dispatching hogs: 1 vm
stress-ng: info:  [4388] successful run completed in 3.01s
ubuntu@netdata:~$ 

Try running this in the container and on the host.
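
A note on the flags: --vm sets the number of workers, and the amount of memory per worker is set separately with --vm-bytes (the stress-ng default is 256MB). So an explicit invocation would look something like:

ubuntu@netdata:~$ stress-ng --timeout 3s --vm 2 --vm-bytes 512M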

Stress the I/O

Run the following. 10 is the number of workers. This affects the disk writes.

ubuntu@netdata:~$ stress-ng --timeout 3s --io 10
stress-ng: info:  [4467] dispatching hogs: 10 io
stress-ng: info:  [4467] successful run completed in 4.54s
ubuntu@netdata:~$ 

Conclusion

We have set up Netdata in a LXD container in order to view real-time information, but most importantly to get to know how the software works before we install it on the host. Do more tests and get a good feel for what the view of your system is from the point of view of a LXD container.

It is very useful to try software in a system container before installing it on the host. In this specific case, we cannot make effective use of Netdata when it is installed in a container, because the tool needs to access too many metrics on the host. These metrics are over 2000 in number, and it does not look practical to expose them to the container.

The next step after testing Netdata in a LXD system container is to graduate Netdata and install it on the host. That’s the subject of a future post. It is an important subject because Netdata understands LXD containers, and is able to provide real-time information for each container.

on November 04, 2019 09:17 PM
Last week I attended the Embedded Linux Conference (Europe 2019) and presented a talk on stress-ng.  The slide deck for this presentation is now available.
on November 04, 2019 09:33 AM

GitHub has seen a startling level of growth. With over 31 million developers spread across 96 million repositories, it has become the quite literal hub for how people build technology (and was recently acquired by Microsoft for $7.5 billion). Throughout this remarkable growth, GitHub has continued to evolve as a product and platform, and the fit and finish of a reliable, consistent product has been a staple of GitHub throughout the years.

Jason Warner is SVP of Technology at GitHub and is tasked with delivering this engineering consistency. Jason and I used to work on the engineering management team at Canonical before he went to Heroku and then ultimately GitHub.

In this episode of Conversations With Bacon, we get into a truly fascinating discussion about not just how GitHub builds GitHub, but also Jason’s philosophy, experience, and perspective when it comes to effective leadership.

We discuss how GitHub evaluates future features, how they gather customer/user feedback, how Jason structures his team and bridges product and engineering, what he believes truly great leadership looks like, and where the future of technology leadership is going.

Since I started doing Conversations With Bacon, this conversation with Jason has been one of my favorites: there is so much in here that I think will be interesting and insightful for you folks. If you are interested in technology and leadership, and especially if you are curious about how the GitHub machine works, this one is well worth a listen. Enjoy!

The post Jason Warner on GitHub and Leadership appeared first on Jono Bacon.

on November 04, 2019 04:55 AM

November 03, 2019

Bits and pieces

AIMS Desktop talk: On the 1st of October I gave a talk at AIMS titled “Why AIMS Desktop?” where I talked about the Debian-based system we use at AIMS, and went into some detail about what Linux is, what Debian is, and why it’s used within the AIMS network. I really enjoyed the reaction from the students and a bunch of them are interested in getting involved directly with Debian. I intend to figure out a way to guide them into being productive community members without letting it interfere with their academic program.

Catching up with Martin: On the 12th of October I had lunch with Martin Michlmayr. Earlier this year we were both running for DPL, which was an interesting experience; the last time we met, neither of us had any intention of doing so. This was the first time we talked in person since then, and it was good reflecting over the last year; we also talked about a bunch of non-Debian stuff.

Cover art of our band?

Activity log

2019-10-08: Upload package bundlewrap (3.7.0-1) to Debian unstable.

2019-10-08: Upload package calamares (3.2.14-1) to Debian unstable.

2019-10-08: Sponsor package python3-fastentrypoints (0.12-2) for Debian unstable (Python team request).

2019-10-08: Sponsor package python3-cheetah (3.2.4-1) for Debian unstable (Python team request).

2019-10-14: Upload package calamares (3.2.15-1) to Debian unstable.

2019-10-14: Upload package kpmcore (4.0.1-1) to Debian unstable.

2019-10-14: Upload package gnome-shell-extension-disconnect-wifi (21-1) to Debian unstable.

2019-10-15: Upload package partitionmanager (4.0.0-1) to Debian unstable.

2019-10-15: Sponsor package python-sabyenc (3.3.6-1) for Debian unstable (Python team request).

2019-10-15: Sponsor package python-jaraco.functools (2.0-1) for Debian unstable (Python team request).

2019-10-15: Sponsor package python3-gntp (1.0.3-1) for Debian unstable (Python team request).

2019-10-15: Review package python3-portend (2.5-1) (Not yet ready) (Python team request).

2019-10-15: Review package python3-tempora (1.14.1) (Not yet ready) (Python team request).

2019-10-15: Upload package python3-flask-silk (0.2-15) to Debian unstable.

2019-10-15: Upload package tuxpaint (0.9.24~git20190922-f7d30d-1~exp2) to Debian experimental.

2019-10-15: Upload package python3-flask-silk (0.2-16) to Debian unstable.

2019-10-16: Upload package gnome-shell-extension-multi-monitors (19-1) to Debian unstable.

2019-10-16: Upload package python3-flask (0.6.2-5) to Debian unstable.

2019-10-16: Sponsor package buildbot (2.4.1-1) for Debian unstable (Python team request).

2019-10-16: Signed/sent keys from DebConf19 KSP.

2019-10-17: Publish blog entry “Calamares plans for Debian 11“.

2019-10-17: Upload package kpmcore (4.0.1-2) to Debian unstable (Thanks to Alf Gaida for the merge request with fixes) (Closes: #942522, #942528, #942512).

2019-10-22: Sponsor package membernator (1.1.0-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package isbg (2.1.5-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package python-pluggy (0.13.0-1) for Debian unstable (Python team request).

2019-10-22: Sponsor package python-pyqt5chart (5.11.3+dfsg-2) for Debian unstable (Python team request).

2019-10-23: Upload package tetzle (2.1.4+dfsg1-3) to Debian unstable.

2019-10-23: Upload package partitionmanager (4.0.0-2) to Debian unstable.

2019-10-24: Upload package tetzle (2.1.5+dfsg1-1) to Debian unstable.

2019-10-24: Upload package xabacus (8.2.2-1) to Debian unstable.

2019-10-24: Review package fpylll (Needs some more work) (Python team request).

2019-10-28: Upload package gnome-shell-extension-dash-to-dock (25-1) to Debian unstable.

on November 03, 2019 06:18 PM

With version 12.0 Gitlab has introduced an interesting new feature: Visual Reviews! You can now leave comments on Merge Requests directly from the page you are visiting in your staging environment, without having to change tab.

If you already have Continuous Integration and Continuous Delivery enabled for your websites, adding this feature is blazing fast, and it will make your reviewers’ lives easier! If you want to get started with CI/CD in Gitlab, I’ve written about it in the past.

The feature

While the official documentation has a good overview of the feature, we can take a deeper look with some screenshots:

Inserting a comment: we can comment directly from the staging environment! Additional metadata will be collected and published as well, making it easier to reproduce a bug.

Comment appears in the MR: our comment (plus the metadata) appears in the merge request, becoming actionable.

Implementing the system

Adding the snippet isn’t complicated; you only need some information about the MR. Basically, this is what you should add to the head of your website for every merge request:

<script
  data-project-id="CI_PROJECT_ID"
  data-merge-request-id="CI_MERGE_REQUEST_IID"
  data-mr-url='https://gitlab.example.com'
  data-project-path="CI_PROJECT_PATH"
  id='review-app-toolbar-script'
  src='https://gitlab.example.com/assets/webpack/visual_review_toolbar.js'>
</script>

Of course, asking your team to add the HTML snippet and fill it in with the right information for every merge request isn’t feasible. We will instead take advantage of Gitlab CI/CD to inject the snippet and autocomplete it with the right information for every merge request.

First we need the definition of a Gitlab CI job to build our client:

buildClient:
  image: node:12
  stage: build
  script:
    - ./scripts/inject-review-app-index.sh
    - npm ci
    - npm run build
  artifacts:
    paths:
      - build
  only:
    - merge_requests
  cache:
    paths:
      - .npm

The important bit of information here is only: merge_requests. When used, Gitlab injects an environment variable called CI_MERGE_REQUEST_IID into the job, containing the unique ID of the merge request: we will use it to fill in the HTML snippet. The official documentation of Gitlab CI explains all the other keywords of the YAML in detail.
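
If you want to double-check which values your pipeline actually receives, a throwaway job that just echoes the predefined variables is a quick sanity check (a sketch; the job name debugVars is arbitrary):

debugVars:
  stage: build
  script:
    - echo "$CI_PROJECT_ID $CI_MERGE_REQUEST_IID $CI_PROJECT_PATH"
  only:
    - merge_requests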

The script

The other important bit is the script that actually injects the code: it’s a simple bash script which looks for the </title> tag in the HTML and appends the needed snippet:

#!/bin/bash

quoteSubst() {
  # Slurp the input and backslash-escape &, / and \ plus embedded newlines,
  # so the text is safe on the right-hand side of a sed s/// expression.
  IFS= read -d '' -r < <(sed -e ':a' -e '$!{N;ba' -e '}' -e 's/[&/\]/\\&/g; s/\n/\\&/g' <<<"$1")
  # Print the escaped text without the trailing newline sed appends.
  printf %s "${REPLY%$'\n'}"
}

TEXT_TO_INJECT=$(cat <<-HTML
    <script
      data-project-id="${CI_PROJECT_ID}"
      data-merge-request-id="${CI_MERGE_REQUEST_IID}"
      data-mr-url='https://gitlab.com'
      data-project-path="${CI_PROJECT_PATH}"
      id='review-app-toolbar-script'
      src='https://gitlab.com/assets/webpack/visual_review_toolbar.js'>
    </script>
HTML
)

sed -i "s~</title>~&$(quoteSubst "${TEXT_TO_INJECT}")~" public/index.html

Thanks to the Gitlab CI environment variables, the snippet already has all the information it needs to work. Of course, you should customize the script with the right path for your index.html (or any other page you have).
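
To sanity-check the substitution locally, you can export the variables the CI would normally provide and run the script by hand; the values below are made up for illustration:

export CI_PROJECT_ID=1234 CI_MERGE_REQUEST_IID=42 CI_PROJECT_PATH=group/project
./scripts/inject-review-app-index.sh
grep -A 2 'review-app-toolbar-script' public/index.html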

Now everything is ready! Your team members only need to generate personal access tokens to log in, and they are ready to go! Store your personal access token in your password manager, so you don’t need to generate a new one each time.

Future features

One of the coolest things about Gitlab is that everything is always a work in progress, and each feature gains some new goodies in every release. This is true for the Visual Reviews App as well. There is an epic that collects all the improvements they want to make, including removing the need for an access token, and adding the ability to take screenshots that will be inserted in the MR comments as well.

That’s all for today, I hope you found this article useful! For any comments, feedback, or criticism, write to me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

I have also changed the blog theme to a custom version of Rapido.css. I think it increases the readability, but let me know what you think!

Ciao,
R.

on November 03, 2019 02:35 PM

October 30, 2019

In Ubuntu’s development process, new package versions don’t immediately get released; they enter the -proposed pocket first, where they are built and tested. In addition to testing the package itself, other packages are also tested together with the updated package, to make sure the update doesn’t break them either.

The packages in the -proposed pocket are listed on the update excuses page with their testing status. When a package is successfully built and all triggered tests passed the package can migrate to the release pocket, but when the builds or tests fail, the package is blocked from migration to preserve the quality of the release.

Sometimes packages are stuck in -proposed for a longer period because the build or test failures can’t be solved quickly. In the past several people may have triaged the same problem without being able to easily share their observations, but from now on if you figured out something about what broke, please open a bug against the stuck package with your findings and mark the package with the update-excuse tag. The bug will be linked to from the update excuses page so the next person picking up the problem can continue from there. You can even leave a patch in the bug so a developer with upload rights can find it easily and upload it right away.

The update-excuse tag applies to the current development series only, but it does not come alone. To leave notes for a specific release’s -proposed pocket, use the update-excuse-$SERIES tag, for example update-excuse-bionic to have the bug linked from 18.04’s (Bionic Beaver’s) update excuses page.

Fixing failures in -proposed is a big part of the integration work done by Ubuntu developers, and help is always very welcome. If you see your favorite package stuck on update excuses, please take a look at why, and maybe open an update-excuse bug. You may be the one who helps the package make it into the next Ubuntu release!

(The new tags were added by Tiago Stürmer Daitx and me during the last Canonical engineering sprint’s coding day. Fun! 🙂 )

on October 30, 2019 11:45 AM

October 29, 2019

After just over three years, my family and I are now Lawful Permanent Residents (Green Card holders) of the United States of America. It’s been a long journey.

Acknowledgements

Before anything else, I want to credit those who made it possible to reach this point. My then-manager Duncan Mak, his manager Miguel de Icaza. Amy and Alex from work for bailing us out of a pickle. Microsoft’s immigration office/HR. Gigi, the “destination services consultant” from DwellWorks. The immigration attorneys at Fragomen LLP. Lynn Community Health Center. And my family, for their unwavering support.

The kick-off

It all began in July 2016. With support from my management chain, I went through the process of applying for an L-1 intracompany transferee visa – a 3-5 year dual-intent visa, essentially a time-limited secondment from Microsoft UK to Microsoft Corp. After a lot of paperwork and an in-person interview at the US consulate in London, we were finally granted the visa (and L-2 dependent visas for the family) in April 2017. We arranged the actual move in July 2017, giving us a short window to wind up our affairs in the UK as much as possible, and run out most of my eldest child’s school year.

We sold the house, sold the car, gave to family all the electronics which wouldn’t work in the US (even with a transformer), and stashed a few more goodies in my parents’ attic. Microsoft arranged for movers to come and pack up our lives; they arranged a car for us for the final week; and a hotel for the final week too (we rejected the initial golf-spa-resort they offered and opted for a budget hotel chain in our home town, to keep sending our eldest to school with minimal disruption). And on the final day we set off at the crack of dawn to Heathrow Airport, to fly to Boston, Massachusetts, and try for a new life in the USA.

Finding our footing

I cannot complain about the provisions made by Microsoft – although not without snags. The 3.5 hours we spent in Logan airport waiting at immigration due to some computer problem on the day did not help us relax. Neither did the cat arriving at our company-arranged temporary condo before we did (with no food, or litter, or anything). Nor did the fact that the satnav provided with the company-arranged hire car didn’t work – and that when I tried using my phone to navigate, it shot under the passenger seat the first time I had to brake, leading to a fraught commute from Logan to Third St, Cambridge.

Nevertheless, the liquor store under our condo building, and my co-workers Amy and Alex dropping off an emergency run of cat essentials, helped calm things down. We managed a good first night’s exhausted sleep, and started the following day with pancakes and syrup at a place called The Friendly Toast.

With the support of Gigi, a consultant hired to help us with early-relocation basics like social security and bank accounts, we eventually made our way to our own rental in Melrose (a small suburb north of Boston, a shortish walk from the MBTA Orange Line); with our own car (once the money from selling our house in the UK finally arrived); with my eldest enrolled in a local school. Aiming for normality.

The process

Fairly soon after settling in to office life, the emails from Microsoft Immigration started, for the process to apply for permanent residency. We were acutely aware of the time ticking on the three year visas – and we had already burned 3 months prior to the move. Work permits; permission to leave and re-enter; Department of Labor certification. Papers, forms, papers, forms. Swearing that none of us have ever recruited child soldiers, or engaged in sex trafficking.

Tick tock.

Months at a time without hearing anything from USCIS.

Tick tock.

Work permits for all, but big delays listed on the monthly USCIS visa bulletin.

Tick tock.

We got to August 2019, and I started to really worry about the next deadline – our eldest’s passport expiring, along with the initial visas a couple of weeks later.

Tick tock.

Then my wife had a smart idea for plan B, something better than the burned out Mad Max dystopia waiting for us back in the UK: Microsoft just opened a big .NET development office in Prague, so maybe I could make a business justification for relocation to the Czech Republic?

I start teaching myself Czech.

Duolingo screenshot, Czech language, “can you see my goose”

Tick tock.

Then, a month later, out of the blue, a notice from USCIS: our Adjustment of Status interviews (in theory the final piece before being granted green cards) were scheduled, for less than a month later. Suddenly we went from too much time, to too little.

Ti-…

October

The problem with the one month of notice is we had one crucial missing piece of paperwork – for each of us, an I-693 medical exam issued by a USCIS-certified civil surgeon. I started calling around, and got a response from an immigration clinic in Lynn, with a date in mid October. They also gave us a rough indication of medical exams and extra vaccinations required for the I-693 which we were told to source via our normal doctors (where they would be billable to insurance, if not free entirely). Any costs in the immigration clinic can’t go via insurance or an HSA, because they’re officially immigration paperwork, not medical paperwork. Total cost ends up being over a grand.

More calling around. We got scheduled for various shots and tests, and went to our medical appointment with everything sorted.

Except…

Turns out the TB tests the kids had were no longer recognised by USCIS. And all four of us had vaccination record gaps. So not only unexpected jabs after we promised them it was all over – unexpected bloodletting too. And a follow-up appointment for results and final paperwork, only 2 days prior to the AOS interview.

By this point, I’m something of a wreck. The whole middle of October has been a barrage of non-stop, short-term, absolutely critical appointments.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

Wednesday, I can’t eat, I can’t sleep, and various other physiological issues. The AOS interview is the next day. I’m as prepared as I can be, but still more terrified than I ever have been.

Any missing paperwork, any errors, and we can kiss our new lives in the US goodbye.

I was never this worried about going through a comparable process when applying for the visa, because the worst case there was the status quo. Here the worst case is having to restart our green card process, with too little time to reapply before the visas expire. Having wasted two years of my family’s comfort with nothing to show for it. The year it took my son to settle again at school. All of it riding on one meeting.

Thursday

Our AOS interviews are perfectly timed to coincide with lunch, so we load the kids up on snacks, and head to the USCIS office in Lawrence.

After parking up, we head inside, and wait. We have all the paperwork we could reasonably be expected to have – birth certificates, passports, even wedding photos to prove that our marriage is legit.

To keep the kids entertained in the absence of electronics (due to a no camera rule which bars tablets and phones) we have paper and crayons. I suggest “America” as a drawing prompt for my eldest, and he produces a statue of liberty and some flags, which I guess covers the topic for a 7 year old.

Finally we’re called in to see an Immigration Support Officer, the end-boss of American bureaucracy and… It’s fine. It’s fine! She just goes through our green card forms and confirms every answer; takes our medical forms and photos; checks the passports; asks us about our (Caribbean) wedding and takes a look at our photos; and gracefully accepts the eldest’s drawing for her wall.

We’re in and out of her office in under an hour. She tells us that unless she finds an issue in our background checks, we should be fine – expect an approval notice within 3 weeks, or call up if there’s still nothing in 6. Her tone is congratulatory, but with nothing tangible, and still the “unless” lingering, it’s hard to feel much of anything. We head home, numb more than anything.

Aftermath

After two fraught weeks, we’re both not entirely sure how to process things. I had expected a stress headache then normality, but instead it was more… Gradual.

During the following days, little things like the colours of the leaves leave me tearing up – and as my wife and I talk, we realise the extent to which the stress has been getting to us. And, more to the point, the extent to which being adrift without having somewhere we can confidently call home has caused us to close ourselves off.

The first day back in the office after the interview, a co-worker pulls me aside and asks if I’m okay – and I realise how much the answer has been “no”. Friday is the first day where I can even begin to figure that out.

The weekend continues with emotions all over the place, but a feeling of cautious optimism alongside.

I-485 Adjustment of Status approval notifications

On Monday, 4 calendar days after the AOS interview, we receive our notifications, confirming that we can stay. I’m still not sure I’m processing it right. We can start making real, long term plans now. Buying a house, the works.

I had it easy, and don’t deserve any sympathy

I’m a white guy, who had thousands of dollars’ worth of support from a global megacorp and their army of lawyers. The immigration process was fraught enough for me that I couldn’t sleep or eat – and I went through the process in one of the easiest routes available.

Youtube video from HBO show Last Week Tonight, covering legal migration into the USA

I am acutely aware of how much more terrifying and exhausting the process might be, for anyone without my resources and support.

Never, for a second, think that migration to the US – legal or otherwise – is easy.

The subheading where I answer the inevitable question from the peanut gallery

My eldest started school in the UK in September 2015. Previously he’d been at nursery, and we’d picked him up around 6-6:30pm every work day. Once he started at school, instead he needed picking up before 3pm. But my entire team at Xamarin was on Boston time, and did not have the world’s earliest risers – meaning I couldn’t have any meaningful conversations with co-workers until I had a child underfoot and the TV blaring. It made remote working suck, when it had been fine just a few months earlier. Don’t underestimate the impact of time zones on remote workers with families. I had begun to consider, at this point, my future at Microsoft, purely for logistical reasons.

And then, in June 2016, the UK suffered from a collective case of brain worms, and voted for self immolation.

I relocated my family to the US, because I could make a business case for my employer to fund it. It was the fastest, cheapest way to move my family away from the uncertainty of life in the UK after the brain-worm-addled plan to deport 13% of NHS workers. To cut off 40% of the national food supply. To make basic medications like Metformin and Estradiol rarities, rationed by pharmacists.

I relocated my family to the US, because despite all the country’s problems, despite the last three years of bureaucracy, it still gave them a better chance at a safe, stable life than staying in the UK.

And even if time proves me wrong about Brexit, at least now we can make our new lives, permanently, regardless.

on October 29, 2019 09:41 AM

October 26, 2019

Paco Molinero, Fernando Lanero, Javier Teruelo and Marcos Costales interview Joan CiberSheep about Ubucon Europe and review the new Ubuntu 19.10 release.

Ubuntu y otras hierbas
on October 26, 2019 01:41 PM

October 24, 2019

Ubuntu 19.10 Released

Josh Powers

The next development release of Ubuntu, the Eoan Ermine, was released last week! This was the last development release before our upcoming LTS, codenamed Focal Fossa. As a result, lots of bug fixes, new features, and experience improvements have made their way into the release. Some highlights include:

  • New GNOME version, 3.34
  • Further refinement of the Yaru desktop theme
  • Latest upstream stable kernel, 5.3
  • OpenSSL 1.1.1 support
  • Experimental ZFS on root support in the desktop installer
  • OpenStack Train support

See the release notes for more details.
on October 24, 2019 12:00 AM

October 23, 2019

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Train on Ubuntu 18.04 LTS via the Ubuntu Cloud Archive. Details of the Train release can be found at:  https://www.openstack.org/software/train

To get access to the Ubuntu Train packages:

Ubuntu 18.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Train on Ubuntu 18.04 installations by running the following commands:

    sudo add-apt-repository cloud-archive:train
    sudo apt update

The Ubuntu Cloud Archive for Train includes updates for:

aodh, barbican, ceilometer, ceph (14.2.2), cinder, designate, designate-dashboard, dpdk (18.11.2), glance, gnocchi, heat, heat-dashboard, horizon, ironic, keystone, libvirt (5.4.0), magnum, manila, manila-ui, mistral, murano, murano-dashboard, networking-arista, networking-bagpipe, networking-bgpvpn, networking-hyperv, networking-l2gw, networking-mlnx, networking-odl, networking-ovn, networking-sfc, neutron, neutron-dynamic-routing, neutron-fwaas, neutron-lbaas, neutron-lbaas-dashboard, neutron-vpnaas, nova, octavia, openstack-trove, openvswitch (2.12.0), panko, placement, qemu (4.0), sahara, sahara-dashboard, senlin, swift, trove-dashboard, vmware-nsx, watcher, and zaqar.

For a full list of packages and versions, please refer to:

http://reqorts.qa.ubuntu.com/reports/ubuntu-server/cloud-archive/train_versions.html

Python support

The Train release of Ubuntu OpenStack is Python 3 only; all Python 2 packages have been dropped in Train.

Branch package builds

If you would like to try out the latest updates to branches, we deliver continuously integrated packages on each upstream commit via the following PPAs:

    sudo add-apt-repository ppa:openstack-ubuntu-testing/mitaka
    sudo add-apt-repository ppa:openstack-ubuntu-testing/ocata
    sudo add-apt-repository ppa:openstack-ubuntu-testing/queens
    sudo add-apt-repository ppa:openstack-ubuntu-testing/rocky
    sudo add-apt-repository ppa:openstack-ubuntu-testing/train

Reporting bugs

If you have any issues please report bugs using the ‘ubuntu-bug’ tool to ensure that bugs get logged in the right place in Launchpad:

    sudo ubuntu-bug nova-conductor

Thanks to everyone who has contributed to OpenStack Train, both upstream and downstream. Special thanks to the Puppet OpenStack modules team and the OpenStack Charms team for their continued early testing of the Ubuntu Cloud Archive, as well as the Ubuntu and Debian OpenStack teams for all of their contributions.

Enjoy and see you in Ussuri!

Corey
(on behalf of the Ubuntu OpenStack team)

on October 23, 2019 12:56 PM

We are pleased to announce that Plasma 5.17.1 is now available in our backports PPA for Kubuntu 19.10.

The release announcement detailing the new features and improvements in Plasma 5.17 can be found here

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade

IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.17, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.16 as included in the original 19.10 (Eoan) release.

Issues with Plasma itself can be reported on the KDE bugtracker [1]. In the case of packaging or other issues, please provide feedback on our mailing list [2], IRC [3], and/or file a bug against our PPA packages [4].

1. KDE bugtracker: https://bugs.kde.org
2. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
3. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
4. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on October 23, 2019 05:48 AM

October 20, 2019

With the release of Ubuntu 19.10 “Eoan Ermine”, Ubuntu Server is available for the Raspberry Pi 4 for the first time. With the Pi 4’s major boost in performance, once we can install Ubuntu Server, we can install and run any of the flavors. Let’s get to it!

Getting the Server Image

First off, head to the Ubuntu 19.10 release images. We want one of the Preinstalled server images, since booting on the Raspberry Pi is still a tricky fiasco. You’ll see two installation options:

Two options, but only one right for the desktop.

We’ll want to choose the first option, Hard-Float. While the Pi 4 does in fact support the 64-bit ARM image, unfortunately USB devices fail to initialize with this option. If you have no need for mouse and keyboard, feel free to use the 64-bit option. This should be resolved in time for the 20.04 release.

Installing the Server Image

Once your server image has been downloaded, download and run Etcher. If you’ve never used Etcher before, it’s a simple, cross-platform solution for installing disk images to USB devices. It’s reliable and easy to use, and will help you avoid overwriting your hard drives.

Select your image and destination, then flash!

Select the downloaded image (.xz is fine, no need to extract), select the correct storage location, and then click Flash! After a few minutes, the image will be installed and validated, and you’ll be ready to go. Re-insert your MicroSD card into your Raspberry Pi, connect an ethernet cable, power it on, and proceed to the next step.

Note: Once flashing finished, I received an error that the checksums did not match, but everything seems to work correctly afterward.

Logging In

This part tripped me up for a while. Once installed, the default username and password are both “ubuntu”. However, the first login to your Raspberry Pi has to be via SSH! First step, find the IP address of your Raspberry Pi device.

Be mindful that if you’re in a corporate or other shared environment, scanning for devices might be frowned upon. With that warning out of the way, let’s use nmap to look for our device. I’m not going to cover usage here, but a quick DuckDuckGo search can point you in the right direction. The server installation image defaults the hostname to “ubuntu”, so look for that.

$ nmap -sP 192.168.1.0/24
...
Nmap scan report for ubuntu.attlocal.net (192.168.1.230)
Host is up (0.00036s latency).
...

Once you know where the device is, SSH in and reset your password. Enter “yes” to continue connecting if prompted for the fingerprint.

$ ssh ubuntu@ubuntu.attlocal.net
ubuntu@ubuntu.attlocal.net's password: 
You are required to change your password immediately (administrator enforced)

WARNING: Your password has expired.
You must change your password now and login again!
Changing password for ubuntu.
Current password: 
New password: 
Retype new password: 
passwd: password updated successfully
Connection to ubuntu.attlocal.net closed.

Now, SSH in once more with your new password, and let’s install Xubuntu!

Installing Xubuntu

We’re almost done! Now it’s time to decide: do you want Xubuntu Core, the minimal Xubuntu base that you can easily customize to your needs, or Xubuntu Desktop, our standard installation option? I’ll be installing Core for this guide, but if you want to install Desktop, just replace “xubuntu-core^” with “xubuntu-desktop^”.

Also worth noting: while setting up Xubuntu on the Raspberry Pi, I came across an issue that causes our default login screen to fail. This has been fixed upstream, but to work around the issue for now, we will be using Slick Greeter for our login screen. Now, let’s get back to the installation. Please note that the caret, ^, is not a typo!

sudo apt update
sudo apt install xubuntu-core^ slick-greeter

This will take a while. Once everything’s installed, the final step is to set Slick Greeter as the login screen. Create /etc/lightdm/lightdm.conf with the following contents using your favorite command-line editor.

[SeatDefaults]
greeter-session=slick-greeter
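
If you’d rather not open an editor over SSH, one way to write the file in a single command is with sudo tee and a heredoc:

sudo tee /etc/lightdm/lightdm.conf > /dev/null << 'EOF'
[SeatDefaults]
greeter-session=slick-greeter
EOF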

And finally, reboot!

sudo reboot

Installation Complete

Your Raspberry Pi 4 will now boot into a graphical environment, and you’ll be greeted by Slick Greeter. Log in with the password you created earlier, and the Xubuntu desktop will load, same as you’d find outside of the Raspberry Pi.

Up and running with Xubuntu 19.10!

What’s Next?

That’s up to you, but the first thing I recommend is creating a new user. The default Ubuntu user is an administrator, and has a bit more power than you’d normally have on a desktop installation.

Beyond that, the Pi’s the limit! Have fun, and enjoy running the mouse-based distribution on your mouse-sized computer.

Thanks!

I purchased my Raspberry Pi 4 with funds from my Patreon, so my patrons helped make this project possible. I’ll continue experimenting with the Pi 4, so look forward to even more awesome projects. Thanks everybody!

on October 20, 2019 01:11 PM

October 18, 2019

Thanks to our Sponsors

Kubuntu General News

The Kubuntu community is delighted and proud to ship Kubuntu 19.10. As a community of passionate contributors we need systems and services that enable us to work together, and host our development tools.

Our sponsors page provides details and links to the organisations that have supported us through our development process.

Bytemark is a UK based hosting provider that generously provides racked and hosted bare metal hardware upon which our build chain KCI (Kubuntu Continuous Integration) operates.

Kubuntu Continuous Integration Server, provided and sponsored by Bytemark


Linode is our US based hosting provider that generously provides scalable hosting upon which our build chain KCI operates.

Build and Packaging Servers provided and sponsored by Linode


Big Blue Button provides an online virtual classroom primarily targeted at online learning environments, but it has proved itself a valuable tool for remote collaborative working and community events.

Video conference and training suite, as used by Kubuntu Podcast, provided by Big Blue Button


We are deeply grateful for the support these organisations provide, and we welcome others to come join our community and pitch in.

on October 18, 2019 07:05 PM

October 17, 2019

Thanks to all the hard work from our contributors, Lubuntu 19.10 has been released! With the codename Eoan Ermine, Lubuntu 19.10 is the 17th release of Lubuntu and the third release of Lubuntu with LXQt as the default desktop environment. Support lifespan: Lubuntu 19.10 will be supported for 9 months, until July 2020. If you […]
on October 17, 2019 06:38 PM

The Xubuntu team is happy to announce the immediate release of Xubuntu 19.10!

Xubuntu 19.10, codenamed Eoan Ermine, is a regular release and will be supported for 9 months, until July 2020. If you need a stable environment with longer support time, we recommend that you use Xubuntu 18.04 LTS instead.

The final release images are available as torrents and direct downloads from xubuntu.org/getxubuntu/

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

Xubuntu Core, our minimal installation option, is available to download from unit193.net/xubuntu/core/. Find out more about Xubuntu Core here.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Xubuntu 19.10 features Xfce 4.14, released in August 2019 after nearly 4.5 years of development. Backed by GTK 3 and other modern technologies, Xfce 4.14 includes many new features, improved HiDPI support, and the same great performance for which Xfce is known.
  • Xfce Screensaver replaces Light Locker for screen locking. The new screensaver is built on years of development from the GNOME and MATE Screensaver projects and is tightly integrated with Xfce. It also features significantly improved support for laptops.
  • We’ve added two new keyboard shortcuts to make transitioning from other desktop environments and operating systems easier.
    • Super + D will show your desktop, while
    • Super + L will now lock your screen.
  • ZFS on root is included as an experimental feature. Available in Ubuntu and the other flavors for the first time in 19.10, this feature enables full-disk installation of ZFS.
    • Remember, ZFS on root is experimental, so don’t run it on your production machines!

Known Issues

  • If more than one instance of the Xfce Pulseaudio Plugin is added to the panel, volume notifications will be duplicated.
  • Tooltips can become unresponsive in the Xfce Task Manager. Usually a bit of movement will cause the tooltip to fade away.
    • This bug will be fixed in Xubuntu 20.04!

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry as well as more generic issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 17, 2019 06:23 PM
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 19.10, code-named “Eoan Ermine”. This marks Ubuntu Studio’s 26th release. This release is a regular release and as such, it is supported for 9 months. For those requiring longer-term support, we encourage you to install Ubuntu Studio 18.04 “Bionic Beaver” and add […]
on October 17, 2019 05:26 PM
Stress testing CPU temperatures is not exactly straightforward. CPU designs vary from CPU to CPU, and each has its own strengths and weaknesses in cache design, integer math, floating point math, bit-wise logical operations and branch prediction, to name but a few. I've been asked several times about the "best" CPU stressor method in stress-ng to use to make a CPU run hot.

As an experiment I ran all the CPU stressor methods in stress-ng for 60 seconds across a range of devices, from small ARM based Raspberry Pi 2 and 3 to much larger Xeon desktop servers just to see how hot CPUs get. The thermal measurements were based on the most relevant thermal zones, for example, on x86 this is the CPU package thermal zone.  In between each stress run 60 seconds of idle time was added to allow the CPU to cool.

Below are the results:

As one can see, the results are quite mixed, and it is hard to recommend any specific CPU stressor method as the "best" across a range of CPUs. The mixed 64 bit integer and floating point CPU stress methods do seem to be generally rather good at making most CPUs run hot.
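
As a concrete starting point, something like the following starts one worker per CPU using the mixed int64float method and reports thermal zone temperatures at the end; this is a sketch assuming a stress-ng build with the int64float CPU method and thermal zone (--tz) support:

# one CPU worker per online CPU, mixed 64 bit integer + float method,
# report thermal zone temperatures and brief metrics when done
stress-ng --cpu 0 --cpu-method int64float --timeout 60s --tz --metrics-brief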

With this in mind, I think we can conclude there is no such thing as a perfect way to make a CPU run hot, as it is very architecture dependent. Fortunately the stress-ng CPU stressor has a large suite of methods to exercise the CPU in different ways, so there should be a good stressor somewhere in that collection to max out your CPU. Knowing which one is the tricky part(!)




on October 17, 2019 10:24 AM

Brief history of Calamares in Debian

Before Debian 9 was released, I was preparing a release for a derivative of Debian that was a bit different than other Debian systems I’ve prepared for redistribution before. This was targeted at end-users, some of whom might have used Ubuntu before, but otherwise had no Debian related experience. I needed to find a way to make Debian really easy for them to install. Several options were explored, and I found that Calamares did a great job of making it easy for typical users to get up and running fast.

After Debian 9 was released, I learned that other Debian derivatives were also using Calamares or planning to do so. It started to make sense to package Calamares in Debian so that we don’t do duplicate work in all these projects. On its own, Calamares isn’t very useful: if you ran the pure upstream version in Debian, it would crash before it starts to install anything. This is because Calamares needs some configuration and helpers depending on the distribution. Most notably in Debian’s case, this means setting the location of the squashfs image we want to copy over, and some scripts to install either grub-pc or grub-efi depending on how we boot. Since I already did most of the work to figure all of this out, I created a package called calamares-settings-debian, which contains enough configuration to install Debian using Calamares, so that derivatives can easily copy and adapt it to their own calamares-settings-* packages for use in their systems.

In Debian 9, the live images were released without an installer available in the live session. Unfortunately the debian-installer live session that was used in previous releases had become hard to maintain and had a growing number of bugs that didn’t make it suitable for release, so Steve from the live team suggested that we add Calamares to the Debian 10 test builds and give it a shot, which surprised me because I never thought that Calamares would actually ship on official Debian media. We tried it out, and it worked well so Debian 10 live media was released with it. It turned out great, every review of Debian 10 I’ve seen so far had very good things to say about it, and the very few problems people have found have already been fixed upstream (I plan to backport those fixes to buster soon).

Plans for Debian 11 (bullseye)

New slideshow

If I had to choose my biggest regret regarding the Debian 10 release, this slideshow would probably be it. It’s just the one slide depicted above. The time needed to create a nice slideshow was one constraint, but another was that I also didn’t have enough time to figure out how its translations work and do a proper call for translations in time for the hard freeze. I consider the slideshow a golden opportunity to explain to new users what the Debian project is about and what this new Debian system they’re installing is capable of, so this is basically a huge missed opportunity that I don’t want to repeat again.

I intend to pull in some help from the web team, publicity team and anyone else who might be interested to cover slides along the topics of (just a quick braindump, final slides will likely have significantly different content):

  • The Debian project, and what it’s about
    • Who develops Debian and why
    • Cover the social contract, and perhaps touch on the DFSG
  • Who uses Debian? Mention notable users and use cases
    • Big and small commercial companies
    • Educational institutions
    • …even NASA?
  • What Debian can do
    • Explain vast package library
    • Provide some tips and tricks on what to do next once the system is installed
  • Where to get help
    • Where to get documentation
    • Where to get additional help

It shouldn’t get too heavy and shouldn’t run longer than a maximum of three minutes or so, because in some cases that might be all we have during this stage of the installation.

Try out RAID support

Calamares now has RAID support. It’s still very new and as far as I know it’s not yet widely tested. It needs to be enabled as a compile-time option and depends on kpmcore 4.0.0, which Calamares uses for its partitioning backend. kpmcore 4.0.0 just entered unstable this week, so I plan to do an upload to test this soon.

RAID support is one of the biggest features missing from Calamares, and enabling it would make it a lot more useful for typical office environments where RAID 1 is typically used on workstations. Some consider RAID on desktops somewhat less important than it used to be: with fast SSDs and network provisioning over gigabit ethernet, it’s really quick to recover from a failed disk, but you still have downtime until the person responsible pops over to replace that disk. At least with RAID 1 you can avoid or drastically decrease downtime, which makes the cost of that extra disk completely worthwhile.

Add Debian-specific options

The intent is to keep the installer simple, so adding new options is a tricky business, but it would be nice to cover some Debian-specific options in the installer just like debian-installer does. At this point I’m considering adding:

  • Choosing a mirror. Currently it just defaults to writing a sources.list file that uses deb.debian.org, which is usually just fine.
  • Add an option to enable popularity contest (popcon). As a Debian developer I find popcon stats quite useful. Even though just a small percentage of people enable it, it provides enough data to help us understand how widely packages are used, especially in relation to other packages. I’m slightly concerned that desktop users who now use Calamares instead of d-i and forget to enable popcon after installation will skew popcon results for desktop packages compared to previous releases.

Skip files that we’re deleting anyway

At DebConf19, I gave a lightning talk titled “Is it possible to install Debian in a lightning talk slot?”. The answer was sadly “No.”. The idea is that you should be able to install a full Debian desktop system within 5 minutes. In my preparations for the talk, I got it down to just under 6 minutes. It ended up taking just under 7 minutes during my lightning talk, probably because I forgot to plug my laptop into a power source and somehow got throttled to save power. Under 7 minutes is fast, but the exercise got me looking at what wasted the most time during installation.

Of the avoidable things that happen during installation, the one that takes up the most time by a large margin is removing packages that we don’t want on the installed system. During installation, the whole live system is copied from the installation media over to the hard disk, and then the live packages (including Calamares) are removed from that installation. APT (or, more specifically, dpkg) is notorious for playing it safe with filesystem operations, so removing all these live packages takes quite some time (more than even copying them there in the first place).

The contents of the squashfs image are copied over to the filesystem using rsync, so it is possible to provide an exclude list of files that we don’t want. I filed a bug in Calamares to add support for such an exclude list, which was added in version 3.2.15, released this week. Now we also need to add support in the live image build scripts to generate these file lists based on the packages we want to remove, but that’s part of a different long blog post altogether.
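
For illustration, the copy step then amounts to roughly the following; the paths and the exclude file name here are hypothetical, not the ones Calamares uses internally:

# copy the live root filesystem, skipping files that would be deleted afterwards anyway
rsync -aHAX --exclude-from=/tmp/live-excludes.txt /run/live/rootfs/ /target/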

This feature also opens the door for a minimal mode option, where you could choose to skip non-essential packages such as LibreOffice and GIMP. In reality these packages will still be removed using APT in the destination filesystem, but it will be significantly faster since APT won’t have to remove any real files. The Ubuntu installer (Ubiquity) has done something similar for a few releases now.

Add a framebuffer session

As is the case with most Qt5 applications, Calamares can run directly on the Linux framebuffer without the need for Xorg or Wayland. To try it out, all you need to do is run “sudo calamares -platform linuxfb” on a live console and you’ll get Calamares right there in your framebuffer. It’s not tested upstream so it looks a bit rough. As far as I know I’m the only person so far to have installed a system using Calamares on the framebuffer.

The plan is to create a systemd unit to launch this at startup if ‘calamares’ is passed as a boot parameter. This way, derivatives who want this and use calamares-settings-debian (or their own fork) can just create a boot menu entry to activate the framebuffer installation without any additional work. I don’t think it should be too hard to make it look decent in this mode either.
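
A rough sketch of what such a unit could look like; ConditionKernelCommandLine is the standard systemd mechanism for gating a unit on a boot parameter, though the real unit may well end up different:

[Unit]
Description=Calamares installer on the Linux framebuffer
ConditionKernelCommandLine=calamares

[Service]
ExecStart=/usr/bin/calamares -platform linuxfb

[Install]
WantedBy=multi-user.target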

Calamares on the framebuffer might also be useful for people who ship headless appliances based on Debian but who still need a simple installer.

Document calamares-settings-debian for derivatives

As the person who put together most of calamares-settings-debian, I consider it quite easy to understand and adapt calamares-settings-debian for other distributions. But even so, it takes a lot more time for someone who wants to adapt it for a derivative to delve into it than it would to just read some quick documentation on it first.

I plan to document calamares-settings-debian on the Debian wiki that covers everything that it does and how to adapt it for derivatives.

Improve Scripts

When writing helper scripts for Calamares in Debian, I focused on getting everything working reliably and in time for the hard freeze. I cringed when looking at some of these again after the buster release; they’re not entirely horrible, but they could use better conventions and be easier to follow, so I want to get this right for bullseye. Some scripts might even be eliminated if we can build better images. For example, we install either grub-efi or grub-pc from the package pool on the installation media based on the boot method used, because in the past you couldn’t have both installed at the same time, so they were just shipped as additional available packages. With changes in the GRUB packaging (for a few releases now already) it’s possible to have grub-efi and grub-pc-bin installed at the same time, so if we install both at build time it may be possible to simplify those pieces (and also save another few precious seconds of install time).

Feature Requests

I’m sure some people reading this will have more ideas, but I’m not in a position to accept feature requests right now, Calamares is one of a whole bunch of areas in Debian I’m working on in this release. If you have ideas or feature requests, rather consider filing them in Calamares’ upstream bug tracker on GitHub or get involved in the efforts. Calamares has an IRC channel on freenode (#calamares), and for Debian specific stuff you can join the Debian live channel on oftc (#debian-live).

on October 17, 2019 09:01 AM

October 15, 2019

And a new Ubucon Europe begins! This time in Sintra, Portugal.
Welcome!


I arrived the day before, just in time for a welcome dinner held at a quirky business incubator: Chalet 12. There, around 25 of us shared a lovely evening over a dinner cooked by the organisers themselves.
Marco | Costales | Tiago | Olive
In fact, the Ubuntu-PT team had been running activities and visits all week for the community members who arrived early, a really nice touch.

Day 1

I was among the first to arrive at the Centro Cultural Olga Cadaval, a building split into two main wings with large open spaces.
Besides the talks, there were stands from UBports and Libretrend, and even free coffee throughout the day. At the UBports stand I got to try the Pinebook running Ubuntu Touch.

Pinebook


After picking up my badge and a welcome pack (t-shirt, pins, stickers...), Tiago Carrondo opened this new edition in the auditorium.

Opening talk

Right afterwards, Tiago himself announced Ubuntu's 15th birthday, something that hadn't occurred to me and which was great, reviewing the most important moments of Ubuntu's short but intense life.
15th birthday talk


I did my bit with two talks. In the first one, in the morning, surrounded by art (paintings by Nadir Afonso), I analysed the dangers to our online privacy and how we can improve it.
Privacy on the Net


As soon as I finished my talk, I went to catch half of Rudy's talk "Events in your Local Community Team", reviewing the achievements of Ubuntu Paris with their Ubuntu Party and WebCafe.

Events in your Local Community Team

At 13:15 a few of us went for lunch at a restaurant near the station.

Lunch


I was giving a two-hour workshop at 3 (or so I thought) on how to develop a native application for Ubuntu Touch. Mateo Salta and I left a bit early to arrive on time, but Tiago was already looking for me: the session actually started at 14:30 and people had been waiting since then. How embarrassing; from here, my apologies to the organisers and to the attendees of my session for that delay. In the workshop I showed how to build a flashlight app in QML for Ubuntu Touch, which amazed the attendees with its simplicity and how few lines of code it took.

Creating an Ubuntu Phone app


We ended the day at a brewery to warm up,

Saloon 2


and then had dinner all together at the restaurant O Tunel, where we enjoyed exquisite traditional dishes. These moments are the best (in my opinion), because that's when community is really built and lived.

Dinner

Day 2

A long day ahead, with 4 simultaneous talks.
I opted for Jesús Escolar and his talk Applied Security for Containers, a talk that makes you realise the dangers surrounding every platform and service.
Applied Security for Containers


Then I met Vadim, a professional web developer who showed us his workflow and little tricks to save time while developing.
Vadim's scripts


After Vadim, Marius Quabeck showed the steps needed to create a podcast. I noted down some software he mentioned for editing the Ubuntu y otras hierbas podcast.

Quabeck showing how to create and edit a podcast


Lunch wasn't organised and we all stuck together, so it was hard to find a restaurant for so many people.
In the afternoon, Joan CiberSheep opened the talks by showing us the options for creating an Ubuntu Touch application. I was a bit stuck in time with the Canonical commands and workflow; UBports has hugely evolved phone development on Ubuntu.
Joan


Finally, Simos showed us the virtues of LXC in his talk Linux Containers LXC/LXD.
Linux Containers


Worth highlighting here is the gifbox set up by Rudy and Olive, a camera that stitches a sequence of photos into a gif, with very funny and unexpected results for everyone who gets photographed.




At dusk, the plan was to gather at a brewery on the outskirts. After some tapas, the owner showed us the beer-making process in his small cellar.

Explaining the brewing process to us


The main course was grilled cod along with a beer tasting. This event was partially funded by an anonymous patron, so a thousand thanks from this humble post.
Brewery


As a final flourish, Jaime prepared a surprise that thrilled me: a small band of 2 bagpipes and a drum entertained us and got us dancing at a party that lasted until midnight.

Party! :)


Day 3

Today holds a party for Ubuntu's 15th anniversary; we are all eager to see what it will be like :P
Today we could call it 'UBportsCON', as there will be plenty of talks on the state of Ubuntu Touch.
The very first one is by Jan Sprinz, reviewing the past, showing us the present and analysing where this interesting project is heading, a project that gives us a free alternative to the all-powerful Android and iOS.
Jan Sprinz telling the history of Ubuntu Touch


Jan himself also showed us one of UBports' strongholds: the installer, which automates installing Ubuntu Touch on our phone and turns it into child's play, as long as it is one of the supported devices it has been ported to.
After Jan's talk, Rudy called me over to the Ubuntu Europe Federation Board Open Meeting, a federation created precisely to make it easier for organisers to run Ubuntu events like this one.
Closing the morning, Joan CiberSheep explained the usability and design guidelines of Ubuntu Touch.
Ubuntu Touch usability and design


This time we had lunch in groups at different restaurants and came back on time for the group photo.
Afterwards the great Martin Wimpress told us the story of snap packaging and Canonical's reasons for creating it.
Martin Wimpress


A very interesting talk was Dario Cavedon's, who connected his love of running with privacy in an unusual way.
Dario Cavedon


As my last talk I chose Rute Solipa's, who explained the process and the difficulties of migrating the Portuguese municipality of Seixal to free software.

Seixal migration


In the evening we went to the same bar, having dinner and celebrating Ubuntu's 15th anniversary to the sound of bagpipes :))

Birthday party

Day 4

Last day of the Ubucon :'( I want more, hehehe
I chose Michal Kohutek's talk, in which he showed us how to improve educational materials by using sensors to analyse the reader's eye movements.
Michal and Jesús Escolar with eye tracking


Marco Trevisan showed us the Ubuntu desktop's transition to GNOME and what the upcoming LTS release has in store for us.
The upcoming Ubuntu 20.04


And to wrap up, Tiago Carrondo, who had opened the first day, closed the event by explaining what it takes to put on a Ubucon, the difficulties of organising this edition, and attendance statistics. It was moving when all the volunteers went up on stage.

The end


For lunch we went in groups to different restaurants; we finished up at a café with coffee and cake.

Lunch


In the afternoon I had planned to stroll around and get to know Sintra a bit better, but with Joan one conversation leads to the next, so the afternoon went by at the same bar where we had had dinner on the previous days. At dinner time more people joined us, and it ended up being one in the morning while we tried to fix the world :)

The last survivors

Summary


Ubucon Europe consolidates itself year after year. This year's organisation has been very good, with many talks and extra activities.
Sintra was a good choice: a welcoming city with good infrastructure for running an event of this kind.
And it was one more proof that the best thing about Ubuntu is its community.

See you next year!


It seems rumours are going around that next year it will be in Italy... Who knows, hopefully! :)
What remains in memory is having enjoyed a unique event, having learned a little in each of the talks and, above all, meeting again the friends made at previous editions, who are what really makes Ubucon Europe so special.
on October 15, 2019 05:00 PM