October 22, 2018

Previously: v4.18.

Linux kernel v4.19 was released today. Here are some security-related things I found interesting:

L1 Terminal Fault (L1TF)

While it seems like ages ago, the fixes for L1TF actually landed at the start of the v4.19 merge window. As with the other speculation flaw fixes, lots of people were involved, and the scope was pretty wide: bare metal machines, virtualized machines, etc. LWN has a great write-up on the L1TF flaw and the kernel’s documentation on L1TF defenses is equally detailed. I like how clean the solution is for bare-metal machines: when a page table entry should be marked invalid, instead of only changing the “Present” flag, it also inverts the address portion so even a speculative lookup ignoring the “Present” flag will land in an unmapped area.
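
To make the address inversion concrete, here is a minimal sketch of the idea in C. This is illustrative only, not the kernel’s actual code: the constant names and the exact mask are assumptions for the sketch.

    #include <stdint.h>

    #define PTE_PRESENT   0x1ULL
    #define PTE_PFN_MASK  0x000ffffffffff000ULL  /* physical-frame address bits (assumed layout) */

    /* Mark a PTE invalid: clear Present *and* invert the address bits, so a
     * speculative lookup that ignores Present resolves to an unmapped
     * physical address instead of pointing at cached data. */
    static uint64_t pte_mark_invalid(uint64_t pte)
    {
        pte &= ~PTE_PRESENT;
        pte ^= PTE_PFN_MASK;
        return pte;
    }

    /* Restoring the entry undoes the inversion before setting Present. */
    static uint64_t pte_mark_valid(uint64_t pte)
    {
        pte ^= PTE_PFN_MASK;
        pte |= PTE_PRESENT;
        return pte;
    }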

protected regular and fifo files

Salvatore Mesoraca implemented an O_CREAT restriction in /tmp directories for FIFOs and regular files. This is similar to the existing symlink restrictions, which take effect in sticky world-writable directories (e.g. /tmp) when the opening user does not match the owner of the existing file (or directory). When a program opens a FIFO or regular file with O_CREAT and this kind of user mismatch, it is treated like it was also opened with O_EXCL: it gets rejected because there is already a file there, and the kernel wants to protect the program from writing possibly sensitive contents to a file owned by a different user. This has become a more common attack vector now that symlink and hardlink races have been eliminated.
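
To see the behavior from userspace, here is a hedged demonstration in C. It assumes the protections are enabled (the fs.protected_regular and fs.protected_fifos sysctls) and that /tmp/victim already exists and is owned by a different user; the path and scenario are invented for the example.

    #include <stdio.h>
    #include <string.h>
    #include <errno.h>
    #include <fcntl.h>

    int main(void)
    {
        /* /tmp/victim: hypothetical pre-existing file owned by another
         * user, sitting in the sticky world-writable /tmp directory. */
        int fd = open("/tmp/victim", O_CREAT | O_WRONLY, 0600);

        if (fd < 0)
            printf("open rejected: %s\n", strerror(errno)); /* expected with protection enabled */
        else
            printf("unexpectedly opened a file owned by another user\n");
        return 0;
    }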

syscall register clearing, arm64

One of the ways attackers can influence potential speculative execution flaws in the kernel is to leak information into the kernel via “unused” register contents. Most syscalls take only a few arguments, so all the other calling-convention-defined registers can be cleared instead of just left with whatever contents they had in userspace. As it turns out, clearing registers is very fast. Similar to what was done on x86, Mark Rutland implemented a full register-clearing syscall wrapper on arm64.
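
The following C sketch illustrates the wrapper idea; it is not the actual arm64 implementation (the kernel generates its wrappers from the SYSCALL_DEFINE macros, and the struct here is simplified). The point is that the syscall body only ever receives the arguments its prototype declares, which has the same effect as clearing the unused registers.

    /* Simplified saved register file at kernel entry (illustrative). */
    struct pt_regs {
        unsigned long regs[31];
    };

    /* The real syscall body takes exactly the declared arguments. */
    static long do_sys_dup(unsigned int oldfd)
    {
        /* ... duplicate the descriptor ... */
        return (long)oldfd; /* placeholder */
    }

    /* Generated wrapper: forwards only regs[0]. Whatever userspace left
     * in the other argument registers never reaches the syscall body. */
    long __arm64_sys_dup(const struct pt_regs *regs)
    {
        return do_sys_dup((unsigned int)regs->regs[0]);
    }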

Variable Length Array removals, part 3

As mentioned in part 1 and part 2, VLAs continue to be removed from the kernel. While CONFIG_THREAD_INFO_IN_TASK and CONFIG_VMAP_STACK cover most issues with stack exhaustion attacks, not all architectures have those features, so getting rid of VLAs makes sure we keep a few classes of flaws out of all kernel architectures and configurations. It’s been a long road, and it’s shaping up to be a 4-part saga with the remaining VLA removals landing in the next kernel. For v4.19, several folks continued to help grind away at the problem: Arnd Bergmann, Kyle Spiers, Laura Abbott, Martin Schwidefsky, Salvatore Mesoraca, and myself.
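
A typical removal has the following shape (an invented example, not a specific patch from this cycle): the runtime-sized stack array becomes a fixed worst-case buffer plus an explicit bound check.

    #include <stddef.h>
    #include <string.h>

    #define MAX_DIGEST_SIZE 64  /* assumed worst case for this sketch */

    /* Before: stack usage is controlled by a runtime value. */
    int digest_old(unsigned char *out, size_t digest_len)
    {
        unsigned char buf[digest_len];   /* VLA */
        memset(buf, 0, digest_len);
        memcpy(out, buf, digest_len);
        return 0;
    }

    /* After: fixed-size allocation plus an explicit bound check. */
    int digest_new(unsigned char *out, size_t digest_len)
    {
        unsigned char buf[MAX_DIGEST_SIZE];

        if (digest_len > sizeof(buf))
            return -1;                   /* would be -EINVAL in kernel code */
        memset(buf, 0, digest_len);
        memcpy(out, buf, digest_len);
        return 0;
    }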

shift overflow helper

Jason Gunthorpe noticed that while the kernel recently gained add/sub/mul/div helpers to check for arithmetic overflow, we didn’t have anything for shift-left. He added check_shl_overflow() to round out the toolbox and Leon Romanovsky immediately put it to use to solve an overflow in RDMA.
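
Usage follows the pattern of the other helpers in include/linux/overflow.h: the macro returns true if the shift would overflow, and otherwise stores the shifted value through the destination pointer. Here is a kernel-style sketch; the function and variable names are invented for illustration.

    #include <linux/overflow.h>
    #include <linux/errno.h>

    static int set_queue_size(u32 requested_shift)
    {
        u32 queue_size;

        /* true means 1U << requested_shift does not fit in a u32 */
        if (check_shl_overflow(1U, requested_shift, &queue_size))
            return -EINVAL;

        /* queue_size now holds 1U << requested_shift, overflow-free */
        return 0;
    }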

That’s it for now; thanks for reading. The merge window is open for v4.20! Wish us luck. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

on October 22, 2018 11:17 PM

Leaders and scholars of online communities tend to think of community growth as the aggregate effect of inexperienced individuals arriving one-by-one. However, there is increasing evidence that growth in many online communities today involves newcomers arriving in groups with previous experience together in other communities. This difference has deep implications for how we think about the process of integrating newcomers. Instead of focusing only on individual socialization into the group culture, we must also understand how to manage mergers of existing groups with distinct cultures. Unfortunately, online community mergers have, to our knowledge, never been studied systematically.

To better understand mergers, my student Charlie Kiene spent six months in 2017 conducting ethnographic participant observation in two World of Warcraft raid guilds planning and undergoing mergers. The results—visible in the attendance plot below—show that the top merger led to a thriving and sustainable community while the bottom merger led to failure and the eventual dissolution of the group. Why did one merger succeed while the other failed? What can managers of other communities learn from these examples?

In a new paper that will be published in the Proceedings of the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW) and that Charlie will present in New Jersey next month, I teamed up with Charlie and Aaron Shaw to try to answer these questions.

Raid team attendance before and after merging. Guilds were given pseudonyms to protect the identity of the research subjects.

In our research setting, World of Warcraft (WoW), players form organized groups called “guilds” to take on the game’s toughest bosses in virtual dungeons that are called “raids.” Raids can be extremely challenging, and they require a large number of players to be successful. Below is a video demonstrating the kind of communication and coordination needed to be successful as a raid team in WoW.

Because participation in a raid guild requires time, discipline, and emotional investment, raid guilds are constantly losing members and recruiting new ones to resupply their ranks. One common strategy for doing so is arranging formal mergers. Our study involved following two such groups as they completed mergers. To collect data for our study, Charlie joined both groups, attended and recorded all activities, took copious field notes, and spent hours interviewing leaders.

Although our team did not anticipate the divergent outcomes shown in the figure above when we began, we analyzed our data with an eye toward identifying themes that might point to reasons for the success of one merger and the failure of the other. The answers that emerged from our analysis suggest that the key differences revolved around the ways that the two mergers managed organizational culture. This basic insight is supported by a body of research about organizational culture in firms but seems not to have made it onto the radar of most members or scholars of online communities. My coauthors and I think more attention to the role that organizational culture plays in online communities is essential.

We found evidence of cultural incompatibility in both mergers, and it seems likely that some degree of cultural clash is inevitable in any merger. The most important results of our analysis are three observations about specific things that the successful merger did to effectively manage organizational culture. These themes point to concrete things that other communities facing mergers—either formal or informal—can do.

A recent, random example of a guild merger recruitment post found on the WoW forums.

First, when planning mergers, groups can strategically select other groups with similar organizational culture. The successful merger in our study involved a carefully planned process of advertising for a potential merger on forums, testing out group compatibility by participating in “trial” raid activities with potential guilds, and selecting the guild that most closely matched their own group’s culture. In our setting, this process helped prevent conflict from emerging and ensured that there was enough common ground to resolve conflict when it did arise.

Second, leaders can plan intentional opportunities to socialize members of the merged or acquired group. The leaders of the successful merger held community-wide social events in the game to help new members learn their community’s norms. They spelled out these norms in a visible list of rules. They even included the new members in both the brainstorming and voting process of changing the guild’s name to reflect that they were a single, new, cohesive unit. The leaders of the failed merger lacked any explicitly stated community rules, and opportunities for socializing the members of the new group were virtually absent. Newcomers from the merged group would only learn community norms when they broke one of the unstated social codes.

The guild leaders in the successful merger documented every successful high end raid boss achievement in a community-wide “Hall of Fame” journal. A screenshot is taken with every guild member who contributed to the achievement and uploaded to a “Hall of Fame” page.

Third and finally, our study suggested that social activities can be used to cultivate solidarity between the two merged groups, leading to increased retention of new members. We found that the successful guild merger organized an additional night of activity that was socially oriented. In doing so, they provided a setting where solidarity between new and existing members could develop, motivating members to stick around and keep playing with each other—even when it gets frustrating.

Our results suggest that by preparing in advance, ensuring some degree of cultural compatibility, and providing opportunities to socialize newcomers and cultivate solidarity, the potential for conflict resulting from mergers can be mitigated. While mergers between firms often occur to make more money or consolidate resources, the experience of the failed merger in our study shows that mergers between online communities put their entire communities at stake. We hope our work can be used by leaders in online communities to successfully manage potential conflict resulting from merging or acquiring members of other groups in a wide range of settings.

Much more detail is available in our paper, which will be published open access and is currently available as a preprint.


Both this blog post and the paper it is based on are collaborative work by Charles Kiene from the University of Washington, Aaron Shaw from Northwestern University, and Benjamin Mako Hill from the University of Washington. We are also thrilled to mention that the paper received a Best Paper Honorable Mention award at CSCW 2018!

on October 22, 2018 10:55 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 550 for the week of October 14 – 20, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on October 22, 2018 09:34 PM

October 20, 2018

Thanks to all the hard work from our contributors, Lubuntu 18.10 has been released! With the codename Cosmic Cuttlefish, Lubuntu 18.10 is the 15th release of Lubuntu and the first release of Lubuntu with LXQt as the default desktop environment, with support until July of 2019. Translated into: español What is Lubuntu? Lubuntu is an […]
on October 20, 2018 03:06 AM

We reviewed the valuable feedback from our listeners and returned to the Pinebook, Olimex, and Pihole. Then we passed along calls for community participation, but the main course was the interview with Professor Manuela Aparício of the Master's in Open Source Software at ISCTE/IUL. At the end, beyond the agenda, we also took a short tour of Open Source Lisboa 2018.

Attribution and licenses

The cover image is from publicdomainpictures.net and is licensed as CC0.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

on October 20, 2018 12:36 AM

October 19, 2018

Debian GSoC 2018 report

Daniel Pocock

One of my major contributions to Debian in 2018 has been participation as a mentor and admin for Debian in Google Summer of Code (GSoC).

Here are a few observations about what happened this year, from my personal perspective in those roles.

Making a full report of everything that happens in GSoC is close to impossible. Here I consider issues that span multiple projects and the mentoring team. For details on individual projects completed by the students, please see their final reports posted in August on the mailing list.

Thanking our outgoing administrators

Nicolas Dandrimont and Sylvestre Ledru retired from the admin role after GSoC 2016, and Tom Marble has retired from the Outreachy administration role. We should be enormously grateful for the effort they have put in, as these are very demanding roles.

When the last remaining member of the admin team, Molly, asked for people to step in for 2018, knowing the huge effort involved, I offered to help out on a very temporary basis. We drafted a new delegation but didn't seek to have it ratified until the team evolves. We started 2018 with Molly, Jaminy, Alex and myself. The role needs at least one new volunteer with strong mentoring experience for 2019.

Project ideas

Google encourages organizations to put project ideas up for discussion and also encourages students to spontaneously propose their own ideas. This latter concept is a significant difference between GSoC and Outreachy that has caused unintended confusion for some mentors in the past. I have frequently put teasers on my blog, without full specifications, to see how students would try to respond. Some mentors are much more precise, telling students exactly what needs to be delivered and how to go about it. Both approaches are valid early in the program.

Student inquiries

Students start sending inquiries to some mentors well before GSoC starts. When Google publishes the list of organizations to participate (that was on 12 February this year), the number of inquiries increases dramatically, in the form of personal emails to the mentors, inquiries on the debian-outreach mailing list, the IRC channel and many project-specific mailing lists and IRC channels.

Over 300 students contacted me personally or through the mailing list during the application phase (between 12 February and 27 March). This is a huge number and makes it impossible to engage in a dialogue with every student. In the years I have mentored most recently, 2016 and 2018, I've personally put a bigger effort into engaging other mentors during this phase and introducing them to some of the students who had already made a good first impression.

As an example, Jacob Adams first inquired about my PKI/PGP Clean Room idea back in January. I was really excited about his proposals but I knew I simply didn't have the time to mentor him personally, so I added his blog to Planet Debian and suggested he put out a call for help. One mentor, Daniele Nicolodi, replied to that call, and I also introduced him to Thomas Levine. They both generously volunteered and, together with Jacob, ensured a successful project. While I originally started the clean room, they deserve all the credit for the enhancements in 2018, and this emphasizes the importance of the introductions made during the early stages of GSoC.

In fact, there were half a dozen similar cases this year where I have interacted with a really promising student and referred them to the mentor(s) who appeared optimal for their profile.

After my recent travels in the Balkans, a number of people from Albania and Kosovo expressed an interest in GSoC and Outreachy. The students from Kosovo found that their country was not listed in the application form but the Google team very promptly added it, allowing them to apply for GSoC for the first time. Kosovo still can't participate in the Olympics or the World Cup, but they can compete in GSoC now.

At this stage, I was still uncertain if I would mentor any project myself in 2018 or only help with the admin role, which I had only agreed to do on a very temporary basis until the team evolves. Nonetheless, the day before student applications formally opened (12 March) and after looking at the interest areas of students who had already made contact, I decided to go ahead mentoring a single project, the wizard for new students and contributors.

Student selections

The application deadline closed on 27 March. At this time, Debian had 102 applications, an increase over the 75 applications from 2016. Five applicants were female, including three from Kosovo.

One challenge we've started to see is that since Google reduced the stipend for GSoC, Outreachy appears to pay more in many countries. Some women put more effort into an Outreachy application or don't apply for GSoC at all, even though there are far more places available in GSoC each year. GSoC typically takes over 1,000 interns in each round while Outreachy can only accept approximately 50.

Applicants are not evenly distributed across all projects. Some mentors/projects only receive one applicant and then mentors simply have to decide if they will accept the applicant or cancel the project. Other mentors receive ten or more complete applications and have to spend time studying them, comparing them and deciding on the best way to rank them and make a decision.

Given the large number of project ideas in Debian, we found that the Google portal didn't allow us to use enough category names to distinguish them all. We contacted the Google team about this and they very quickly increased the number of categories we could use, which made it much easier to tag the large number of applications so that each mentor could filter the list and only see their own applicants.

The project I mentored personally, a wizard for helping new students get started, attracted interest from 3 other co-mentors and 10 student applications. To help us compare the applications and share data we gathered from the students, we set up a shared spreadsheet using Debian's Sandstorm instance and Ethercalc. Thanks to Asheesh and Laura for setting up and maintaining this great service.

Slot requests

Switching from the mentor hat to the admin hat, we had to coordinate the requests from each mentor to calculate the total number of slots we wanted Google to fund for Debian's mentors.

Once again, Debian's Sandstorm instance, running Ethercalc, came to the rescue.

All mentors were granted access, reducing the effort for the admins and allowing a distributed, collective process of decision making. This ensured mentors could see that their slot requests were being counted correctly, but it meant far more than that too. Mentors put in a lot of effort to bring their projects to this stage and it is important for them to understand any contention for funding and make a group decision about which projects to prioritize if Google doesn't agree to fund all the slots.

Management tools and processes

Various topics were discussed by the team at the beginning of GSoC.

One discussion was about the definition of "team". Should the new delegation follow the existing pattern, reserving the word "team" for the admins, or should we move to the convention followed by the DebConf team, where the word "team" encompasses a broader group of the volunteers? A draft delegation text was prepared, but we haven't asked for it to be ratified; this is a pending task for the 2019 team (more on that later).

There was discussion about the choice of project management tools, keeping with Debian's philosophy of only using entirely free tools. We compared various options, including Redmine with the Agile (Kanban) plugin, Kanboard (as used by DebConf team), and more Sandstorm-hosted possibilities, such as Wekan and Scrumblr. Some people also suggested ideas for project management within their Git repository, for example, using Org-mode. There was discussion about whether it would be desirable for admins to run an instance of one of these tools to manage our own workflow and whether it would be useful to have all students use the same tool to ease admin supervision and reporting. Personally, I don't think all students need to use the same tool as long as they use tools that provide public read-only URLs, or even better, a machine-readable API allowing admins to aggregate data about progress.

Admins set up a Git repository for admin and mentor files on Debian's new GitLab instance, Salsa. We tried to put in place a process to synchronize the mentor list on the wiki, the list of users granted team access in Salsa and the list of mentors maintained in the GSoC portal. This could be taken further by asking mentors and students to put a Moin Category tag on the bottom of their personal pages on the wiki, allowing indexes to be built automatically.

Students accepted

On 23 April, the list of selected students was confirmed. Shortly afterward, a Debian blog post appeared welcoming the students.

OSCAL 2018, Albania and Kosovo visit

I traveled to Tirana, Albania for OSCAL'18 where I was joined by two of the Kosovan students selected by Debian. They helped run the Debian booth, comprising a demonstration of software defined radio from Debian Hams.

Enkelena Haxhiu and I gave a talk together about communications technology. This was Enkelena's first talk. In the audience was Arjen Kamphuis; he was one of the last people to ask a question at the end. His recent disappearance is a disturbing mystery.

DebConf18

A GSoC session took place at DebConf18, the video is available here and includes talks from GSoC and Outreachy participants past and present.

Final results

Many of the students have already been added to Planet Debian where they have blogged about what they did and what they learned in GSoC. More will appear in the near future.

If you like their project, if you have ideas for an event where they could present it or if you simply live in the same region, please feel free to contact the students directly and help them continue their free software adventure with us.

Meeting more students

Google's application form for organizations like Debian asks us what we do to stay in contact with students after GSoC. Crossing multiple passes in the Swiss and Italian alps to find Sergio Alberti at Capo di Lago is probably one of the more exotic answers to that question.

Looking back at past internships

I first mentored students in GSoC 2013. Since then, I've been involved in mentoring a total of 12 students in GSoC and 3 interns in Outreachy as well as introducing many others to mentors and organizations. Several of them stay in touch and it's always interesting to hear about their successes as they progress in their careers and in their enjoyment of free software.

The Outreachy organizers have chosen a picture of two of my former interns, Urvika Gola (Outreachy 2016) and Pranav Jain (GSoC 2016) for the mentors page of their web site. This is quite fitting as both of them have remained engaged and become involved in the mentoring process.

Lessons from GSoC 2018, preparing for 2019

One of the big challenges we faced this year is that, as the new admin team was only coming together for the first time, we didn't have any policies in place before mentors and students started putting significant effort into their proposals.

Potential mentors start to put in significant effort from February, when the list of participating organizations is usually announced by Google. Therefore, it seems like a good idea to make any policies clear to potential mentors before the end of January.

We faced a similar challenge with selecting mentors to attend the GSoC mentor summit. While some ideas were discussed about the design of a selection process or algorithm, the admins fell back on the previous policy of random selection, as mentors may have expected that policy to still be in force when they signed up.

As I mentioned already, there are several areas where GSoC and Outreachy are diverging. This has already led to some unfortunate misunderstandings in both directions, for example when people familiar with Outreachy rules have been unaware of GSoC differences and vice versa, and I'll confess to being one of several people who has been confused at least once. Mentors often focus on the projects and candidates and don't always notice the annual rule changes. Unfortunately, this requires involvement and patience from both the organizers and admins to guide the mentors through any differences at each step.

The umbrella organization question

One of the most contentious topics in Debian's GSoC 2018 program was the discussion of whether Debian can and should act as an umbrella organization for smaller projects that are unlikely to participate in GSoC in their own right.

As an example, in 2016, four students were mentored by Savoir Faire Linux (SFL), makers of the Ring project, under the Debian umbrella. In 2017, Ring joined the GNU Project and they mentored students under the GNU Project umbrella organization. DebConf17 coincidentally took place in Montreal, Canada, not far from the SFL headquarters and SFL participated as a platinum sponsor.

Google's Mentor Guide explicitly encourages organizations to consider this role, but does not oblige them to do so:

Google’s program administrators actually look quite fondly on the umbrella organizations that participate each year.

For an organization like Debian, with our philosophy, independence from the cloud and distinct set of tools, such as the Salsa service mentioned earlier, being an umbrella organization gives us an opportunity to share the philosophy and working methods for mutual benefit while also giving encouragement to related projects that we use.

Some people expressed concern that this may cut into resources for Debian-centric projects, but it appears that Google has not limited the number of additional places in the program for this purpose. This is one of the significant differences with Outreachy, where the number of places is limited by funding constraints.

Therefore, if funding is not a constraint, I feel that the most important factor to evaluate when considering this issue is the size and capacity of the admin team. Google allows up to five people to be enrolled as admins and if enough experienced people volunteer, it can be easier for everybody whereas with only two admins, the minimum, it may not be feasible to act as an umbrella organization.

Within the team, we observed various differences of opinion: for example some people were keen on the umbrella role while others preferred to restrict participation to Debian-centric projects. We have the same situation with Outreachy: some mentors and admins only want to do GSoC, while others only do Outreachy and there are others, like myself, who have supported both programs equally. In situations like this, nobody is right or wrong.

Once that fundamental constraint, the size of the admin team, is considered, I personally feel that any related projects engaged on this basis can be evaluated for a wide range of synergies with the Debian community, including the people, their philosophy, the tools used and the extent to which their project will benefit Debian's developers and users. In other words, this doesn't mean any random project can ask to participate under the Debian umbrella but those who make the right moves may have a chance of doing so.

Financial

Google pays each organization an allowance of USD 500 for each slot awarded to the organization, plus some additional funds related to travel. This generally corresponds to the number of quality candidates identified by the organization during the selection process, regardless of whether the candidate accepts an internship or not. Where more than one organization requests funding (a slot) for the same student, both organizations receive the bounty; we had at least one case like this in 2018.

For 2018, Debian has received USD 17,200 from Google.

GSoC 2019 and beyond

Personally, as I indicated in January, I was only able to take this on temporarily, so I'm not going to participate as an admin in 2019; it is a good time for other members of the community to think about the role. Each organization that wants to participate needs to propose a full list of admins to Google in January 2019, so now is the time for potential admins to step forward, decide how they would like to work together as a team, and work out how to recruit mentors and projects.

Thanks to all the other admins, mentors, the GSoC team at Google, the Outreachy organizers and members of the wider free software community who supported this initiative in 2018. I'd particularly like to thank all the students, though; it is really exciting to work with people who are so open-minded and patient and who remain committed even when faced with unanticipated challenges and adversity.

on October 19, 2018 08:26 AM

October 18, 2018

The Xubuntu team is happy to announce the immediate release of Xubuntu 18.10!

Xubuntu 18.10 is a regular release and will be supported for 9 months, until July 2019. If you need a stable environment with longer support time, we recommend that you use Xubuntu 18.04 LTS instead.

The final release images are available as torrents and direct downloads from xubuntu.org/getxubuntu/

As the main server might be busy in the first few days after the release, we recommend using the torrents if possible.

We’d like to thank everybody who contributed to this release of Xubuntu!

Highlights and Known Issues

Highlights

  • Several Xfce components and apps were updated to their 4.13 development releases, bringing us closer to a Gtk+3-only desktop
  • elementary Xfce Icon Theme 0.13 with the manila folder icons as seen in the upstream elementary icon theme
  • Greybird 3.22.9, which improves the look and feel of our window manager, alt-tab dialog, Chromium, and even pavucontrol
  • A new default wallpaper featuring a gentle purple tone that greatly complements our Gtk+ and icon themes

Known Issues

  • At times the panel can show two network icons; this appears to be a race condition that we were not able to rectify in time for the release
  • In the Settings Manager, the mouse fails to scroll apps (a GTK+ 3 regression)

For more obscure known issues, information on affecting bugs, bug fixes, and a list of new package versions, please refer to the Xubuntu Release Notes.

The main Ubuntu Release Notes cover many of the other packages we carry as well as more general issues.

Support

For support with the release, navigate to Help & Support for a complete list of methods to get help.

on October 18, 2018 11:56 PM

Codenamed “Cosmic Cuttlefish”, 18.10 continues Ubuntu’s proud tradition
of integrating the latest and greatest open source technologies into a
high-quality, easy-to-use Linux distribution. The team has been hard at
work through this cycle, introducing new features and fixing bugs.

The Ubuntu kernel has been updated to the 4.18 based Linux kernel,
our default toolchain has moved to gcc 8.2 with glibc 2.28, and we’ve
also updated to openssl 1.1.1 and gnutls 3.6.4 with TLS1.3 support.

Ubuntu Desktop 18.10 brings a fresh look with the community-driven
Yaru theme replacing our long-serving Ambiance and Radiance themes. We
are shipping the latest GNOME 3.30, Firefox 63, LibreOffice 6.1.2, and
many others.

Ubuntu Server 18.10 includes the Rocky release of OpenStack along with
the clustering-enabled LXD 3.0, new network configuration via netplan.io,
and iteration on the next-generation fast server installer. Ubuntu Server
brings major updates to industry standard packages available on private
clouds, public clouds, containers or bare metal in your datacentre.

The newest Ubuntu Budgie, Kubuntu, Lubuntu, Ubuntu Kylin, Ubuntu MATE,
Ubuntu Studio, and Xubuntu are also being released today.

More details can be found for these at their individual release notes:

https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseNotes#Official_flavours

Maintenance updates will be provided for 9 months for all flavours
releasing with 18.10.

To get Ubuntu 18.10
——————-

In order to download Ubuntu 18.10, visit:

http://www.ubuntu.com/download

Users of Ubuntu 18.04 will be offered an automatic upgrade to 18.10
if they have selected to be notified of all releases, rather than just
LTS upgrades. For further information about upgrading, see:

http://www.ubuntu.com/download/desktop/upgrade

As always, upgrades to the latest version of Ubuntu are entirely free
of charge.

We recommend that all users read the release notes, which document
caveats, workarounds for known issues, as well as more in-depth notes
on the release itself. They are available at:

http://wiki.ubuntu.com/CosmicCuttlefish/ReleaseNotes

Find out what’s new in this release with a graphical overview:

http://www.ubuntu.com/desktop
http://www.ubuntu.com/desktop/features

If you have a question, or if you think you may have found a bug
but aren’t sure, you can try asking in any of the following places:

#ubuntu on irc.freenode.net
http://lists.ubuntu.com/mailman/listinfo/ubuntu-users
http://www.ubuntuforums.org
http://askubuntu.com

Help Shape Ubuntu
—————–

If you would like to help shape Ubuntu, take a look at the list
of ways you can participate at:

http://community.ubuntu.com/contribute

About Ubuntu
————

Ubuntu is a full-featured Linux distribution for desktops, laptops,
netbooks and servers, with a fast and easy installation and regular
releases. A tightly-integrated selection of excellent applications
is included, and an incredible variety of add-on software is just a
few clicks away.

Professional services including support are available from Canonical
and hundreds of other companies around the world. For more information
about support, visit:

http://www.ubuntu.com/support

More Information
—————-

You can learn more about Ubuntu and about this release on our
website listed below:

http://www.ubuntu.com

To sign up for future Ubuntu announcements, please subscribe to
Ubuntu’s very low volume announcement list at:

http://lists.ubuntu.com/mailman/listinfo/ubuntu-announce

Originally posted to the ubuntu-announce mailing list on Thu Oct 18 17:47:53 UTC 2018 by Adam Conrad, on behalf of the Ubuntu Release Team

on October 18, 2018 07:47 PM

Kubuntu 18.10 is released today

Kubuntu General News

Kubuntu 18.10 has been released, featuring the beautiful Plasma 5.13 desktop from KDE.

Codenamed “Cosmic Cuttlefish”, Kubuntu 18.10 continues our proud tradition of integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

The team has been hard at work through this cycle, introducing new features and fixing bugs.

Under the hood, there have been updates to many core packages, including a new 4.18-based kernel, Qt 5.11, KDE Frameworks 5.50, Plasma 5.13.5 and KDE Applications 18.04.3

Kubuntu has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kdeconnect, Kstars, Peruse, Latte-dock, Firefox and LibreOffice, and stability improvements to KDE Plasma. In addition, Snap integration in Plasma Discover software center is now enabled by default, while Flatpak integration is also available to add on the settings page.

For a list of other application updates, upgrading notes and known bugs be sure to read our release notes:

https://wiki.ubuntu.com/CosmicCuttlefish/ReleaseNotes/Kubuntu

Download 18.10 or read about how to upgrade from 18.04.

Additionally, users who wish to test the latest Plasma 5.14.1 and Frameworks 5.51, which came too late in our release cycle to make it into 18.10 by default, can install these via our backports PPA. This is only the first bugfix release of Plasma 5.14, with four more to be released in the coming months, so early adopters should be aware that there may be more bugs to be found (and reported).

on October 18, 2018 05:53 PM
The Ubuntu Studio team is pleased to announce the release of Ubuntu Studio 18.10 “Cosmic Cuttlefish”. As a regular release, this version of Ubuntu Studio will be supported for 9 months. Since it’s just out, you may experience some issues, so you might want to wait a bit before upgrading. Please see the release notes […]
on October 18, 2018 05:45 PM

Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Oh yeah, we've also made a bespoke Ubuntu MATE 18.10 image for the GPD Pocket and GPD Pocket 2. Read on to learn more...

Ubuntu MATE 18.10
Superposition on the Intel Core i7-8809G Radeon RX Vega M powered Hades Canyon NUC

What changed since the Ubuntu MATE 18.04 final release?

Curiously, the work during this Ubuntu MATE 18.10 release has really been focused on what will become Ubuntu MATE 18.04.2. Let me explain.

MATE Desktop

The upstream MATE Desktop team have been working on many bug fixes for MATE Desktop 1.20.3, which has resulted in a lot of maintenance updates in the upstream releases of MATE Desktop. The Debian packaging team for MATE Desktop, of which I am a member, has been updating all the MATE packages to track these upstream bug fixes and new releases. Just about all MATE Desktop packages and associated components, such as AppMenu and MATE Dock Applet, have been updated. Now that all these fixes exist in the 18.10 release, we will start the process of SRU'ing (backporting) them to 18.04 so that they will feature in the Ubuntu MATE 18.04.2 release due in February 2019. The fixes should start landing in Ubuntu MATE 18.04 very soon, well before the February deadline.

Hardware Enablement

Ubuntu MATE 18.04.2 will include a hardware enablement stack (HWE) based on what is shipped in Ubuntu 18.10. Ubuntu users are increasingly adopting the current generation of AMD RX Vega GPUs, both discrete and integrated solutions such as the Intel Core i7-8809G Radeon RX Vega M found in the Hades Canyon NUC and some laptops. I have been lobbying people within the Ubuntu project to upgrade to newer versions of the Linux kernel, firmware, Mesa and Vulkan that offer the best possible "out of box" support for AMD GPUs. Consequently, Ubuntu 18.10 (of any flavour) is great for owners of AMD graphics solutions and these improvements will soon be available in Ubuntu 18.04.2 too.

GPD Pocket

Alongside the generic image for 64-bit Intel PCs we're also releasing a bespoke image for the GPD Pocket and GPD Pocket 2 that includes the hardware specific tweaks to get these devices working "out of the box" without any faffing about. See our GPD Pocket page for more details.


Ubuntu MATE 18.10 running on the GPD Pocket (left) and GPD Pocket 2 (right)


Raspberry Pi images

We're planning on releasing Ubuntu MATE images for the Raspberry Pi after the 18.10 release is out, in October 2018. It usually takes about a month to get the Raspberry Pi images built and tested, but we've encountered some challenges with the 18.04-based images which have delayed their release. Hopefully we'll have something in time for Christmas 2018 :-)

Major Applications

Accompanying MATE Desktop 1.20.3 and Linux 4.18 are Firefox 59.0.2, VLC 3.0.4, LibreOffice 6.1.2.1 and Thunderbird 60.2.1.

See the Ubuntu 18.10 Release Notes for details of all the changes and improvements that Ubuntu MATE benefits from.

Download Ubuntu MATE 18.10

We've redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE

  • Nothing significant.

Ubuntu family issues

This is our known list of bugs that affect all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on October 18, 2018 05:00 PM

S11E32 – Thirty-Two Going on Spinster

Ubuntu Podcast from the UK LoCo

This week we interview Daniel Foré about the final release of elementary 5.0 (Juno), bring you some Android love and go over all your feedback.

It’s Season 11 Episode 32 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 18, 2018 02:00 PM

October 15, 2018

Jeremy Bicha wrote up an unknown Ubuntu feature: “printing” direct to a Google Drive PDF. I rather wanted this, but I don’t run the Gnome desktop, so I thought I might be out of luck. But no! It works fine on my Ubuntu MATE desktop too. A couple of extra tweaks are required, though. This is unfortunately a bit technical, but it should only need setting up once.

You need the Gnome Control Centre and Gnome Online Accounts installed, if you don’t have them already, as well as the Google Cloud Print extension that Jeremy mentions. From a terminal, run sudo apt install gnome-control-center gnome-online-accounts cpdb-backend-gcp.

Next, you need to launch the Control Centre, but it doesn’t like you if you’re not running the Gnome desktop. So, we lie to it. In that terminal, run XDG_CURRENT_DESKTOP=GNOME gnome-control-center online-accounts. This should correctly start the Control Centre, showing the online accounts. Sign in to your Google account using that window. (I only have Files and Printers selected; you don’t need Mail and Calendars and so on to get this printing working.)

Then… it all works. From now on, when you go to print something, the print dialogue will, after a couple of seconds, show a new entry: “Save to Google Drive”. Choose that, and your document will “print” to a PDF stored in Google Drive. Easy peasy. Nice one Jeremy for the write-up. It’d be neat if Ubuntu MATE could integrate this a little more tightly.

on October 15, 2018 10:31 PM

nymea

Michael Zanetti

It’s been quite a while since I last wrote a post. Lots of things have changed around here, but even though I am not actively developing for Ubuntu itself any more, it doesn’t mean that I’ve left the Ubuntu and FOSS world in general. In fact, I’ve been pretty busy hacking on some more free software goodness. A few of you have surely heard about it, but for the biggest part, allow me to introduce you to nymea.

nymea is an IoT platform mainly based on Ubuntu. Well, that’s what we develop on; we provide packages for Debian, and snaps for all the platforms supporting snaps too.

It consists of 3 parts: nymea:core, nymea:app and nymea:cloud.
The purpose of this project is to enable easy integration of various things with each other. Being plugin-based, it can make all sorts of things (devices, online services…) work together.

Practically speaking this means two things:

– It will allow users to have a completely open source smart home setup which does everything offline. Everything is processed offline, including the smartness. Turning your living room lights on when it gets dark? nymea will do it, and it’ll do it even without your internet connection. It comes with nymea:core to be installed on a gateway device in your home (a Raspberry Pi, or any other device that can run Ubuntu/Debian or snapd) and nymea:app, available in app stores and also as a desktop app in the snap store.

– It delivers a developer platform for device makers. Looking for a solution that easily allows you to make your device smart? Ubuntu:core + nymea:core together will get you sorted in no time to have an app for your “thing” and allow it to react to just about any input it gets.

nymea:cloud is an optional addition to nymea:core and nymea:app and makes it possible to extend the nymea system with features like remote connection, push notifications or Alexa integration (not released yet).

So if that got you curious, check out https://wiki.nymea.io (and perhaps https://nymea.io in general) or simply install nymea and nymea-app and get going (on snap systems you need to connect some plugs and interfaces for all the bits and pieces to work; alternatively we have a PPA ready for use too).

on October 15, 2018 05:01 PM

I am pleased to announce the release of Xfce Screensaver (xfce4-screensaver) 0.1.0! This is an early release targeted to testers and translators. Bugs and patches welcome!

About

Xfce Screensaver is a screen saver and locker that aims to have simple, sane, secure defaults and be well integrated with the Xfce desktop.

It is a port of MATE Screensaver, itself a port of GNOME Screensaver. It has been tightly integrated with the Xfce desktop, utilizing Xfce libraries and the Xfconf configuration backend.

Homepage · Bugzilla · Git

Features

  • Integration with the Xfce Desktop per-monitor wallpaper
  • Locking down of configuration settings via Xfconf
  • DBus interface for limited screensaver interaction
  • Full translation support into many languages
  • Shared styles with LightDM GTK+ Greeter
  • Support for XScreensaver screensavers
  • User switching

Requirements

  • DBus >= 0.30
  • GLib >= 2.50.0
  • GTK+ >= 3.22.0
  • X11 >= 1.0
  • garcon >= 0.5.0
  • libxklavier >= 5.2
  • libxfce4ui >= 4.12.1
  • libxfce4util >= 4.12.1
  • Xfconf >= 4.12.1

Screenshots


Downloads

Please be aware that this is alpha-quality software. It is not currently recommended for use in production machines. I invite you to test it, report bugs, provide feedback, and submit patches so we can get it ready for the world.

Source tarball (md5, sha1, sha256)

on October 15, 2018 10:51 AM

October 14, 2018

In this episode we talked about Pinebooks, the Librem Key, SolusOS, and much more. An episode full of relevant information on the topics that have been dominating the news. You know the drill: listen, subscribe, and share!

Attribution and licenses

The image: Photo on Visualhunt.com

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the CC0 1.0 Universal License.

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other kinds of use; contact us for validation and authorization.

on October 14, 2018 11:43 PM

There is an interesting hidden feature available in Ubuntu 18.04 LTS and newer. To enable this feature, first install cpdb-backend-gcp.

sudo apt install cpdb-backend-gcp

Make sure you are signed in to Google with GNOME Online Accounts. Open the Settings app to the Online Accounts page. If your Google account is near the top above the Add an account section, then you’re all set.

Currently, only LibreOffice is supported. Hopefully, for 19.04, other GTK+ apps will be able to use the feature.

This feature was developed by Nilanjana Lodh and Abhijeet Dubey when they were Google Summer of Code 2017 participants. Their mentors were Till Kamppeter, Aveek Basu, and Felipe Borges.

Till has been trying to get this feature installed by default in Ubuntu since 18.04 LTS, but it looks like it won’t make it in until 19.04.

I haven’t seen this feature packaged in any other Linux distros yet. That might be because people don’t know about it, so that’s why I’m posting about it today! If you are a distro packager, the 3 packages you need are cpdb-libs, cpdb-backend-gcp, and cpdb-backend-cups. The final package enables easy printing to any IPP printer. (I didn’t mention it earlier because I believe Ubuntu 18.04 LTS already supports that feature through a different package.)

Save to Google Drive

In my original blog post, I confused the cpdb feature with a feature that already exists in GTK3 built with GNOME Online Accounts support. This should already work on most distros.

When you print a document, there will be an extra Save to Google Drive option. Saving to Google Drive saves a PDF of your document to your Google Drive account.

This post was edited on October 16 to mention that cpdb only supports LibreOffice now and that Save to Google Drive is a GTK3 feature instead.

October 17: Please see Felipe’s comments. It turns out that even Google Cloud Print works fine in distros with recent GTK3. The point of the cpdb feature is to make this work in apps that don’t use GTK3. So I guess the big benefit now is that you can use Google Cloud Print or Save to Google Drive from LibreOffice.

on October 14, 2018 02:31 PM

The Ubuntu release team have announced the first RC test ISO builds for all 18.10 flavours.

Please help us test these and subsequent RC builds, so that we can have an amazing and well tested release in the coming week.

As noted below, the initial builds will NOT be the final ones.

Over the next few hours, builds will start popping on the Cosmic Final
milestone page[1] on the ISO tracker.  These builds are not final.
We're still waiting on a few more fixes, a few things to migrate, etc.
I've intentionally not updated base-files or the ISO labels to reflect
the release status (so please don't file bugs about those).

What there are, however, are "close enough" for people to be testing in
anger, filing bugs, fixing bugs, iterating image builds, and testing
all over again.  So, please, don't wait until Wednesday night to test,
testing just before release is TOO LATE to get anything fixed.  Get out
there, grab your favourite ISO, beat it up, report bugs, escalate bugs,
get things fixed, respin (if you're a flavour lead with access), and
test, test... And test.  Did I mention testing?  Please[2] test.

Thanks,

... Adam

[1] http://iso.qa.ubuntu.com/qatracker/milestones/397/builds
[2] Please.

Downloads for RC builds can be found by following the link after clicking through to ‘Cosmic Final’ on the Ubuntu ISO tracker. Please report test case results if you have an Ubuntu SSO account (or are prepared to make one). Feedback can also be given via our normal email lists, IRC, forums etc.

Upgrade testing from 18.04 in installed systems (VM or otherwise) is also a very useful way to help prepare for the new release. Instructions for upgrade can be found on the Ubuntu help wiki.

Ubuntu ISO tracker: http://iso.qa.ubuntu.com/qatracker/
Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
Kubuntu 18.10 Upgrade instructions: https://help.ubuntu.com/community/CosmicUpgrades/Kubuntu

on October 14, 2018 08:46 AM
Adam Conrad always does a great job in stating that people should test the Release Candidates. Here’s what he has said this time: Over the next few hours, builds will start popping on the Cosmic Final milestone page[1] on the ISO tracker. These builds are not final. We’re still waiting on a few more fixes, […]
on October 14, 2018 12:04 AM

October 13, 2018

I’m back to blogging, after shutting down my wordpress.com hosted blog in spring. This time it’s fully privacy-aware, self-hosted, and integrated with Mastodon.

Let’s talk details: In spring, I shut down my wordpress.com hosted blog, due to concerns about GDPR implications with comment hosting and ads and stuff. I’d like to apologize for using that; back when I started it (in 2007), it was the easiest way to get into blogging. Please forgive me for subjecting you to that!

Recently, Google announced the end of Google+. As some of you might know, I posted a lot of medium-long posts there, rather than doing blog posts; especially after I disabled the wordpress site.

With the end of Google+, I want to try something new: I’ll host longer pieces on this blog, and post shorter messages on @juliank@mastodon.social. If you follow the Mastodon account, you will see toots for each new blog post as well, linking to the blog post.

Mastodon integration and privacy

Now comes the interesting part: If you reply to the toot, your reply will be shown on the blog itself. This works with a tiny bit of JavaScript that talks to a simple server-side script, which finds toots from me mentioning the blog post, and then the replies to that toot.

This protects your privacy, because mastodon.social does not see which blog post you are looking at, because it is contacted by the server, not by you. Rendering avatars requires loading images from mastodon.social’s file server, however - to improve your privacy, all avatars are loaded with referrerpolicy='no-referrer', so assuming your browser is half-way sane, it should not be telling mastodon.social which post you visited either. In fact, the entire domain also sets Referrer-Policy: no-referrer as an http header, so any link you follow will not have a referrer set.

The integration was originally written by @bjoern@mastodon.social – I have done some moderate improvements to adapt it to my theme, make it more reusable, and replace and extend the caching done in a JSON file with a Redis database.

Source code

This blog is free software; generated by the Hugo snap. All source code for it is available:

(Yes I am aware that hosting the repositories on GitHub is a bit ironic given the whole focus on privacy and self-hosting).

The theme makes use of Hugo pipes to minify and fingerprint JavaScript, and vendorizes all dependencies instead of embedding CDN links, to, again, protect your privacy.

Future work

I think I want to make the theme dark, to be more friendly to the eyes. I also might want to make the mastodon integration a bit more friendly to use. And I want to get rid of jQuery, it’s only used for a handful of calls in the Mastodon integration JavaScript.

If you have any other idea for improvements, feel free to join the conversation in the mastodon toot, send me an email, or open an issue at the github projects.

Closing thoughts

I think the end of Google+ will be an interesting time, requiring a lot of people in the open source world to replace one of their main communication channels with a different approach.

Mastodon and Diaspora are both in the race, and I fear the community will split or everyone will have two accounts in the end. I personally think that Mastodon + syndicated blogs provide a good balance: You can quickly write short posts (up to 500 characters), and you can host long articles on your own and link to them.

I hope that one day diaspora* and mastodon federate together. If we end up with one federated network that would be the best outcome.

on October 13, 2018 09:03 PM

This week, the popular screenshot app Shutter was removed from Debian Unstable & Ubuntu 18.10. (It had already been removed from Debian “Buster” 6 months ago and some of its “optional” dependencies had already been removed from Ubuntu 18.04 LTS).

Shutter will need to be ported to gtk3 before it can return to Debian. (Ideally, it would support Wayland desktops too but that’s not a blocker for inclusion in Debian.)

See the Debian bug for more discussion.

I am told that flameshot is a nice well-maintained screenshot app.

I believe Snap or Flatpak are great ways to make apps that use obsolete libraries available on modern distros that can no longer keep those libraries around. There isn’t a Snap or Flatpak version of Shutter yet, so hopefully someone interested in that will help create one.

on October 13, 2018 06:29 PM

October 12, 2018

S11E31 – Thirty-One Dates in Thirty-One Days

Ubuntu Podcast from the UK LoCo

This week Ubuntu Podcast debuts on Spotify and re-embraces Mastodon. We’ve been unboxing the GPD Pocket 2 and building a Clockwork Pi. We discuss Plex releasing as a Snap, Microsoft joining the OIN, Minecraft open-sourcing some libraries, Google axing Google+, Etcher (allegedly) not honouring privacy settings, plus we also round up community news and events.

It’s Season 11 Episode 31 of the Ubuntu Podcast! Alan Pope and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on October 12, 2018 02:00 PM

At DerbyCon 8, I had the opportunity to take the “Adversarial Attacks and Hunt Teaming” course presented by Ben Ten and Larry Spohn from TrustedSec. I went into the course hoping to get a refresher on the latest techniques for Windows domains (I do mostly Linux, IoT & Web Apps at work), as well as to get a better understanding of how hunt teaming is done. (As a Red Teamer, I feel understanding the work done by the blue team is critical to greater success and reduced detection.) From the course description:

This course is completely hands-on, focusing on the latest attack techniques and building a defense strategy around them. This workshop will cover both red and blue team efforts and provide methods for understanding how to best detect threats in an enterprise. It will give penetration testers the ability to learn the newest techniques, as well as teach blue teamers how to defend against them.

The Good

The course was definitely hands-on, which I really appreciate as someone who learns by “doing” rather than by listening to someone talk. Both instructors were obviously knowledgeable and able to answer questions about how tools and techniques work. It’s really valuable to understand why things work instead of just running commands blindly. Having the why lets you pivot your knowledge to other tools when your first choice isn’t working for some reason. (AV, endpoint protection, etc.)

Both instructors are strong teachers with an obvious passion for what they do. They presented the material well and mostly at a reasonable pace. They also tag-team well: while one is presenting, the other can help students having issues without delaying the entire class.

The final lab/exam was really good. We were challenged to get Domain Admin on a network we hadn’t seen so far, with the top 5 finishers receiving challenge coins. Despite how little I do with Windows, I was happy to be one of the recipients!

TrustedSec Coin

The Bad

The course began quite slowly for my experience level. The first half-day or so involved basic reconnaissance with nmap and an introduction to Metasploit. While I understand that not everyone has experience with these tools, the course description did not lead me to expect it to be as basic as it was.

There was a section on physical attacks that, while extremely interesting, was not really a good fit for the rest of the course material. It was too brief to really learn how to execute these attacks from a Red Team perspective, and physical security is often out of scope for the Blue Team (or handled by a different group). Other than entertainment value, I do not feel like it added anything to the course.

I would have liked a little more “Blue” content. The hunt-teaming section was mostly about configuring Windows logging and pointing it to an ELK server for aggregation and analysis. Again, this was interesting, but we did not dive into other sources of data (network firewalls, non-Windows systems, etc.) as I had hoped. The section also did not spend any time discussing how to correlate different events, only how to log the events you would want to look for.

Summary

Overall, I think this is a good course presented by excellent instructors. If you’ve done an OSCP course or even basic penetration testing, expect some duplication in the first day or so, but there will still be techniques that you might not have seen (or had the chance to try out) before. This was my first time trying the “Kerberoasting” attack, so it was nice to be able to do it hands-on. All in all, a solid course, but I’d generally recommend it to those early in their careers or those transitioning to an offensive security role.

on October 12, 2018 07:00 AM

October 10, 2018

In the previous post, we saw how to build distrobuilder, then use it to create a LXD container image for Ubuntu. We used one of the existing configuration files for an Ubuntu container image.

In this post, we are going to see how to compose the YAML configuration files that describe what the container image will look like. The aim of this post is to work through a minimal configuration file to create a container image for Alpine Linux. A future post will deal with a more complete configuration file.

Creating a minimal configuration for a container image

Here is the minimal configuration for an Alpine Linux container image. Note that we have omitted some parts that would make the container more useful (namespaces, etc.). The containers from this container image will still work for our humble purposes.

image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

Save this to a file named, for example, myalpine.yaml, and then build the container image. It takes a couple of seconds to build. We will come back to this minimal configuration and explain it in detail in a later section.

$ sudo $HOME/go/bin/distrobuilder build-lxd myalpine.yaml 
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.8/community/x86_64/APKINDEX.tar.gz
v3.8.1-27-g42946288bd [http://dl-cdn.alpinelinux.org/alpine/v3.8/main]
v3.8.1-23-ga2d8d72222 [http://dl-cdn.alpinelinux.org/alpine/v3.8/community]
OK: 9539 distinct packages available
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on /home/username/ContainerImages/minimal/rootfs.squashfs, block size 131072.
[==================================================|] 90/90 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
duplicates are removed
Filesystem size 2093.68 Kbytes (2.04 Mbytes)
48.30% of uncompressed filesystem size (4334.32 Kbytes)
Inode table size 3010 bytes (2.94 Kbytes)
17.41% of uncompressed inode table size (17290 bytes)
Directory table size 4404 bytes (4.30 Kbytes)
54.01% of uncompressed directory table size (8154 bytes)
Number of duplicate files found 5
Number of inodes 481
Number of files 64
Number of fragments 5
Number of symbolic links 329
Number of device nodes 1
Number of fifo nodes 0
Number of socket nodes 0
Number of directories 87
Number of ids (unique uids + gids) 2
Number of uids 1
root (0)
Number of gids 2
root (0)
shadow (42)
$

And here is the container image. The size of the container image is about 2MB.

$ ls -l
total 2108
-rw-r--r-- 1 root root 364 Oct 10 20:30 lxd.tar.xz
-rw-rw-r-- 1 user user 287 Oct 10 20:30 myalpine.yaml
-rw-r--r-- 1 root root 2146304 Oct 10 20:30 rootfs.squashfs

Let’s import it into our LXD installation.

$ lxc image import --alias myminimal lxd.tar.xz rootfs.squashfs 
Image imported with fingerprint: ee9208767e745bb980a074006fa462f6878e763539c439e6bfa34c029cfc318b

And now launch a container from this container image.

$ lxc launch myminimal mycontainer
Creating mycontainer
Starting mycontainer

Let’s see the container running. It’s running, but did not get an IP address. That’s part of the cost-cutting in the initial minimal configuration file.

$ lxc list mycontainer
+-------------+---------+------+------+
| NAME | STATE | IPV4 | IPV6 |
+-------------+---------+------+------+
| mycontainer | RUNNING | | |
+-------------+---------+------+------+

Let’s get a shell in the container and start doing things! First, set up the network configuration.

$ lxc exec mycontainer -- sh
~ # pwd
/root
~ # cat /etc/network/interfaces
cat: can't open '/etc/network/interfaces': No such file or directory
~ # echo "auto eth0" > /etc/network/interfaces
~ # echo "iface eth0 inet dhcp" >> /etc/network/interfaces

Then, get an IP address using DHCP.

~ # ifup eth0
udhcpc: started, v1.28.4
udhcpc: sending discover
udhcpc: sending discover
udhcpc: sending select for 10.50.250.150
udhcpc: lease of 10.50.250.150 obtained, lease time 3600

We got a lease, but for some reason the network was not configured; both ifconfig and route showed no configuration. So, we complete the network configuration manually. And it works, we have access to the Internet!

~ # ifconfig eth0 10.50.250.150 up
~ # route add -net default gw 10.50.250.1
~ # ping -c 1 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=120 time=17.451 ms
--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 17.451/17.451/17.451 ms
~ # exit
$

Let’s clean up and start studying the configuration file. We force-delete the container, and then delete the container image.

$ lxc delete --force mycontainer
$ lxc image delete myminimal

Understanding the configuration file of a container image

Here, again, is the configuration file for a minimal Alpine container image. It has three sections,

  1. image, with information about the image. We can put anything for the description and distribution name. The release version, though, must be a real release.
  2. source, which describes where to get the image, ISO, or packages of the distribution. The downloader is a plugin in distrobuilder that knows how to get the appropriate files, as long as it knows the URL and the release version. The url is the URL prefix of the location with the files. keys and keyserver are used to digitally verify the authenticity of the files.
  3. packages, which indicates the plugin that knows how to deal with the specific package manager of the distribution. In general, you can also indicate here which additional packages to install, which to remove, and which to update.
image:
  description: My Alpine Linux
  distribution: minimalalpine
  release: 3.8.1

source:
  downloader: alpinelinux-http
  url: http://dl-cdn.alpinelinux.org/alpine/
  keys:
    - 0482D84022F52DF1C4E7CD43293ACD0907D9495A
  keyserver: keyserver.ubuntu.com

packages:
  manager: apk

The downloader and url go hand in hand. The URL is the prefix for the repository that the downloader will use to get the necessary files.

The keys are necessary to verify the authenticity of the files. The keyserver is used to download the actual public keys for the IDs that were specified in keys. You could very well not specify a keyserver, and distrobuilder would request the keys from the root PGP servers. However, those servers are often overloaded and the attempt can easily fail. It has happened to me several times, so I now explicitly use the Ubuntu keyserver.
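If you want to check by hand that the key can actually be fetched before running a build, you can ask GnuPG directly (assuming gpg is installed; the key ID is the one from the configuration above):

gpg --keyserver keyserver.ubuntu.com --recv-keys 0482D84022F52DF1C4E7CD43293ACD0907D9495A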

Summary

We have seen how to use a minimal configuration file for an Alpine container image. In future posts, we are going to see how to create more complete configuration files.

on October 10, 2018 08:12 PM

October 09, 2018

Couchsurfing and Airbnb are websites that connect people with an extra guest room or couch with random strangers on the Internet who are looking for a place to stay. Although Couchsurfing predates Airbnb by about five years, the two sites are designed to help people do the same basic thing and they work in extremely similar ways. They differ, however, in one crucial respect. On Couchsurfing, the exchange of money in return for hosting is explicitly banned. In other words, couchsurfing only supports the social exchange of hospitality. On Airbnb, users must use money: the website is a market on which people can buy and sell hospitality.

Comparison of yearly sign-ups of trusted hosts on Couchsurfing and Airbnb. Hosts are “trusted” when they have any form of references or verification on Couchsurfing and at least one review on Airbnb.

The figure above compares the number of people with at least some trust or verification on both Couchsurfing and Airbnb, based on when each user signed up. The picture, as I have argued elsewhere, reflects a broader pattern that has occurred on the web over the last 15 years. Increasingly, social-based systems of production and exchange, many, like Couchsurfing, created during the first decade of the Internet boom, are being supplanted and eclipsed by similar market-based players like Airbnb.

In a paper led by Max Klein that was recently published and will be presented at the ACM Conference on Computer-supported Cooperative Work and Social Computing (CSCW) which will be held in Jersey City in early November 2018, we sought to provide a window into what this change means and what might be at stake. At the core of our research were a set of interviews we conducted with “dual-users” (i.e. users experienced on both Couchsurfing and Airbnb). Analyses of these interviews pointed to three major differences, which we explored quantitatively from public data on the two sites.

First, we found that users felt that hosting on Airbnb requires higher-quality services than hosting on Couchsurfing. For example, people who at some point hosted only on Couchsurfing often said that they did not host on Airbnb because they felt that their homes weren’t of sufficient quality. One participant explained that:

“I always wanted to host on Airbnb but I didn’t actually have a bedroom that I felt would be sufficient for guests who are paying for it.”

Another interviewee said:

“If I were to be paying for it, I’d expect a nice stay. This is why I never Airbnb-hosted before, because recently I couldn’t enable that [kind of hosting].”

We conducted a quantitative analysis of rates of Airbnb and Couchsurfing hosting in different cities in the United States and found that median home prices are positively related to the number of per-capita Airbnb hosts and negatively related to the number of Couchsurfing hosts. Our exploratory models predicted that for each $100,000 increase in median house price in a city, there will be about 43.4 more Airbnb hosts per 100,000 citizens, and 3.8 fewer hosts on Couchsurfing.

A second major theme we identified was that, while Couchsurfing emphasizes people, Airbnb places more emphasis on places. One of our participants explained:

“People who go on Airbnb, they are looking for a specific goal, a specific service, expecting the place is going to be clean […] the water isn’t leaking from the sink. I know people who do Couchsurfing even though they could definitely afford to use Airbnb every time they travel, because they want that human experience.”

In a follow-up quantitative analysis we conducted of the profile text from hosts on the two websites with a commonly-used system for text analysis called LIWC, we found that, compared to Couchsurfing, a lower proportion of words in Airbnb profiles were classified as being about people while a larger proportion of words were classified as being about places.

Finally, our research suggested that although hosts are the powerful parties in exchange on Couchsurfing, social power shifts from hosts to guests on Airbnb. Reflecting a much broader theme in our interviews, one of our participants expressed this concisely, saying:

“On Airbnb the host is trying to attract the guest, whereas on Couchsurfing, it works the other way round. It’s the guest that has to make an effort for the host to accept them.”

Previous research on Airbnb has shown that guests tend to give their hosts lower ratings than vice versa. Sociologists have suggested that this asymmetry in ratings will tend to reflect the direction of underlying social power balances.

Average sentiment score of reviews on Airbnb and Couchsurfing, separated by direction (guest-to-host or host-to-guest). Error bars show the 95% confidence interval.

We both replicated this finding from previous work and found that, as suggested in our interviews, the relationship is reversed on Couchsurfing. As shown in the figure above, Airbnb guests will typically give a less positive review to their host than vice versa, while on Couchsurfing guests will typically give a more positive review to the host.

As Internet-based hospitality shifts from social systems to the market, we hope that our paper can point to some of what is changing and some of what is lost. For example, our first result suggests that less wealthy participants may be cut out by market-based platforms. Our second theme suggests a shift toward less human-focused modes of interaction brought on by increased “marketization.” We see the third theme as providing somewhat of a silver lining, in that shifting power toward guests was seen by some of our participants as a positive change in terms of safety and trust. Travelers in unfamiliar places are often vulnerable, and shifting power toward guests can be helpful.

Although our study is only of Couchsurfing and Airbnb, we believe that the shift away from social exchange and toward markets has broad implications across the sharing economy. We end our paper by speculating a little about the generalizability of our results. I have recently spoken at much greater length about the underlying dynamics driving the shift we describe in my recent LibrePlanet keynote address.

More details are available in our paper which we have made available as a preprint on our website. The final version is behind a paywall in the ACM digital library.


This blog post, and paper that it describes, is a collaborative project by Maximilian Klein, Jinhao Zhao, Jiajun Ni, Isaac Johnson, Benjamin Mako Hill, and Haiyi Zhu. Versions of this blog post were posted on several of our personal and institutional websites. Support came from GroupLens Research at the University of Minnesota and the Department of Communication at the University of Washington.
on October 09, 2018 05:02 PM

With LXC and LXD you can run system containers, which are containers that behave like a full operating system (like a Virtual Machine does). There are already official container images for most Linux distributions. When you run lxc launch ubuntu:18.04 mycontainer, you are using the ubuntu: repository of container images to launch a container with Ubuntu 18.04.

In this post, we are going to see

  1. an introduction to distrobuilder, the tool that creates container images
  2. how to recreate a container image
  3. how to customize a container image

Introduction to distrobuilder

The following are the command line options of distrobuilder. You can use distrobuilder to create container images for both LXC and LXD.

$ distrobuilder
System container image builder for LXC and LXD

Usage:
  distrobuilder [command]

Available Commands:
  build-dir   Build plain rootfs
  build-lxc   Build LXC image from scratch
  build-lxd   Build LXD image from scratch
  help        Help about any command
  pack-lxc    Create LXC image from existing rootfs
  pack-lxd    Create LXD image from existing rootfs

Flags:
      --cache-dir   Cache directory
      --cleanup     Clean up cache directory (default true)
  -h, --help        help for distrobuilder
  -o, --options     Override options (list of key=value)

Use "distrobuilder [command] --help" for more information about a command.

The build-dir command builds the root filesystem (rootfs) of the distribution and stops there. This option makes sense if we plan to make some custom manual changes to the rootfs. We would then need to use either pack-lxc or pack-lxd to package up the rootfs into a container image.
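As a sketch of that two-step workflow (the exact argument order is my assumption; check distrobuilder --help for the authoritative usage):

# Build just the rootfs, make manual changes, then pack it into a LXD image.
sudo $HOME/go/bin/distrobuilder build-dir ubuntu.yaml ./rootfs
sudo touch ./rootfs/etc/my-custom-file        # any manual customization goes here
sudo $HOME/go/bin/distrobuilder pack-lxd ubuntu.yaml ./rootfs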

The build-lxc and build-lxd commands create container images for either LXC or LXD, both from scratch. They both require a YAML configuration file, and that is all they need to produce a container image.

Installation

Currently, there are no binary packages of distrobuilder, so you will need to compile it from source. To do so, first install the Go programming language and some other dependencies. Here are the commands to do this.

sudo apt update
sudo apt install -y golang-go debootstrap rsync gpg squashfs-tools

Second, download the source code of the distrobuilder repository. The source will be placed in $HOME/go/src/github.com/lxc/distrobuilder/. Here is the command to do this.

go get -d -v github.com/lxc/distrobuilder

Third, enter the directory with the source code of distrobuilder and run make to compile the source code. This will generate the executable program distrobuilder, and it will be located at $HOME/go/bin/distrobuilder. Here are the commands to do this.

cd $HOME/go/src/github.com/lxc/distrobuilder
make
cd

Creating a container image

To create a container image, first create a directory where you will be placing the container images, and enter that directory.

mkdir -p $HOME/ContainerImages/ubuntu/
cd $HOME/ContainerImages/ubuntu/

Then, copy one of the example yaml configuration files for container images into this directory. In this example, we are creating an Ubuntu container image.

cp $HOME/go/src/github.com/lxc/distrobuilder/doc/examples/ubuntu ubuntu.yaml

Finally, run distrobuilder to create the container image. We are using the build-lxd option to create a container image for LXD. We need sudo because preparing the rootfs requires setting the ownership and permissions of files to IDs that a non-root account cannot use. Also note the way we invoke distrobuilder (as $HOME/go/bin/distrobuilder). It has to be an absolute path because under sudo the $PATH is different from that of our non-root user account.

sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml

It takes about five minutes to build the Ubuntu container image. Be patient.

If the command is successful, you will get output similar to the following. The lxd.tar.xz file is the description of the container image. The rootfs.squashfs file is the root filesystem (rootfs) of the container image. Together, these two files make up the container image.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ ls -l
total 121032
-rw-r--r-- 1 root      root            560 Oct  3 13:28 lxd.tar.xz
-rw-r--r-- 1 root      root      123928576 Oct  3 13:28 rootfs.squashfs
-rw-rw-r-- 1 multipass multipass      3317 Oct  3 13:19 ubuntu.yaml
multipass@dazzling-termite:~/ContainerImages/ubuntu$

Adding the container image to LXD

To add the container image to a LXD installation, use the lxc image import command as follows.

multipass@dazzling-termite:~/ContainerImages/ubuntu$ lxc image import lxd.tar.xz rootfs.squashfs --alias mycontainerimage
Image imported with fingerprint: ae81c04327b5b115383a4f90b969c97f5ef417e02d4210d40cbb17a038729a27

Let’s see the container image in LXD. The ubuntu.yaml had a setting to create an Ubuntu 17.10 (artful) image. The size is 118MB.

$ lxc image list mycontainerimage
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
|      ALIAS       | FINGERPRINT  | PUBLIC |  DESCRIPTION  |  ARCH  |   SIZE   |         UPLOAD DATE          |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+
| mycontainerimage | ae81c04327b5 | no     | Ubuntu artful | x86_64 | 118.19MB | Oct 3, 2018 at 12:09pm (UTC) |
+------------------+--------------+--------+---------------+--------+----------+------------------------------+

Launching a container from the container image

To launch a container from the freshly created container image, use lxc launch as follows. Note that you do not specify a repository of container images (like ubuntu: or images:) because the image is located locally.

$ lxc launch mycontainerimage c1
Creating c1
Starting c1

How to customize a container image

The ubuntu.yaml configuration file contains all the details that are required to create an Ubuntu container image. We can edit the file and make changes to the generated container image.

Changing the distribution release

The file that is currently included in the distrobuilder repository has the following section:

image:
  distribution: ubuntu
  release: artful
  description: Ubuntu {{ image.release }}
  architecture: amd64

We can change the release to either bionic (for Ubuntu 18.04) or cosmic (for Ubuntu 18.10), save, and then build the container image again.
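For example, to switch the release to bionic without opening an editor (a quick sketch):

# Point the configuration at Ubuntu 18.04 and rebuild the image.
sed -i 's/release: artful/release: bionic/' ubuntu.yaml
sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml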

Troubleshooting

Error “gpg: no valid OpenPGP data found”

$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring: gpg: keyring `/tmp/distrobuilder.920564219/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.920564219/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpgkeys: key 790BC7277767219C42C86F933B4FE6ACC0B21F32 can't be retrieved
gpg: no valid OpenPGP data found.
gpg: Total number processed: 0
gpg: keyserver communications error: keyserver helper general error
gpg: keyserver communications error: unknown pubkey algorithm
gpg: keyserver receive failed: unknown pubkey algorithm

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.

Error “gpg: keyserver timed out”

$ sudo $HOME/go/bin/distrobuilder build-lxd ubuntu.yaml
Error: Error while downloading source: Failed to create keyring: gpg: keyring `/tmp/distrobuilder.854636592/secring.gpg' created
gpg: keyring `/tmp/distrobuilder.854636592/pubring.gpg' created
gpg: requesting key C0B21F32 from hkp server pgp.mit.edu
gpg: keyserver timed out
gpg: keyserver receive failed: keyserver error

The keyserver pgp.mit.edu is often under load and does not respond. You can edit the YAML configuration file and replace pgp.mit.edu with keyserver.ubuntu.com.
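The same edit can be done from the shell, for example:

# Swap the overloaded keyserver for the Ubuntu one in the configuration.
sed -i 's/pgp.mit.edu/keyserver.ubuntu.com/' ubuntu.yaml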

on October 09, 2018 12:20 PM

I like using kdiff3, I also like using git, I also like using bundles for applications. Let’s put the three together!

Set up the KDE git flatpak repo and install kdiff3

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo
flatpak install kdeapps org.kde.kdiff3

Write a tiny shim around this so we can use it from git. Put it in /usr/bin/kdiff3, or in $HOME/bin/kdiff3 if your $PATH is set up to include binaries from $HOME.

#!/bin/sh
exec flatpak run org.kde.kdiff3 "$@"

Don’t forget to chmod +x kdiff3!

git mergetool should now pick up our kdiff3 wrapper automatically. So all that’s left to do is have a merge conflict, and off we go with git mergetool.
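If the auto-detection doesn’t kick in, or you prefer to be explicit, you can also tell git which tool to use; this is standard git configuration, shown here for completeness:

# Make kdiff3 the default merge tool instead of relying on auto-detection.
git config --global merge.tool kdiff3
git mergetool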

on October 09, 2018 11:00 AM

October 06, 2018

This time, Francisco Molinero, Francisco Javier Teruelo, and Marcos Costales chat about the sending of statistics to Canonical and whether we should donate more to free software.

Episode 1 of season 3

The podcast is available to listen to on:
on October 06, 2018 01:22 PM

October 05, 2018

In various free software communities, I've come across incidents where people have been criticized inappropriately when they couldn't attend an event or didn't meet other people's expectations. This has happened to me a few times and I've seen it happen to other people too.

As it turns out, this is an incredibly bad thing to do. I'm not writing about this to criticize any one person or group in return. Rather, it is written in the hope that people who are still holding grudges like this might finally put them aside and also to reassure other volunteers that you don't have to accept this type of criticism.

Here are some of the comments I received personally:

"Last year, you signed up for the conference but didn't attend, cancelling on the last minute, when you had already been ..."

"says the person who didn't attend any of the two ... he was invited to, because, well, he had no time"

"you didn't stayed at the booth enough at ..., never showed up at the booth at the ... and never joined ..."

Having seen this in multiple places, I don't want this blog to focus on any one organization, person or event.

In all these cases, the emails were sent to large groups on CC, one of them appeared on a public list. Nobody else stepped in to point out how wrong this is.

Out of these three incidents, one of them subsequently apologized and I sincerely thank him for that.

The emails these were taken from were quite negative and accusatory. In two of these cases, the accusation was being made after almost a year had passed. It leaves me wondering how many people in the free software community are holding grudges like this and for how long.

Personally, going to an event usually means giving talks and workshops. Where possible, I try to involve other people in my presentations too, and may disappear for an hour or skip a social gathering while we review slides. Every volunteer, whether they are a speaker, an organizer or anything else, usually knows the most important place they should be at any moment, and it isn't helpful to criticize them months later without even asking, for example, about what they were doing rather than what they weren't doing.

Think about some of the cases where a volunteer might cancel their trip or leave an event early:

  • At the last minute they decided to go to the pub instead.
  • They never intended to go in the first place and wanted to waste your time.
  • They are not completely comfortable telling you their reason because they haven't got to know you well enough or they don't want to put it in an email.
  • There is some incredibly difficult personal issue that may well be impossible for them to tell you about because it is uncomfortable or has privacy implications. Imagine a sibling committing suicide, somebody or their spouse having a miscarriage, a child with a mental health issue, or a developer who is simply burnt out. A lot of people wouldn't tell you about tragedies in this category, and they are entitled to their privacy.

When you think about it, the first two cases are actually really unlikely. You don't do that yourself, so why do you assume or imply that any other member of the community would behave that way?

So it comes down to the fact that when something like this happens, it is probably one of the latter two cases.

Even if it really was one of the first two cases, criticizing them won't make them more likely to show up next time; it has no positive consequences.

In the third case, if the person doesn't trust you well enough to tell you the reason they changed their plans, they are going to trust you even less after this criticism.

In the fourth case, your criticism is going to be extraordinarily hurtful for them. Blaming them, criticizing them, stigmatizing them and even punishing them and impeding their future participation will appear incredibly cruel from the perspective of anybody who has suffered from some serious tragedy: yet these things have happened right under our noses in respected free software projects.

What is more, the way the subconscious mind works and makes associations, they are going to be reminded about that tragedy or crisis when they see you (or one of your emails) in future. They may become quite brisk in dealing with you or go out of their way to avoid you.

Many organizations have adopted codes of conduct recently. In Debian, it calls on us to assume good faith. The implication is that if somebody doesn't behave the way you hope or expect, or if somebody tells you there is a personal problem without giving you any details, the safest thing to do and the only thing to do is to assume it is something in the most serious category and treat them with the respect that you would show them if they had fully explained such difficult circumstances to you.

on October 05, 2018 08:35 AM

October 03, 2018

I've been running static analysis using CoverityScan on linux-next for 2 years with the aim of finding bugs (and trying to fix some) before they are merged into Linux.  I have also been gathering the defect count data and tracking the defect trends:
As one can see from above, CoverityScan has found a considerable amount of defects and these are being steadily fixed by the Linux developer community.  The encouraging fact is that the outstanding issues are reducing over time. Some of the spikes in the data are because of changes in the analysis that I'm running (e.g. getting more coverage), but even so, one can see a definite trend downwards in the total defects in the Kernel.

With static analysis, some of these reported defects are false positives or corner cases that are in fact impossible to occur in real life and I am slowly working through these and annotating them so they don't get reported in the defect count.

It must also be noted that over these two years the kernel has grown from around 14.6 million to 17.1 million lines of code, so the defect density has dropped from 1 defect in every ~2100 lines to 1 defect in every ~3000 lines.  All in all, it is a remarkable improvement for such a large and complex codebase that is growing in size at such a rate.
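As a quick back-of-the-envelope check of those densities (my arithmetic, not figures from the scan itself):

# Implied outstanding defect counts from the quoted sizes and densities.
echo $(( 14600000 / 2100 ))   # ~6952 defects two years ago
echo $(( 17100000 / 3000 ))   # ~5700 defects today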
on October 03, 2018 09:17 AM

October 01, 2018

Long time no blog

It’s been a busy half a year since my last blog post. Immersing myself in the cloud native world and my non-work related training kept me quite busy. I learned quite a lot and it’s been a very rewarding experience, but whenever I thought “hey, I should blog about this”, something else came up which grabbed my attention.

This time I wanted to get the word out about a fun project I was involved in and reflect a bit on some aspects of these last six months.

Hacktoberfest

I wrote about it in the Weaveworks blog already: the Weave Scope and Weave Flux teams are participating in Hacktoberfest! ❤ If you haven’t heard about it yet: Hacktoberfest is a month-long celebration of getting involved in open source. It’s happening for the fifth time now and the general idea is: you sign up, and if you manage to contribute five pull requests on GitHub in October, you win a Hacktoberfest t-shirt. To me it’s a very low-barrier initiative and a fun way to get to know open source and new communities. I love how everybody benefits from this.

As a contributor, apart from getting a nice t-shirt, it’s your chance to learn more about open source, get in touch with a community of developers, learn how they do things, learn about new tools, have your view on a problem welcomed, and maybe even join the team after all.

From my own experience I know how empowering it feels to be welcomed into a group of fellow contributors who all care about the project as much as you do; to learn from seasoned developers, have your work integrated into later releases, and see others benefit from your fix as well.

I still remember how, quickly after joining the Ubuntu community about 13-14 years ago (yes, it’s really been that long) and after having been encouraged to get some of my work uploaded into Ubuntu, I realised that we could do a lot better at helping new contributors get started. I wanted others to benefit from my experience and what I had learned from the (quite busy) maintainers of the day. I started collecting links to helpful docs, code snippets, and tasks to work on on the wiki. It was a simple thing to do, and over time our whole developer team took the experience of new contributors on as a cornerstone of the project.

As a project member and maintainer, events like Hacktoberfest are your opportunity to reflect on questions like

  • How do we invite new contributors into the project? How high or low is the bar to entry?
  • Are our docs sufficient? Can new folks find their way around easily?
  • Is our use of tools and process well-defined and clear?
  • Are we good at identifying new issues and detailing where and how to fix them?

So apart from just finding new contributors or getting issues fixed in your code, you also get to learn about an outsider’s perspective and how your project is run. If you take the feedback seriously, it’s a good way to pave the way for others to come in.

Weave Scope and Weave Flux

Scope and Flux are two very important projects at Weaveworks and they both tell the GitOps story from different angles.

Weave Scope has a special place in my heart. When I demoed Weave Cloud at events in the past couple of months the Explore part of it (which uses Weave Scope internally) especially drew attention, because everyone immediately got how important observability is in a micro-services world. Scope can be used in very different scenarios and is written using modern languages. I’ve been part of the Weave Scope meetings in the past few weeks and it was great to see how people from different parts of the globe got together, front-enders, back-enders, designers and just generally interested folks who want to improve Scope for their particular use. I was glad that Satyam Zode brought up the idea of participating in Hacktoberfest and started by going through the list of issues to see which would be good for new contributors (thanks Bryan and Filip who helped out as well!).

If this sounds interesting check out the Weave Scope Hacktoberfest page.

Weave Flux is the Kubernetes GitOps operator: it’s what deploys new images and config to your cluster and makes sure that the state of the cluster matches the config in git. It’s easy to get up and running and very versatile. It’s what Weaveworks has been using for years now and what many of our customers and general users rely on to go faster. In the past months I’ve seen an influx of new developers on Slack, drive-by contributions, and new thoughts about where to take the project. I’m really pleased by how friendly this community is and how keen to help each other out.

If you’re interested in helping out here, take a look at the Weave Flux Hacktoberfest page.

Leave a comment if you’re participating in Hacktoberfest too!

on October 01, 2018 02:10 PM

There's a lot of information stored all over the Internet about me, about you, about everyone. At best, most of it can just go away because it's useless; at worst, it is potentially harmful. A humorous take on this by Molly Lewis:

The place that this is the most obvious is social media. I really liked this post on old tweets by Vicki Lai which talks about the why and how of deleting Tweets. It applies to all social media. But this all got me thinking about my blog.

Blog posts tend to be more thought out (or at least I try) and seem to me to be part of the larger web. So just deleting them after a matter of time doesn't feel the same as tweets. If someone was writing about the Unity HUD I would hope they'd reference my HUD 2.0 post, as I love the direction it was going. I have other posts that are... less significant. The ones that are the most interesting are the ones that are linked to by other people, so what I'm going to do is stop linking to old blog posts. That way posts that aren't linked to by other people will stop being indexed by search engines and effectively disappear from the Internet. I have no idea if this will actually work.

The policy that I settled on was to show the latest five posts on my blog page, and to have the archives point to posts from the last two years. This means I need to write five posts every two years (easy, right?) to keep it consistent. It turned out that implementing this in Jekyll was a little tricky, but this post on Jekyll date filtering helped me put it together.

I think that my attitudes to data reflect a generational difference. For my generation, the idea that we could have hard drives big enough to keep historical data is exciting. Talking to younger people, I think they understand it is a liability. Perhaps fixing my blog is just me trying to be young.

on October 01, 2018 12:00 AM

September 30, 2018

Debian 10 Buster KDE Plasma

It has been a busy week!

My significant accomplishment this week is the packaging of squashfuse for Debian.

This is required for libappimage, which is next on my to-do list.

I have uploaded it to mentors here: https://mentors.debian.net/package/squashfuse

I do have a mentor/sponsor under the KDE umbrella (thank you, Lisandro!), but he is very busy and I would like to give him a break on this one.

If anyone has some spare time to give this a look, thank you!

This was a bit of a learning experience as most of my experience is in the KDE world.

I have begun the journey to Debian Maintainer status (and eventually Debian Developer); it seems all those key signing parties I have been to… I did it wrong 🙁 oops.

I now have caff installed and will work on getting my web of trust in order.

In other news, I have finished packaging ring-kde for Debian/KDE neon, but would like to get it tested in Neon before I dump it on my mentor 🙂

I have hit a few hiccups with a git build in Neon, so I am reaching out to the developer.

I will post when it is ready for testing.

I am slowly working through the Debian packaging of Mycroft (a beast!). I will post when I have something reasonable to test.

Sadly, I did not get much time for KDE binary-factory CI work this week, but I should have more time next week. 🙂

Thank you and have a great week everyone!

on September 30, 2018 11:35 PM

September 28, 2018

October approaches, and Ubuntu marches steadily along the road from one LTS to another. Ubuntu 18.10 is another step in Ubuntu’s future. And now it’s time to unveil a small part of that change: the community wallpapers to be included in Ubuntu 18.10!

Every cycle, talented artists around the world create media and release it under licenses that encourage sharing and adaptation. This cycle we had some amazing images submitted to the Ubuntu 18.10 Free Culture Showcase photo pool on Flickr, where all eligible submissions can be found. The competition was fierce; narrowing down the options to the final selections was painful!

But there can be only 12, and the final images that will be included in Ubuntu 18.10 are:

A big congratulations to the winners, and thanks to everyone who submitted a wallpaper. You can find these wallpapers (along with dozens of other stunning wallpapers) today at the links above, or in your desktop wallpaper list after you upgrade to or install Ubuntu 18.10 on October 18th.

on September 28, 2018 07:00 AM
The Ubuntu Studio team is pleased to announce the final beta release of Ubuntu Studio 18.10 Cosmic Cuttlefish. While this beta is reasonably free of any showstopper CD build or installer bugs, you may find some bugs within. This image is, however, reasonably representative of what you will find when Ubuntu Studio 18.10 is released […]
on September 28, 2018 05:09 AM

September 27, 2018

Ubuntu MATE 18.10 is a modest, yet strategic, upgrade over our 18.04 release. If you want bug fixes and improved hardware support then 18.10 is for you. For those who prefer staying on the LTS then everything in this 18.10 release is also important for the upcoming 18.04.2 release. Read on to learn more...

We are preparing Ubuntu MATE 18.10 (Cosmic Cuttlefish) for distribution on October 18th, 2018. With this Beta pre-release, you can see what we are trying out in preparation for our next (stable) version.

Ubuntu MATE 18.10 Beta
Superposition on the Intel Core i7-8809G Radeon RX Vega M powered Hades Canyon NUC

What works?

People tell us that Ubuntu MATE is stable. You may, or may not, agree.

Ubuntu MATE Beta Releases are NOT recommended for:

  • Regular users who are not aware of pre-release issues
  • Anyone who needs a stable system
  • Anyone uncomfortable running a possibly frequently broken system
  • Anyone in a production environment with data or workflows that need to be reliable

Ubuntu MATE Beta Releases are recommended for:

  • Regular users who want to help us test by finding, reporting, and/or fixing bugs
  • Ubuntu MATE, MATE, and GTK+ developers

What changed since the Ubuntu MATE 18.04 final release?

Curiously, the work during this Ubuntu MATE 18.10 release has really been focused on what will become Ubuntu MATE 18.04.2. Let me explain.

MATE Desktop

The upstream MATE Desktop team have been working on many bug fixes for MATE Desktop 1.20.x, which has resulted in a lot of maintenance updates in the upstream releases of MATE Desktop. The Debian packaging team for MATE Desktop, of which I am a member, has been updating all the MATE packages to track these upstream bug fixes and new releases. Just about all MATE Desktop packages and associated components, such as AppMenu and MATE Dock Applet, have been updated. Now that all these fixes exist in the 18.10 release, we will start the process of SRU'ing (backporting) them to 18.04 so that they will feature in the Ubuntu MATE 18.04.2 release due in February 2019. The fixes should start landing in Ubuntu MATE 18.04 very soon, well before the February deadline.

Hardware Enablement

Ubuntu MATE 18.04.2 will include a hardware enablement stack (HWE) based on what is shipped in Ubuntu 18.10. Ubuntu users are increasingly adopting the current generation of AMD RX Vega GPUs, both discrete and integrated solutions such as the Intel Core i7-8809G Radeon RX Vega M found in the Hades Canyon NUC and some laptops. I have been lobbying people within the Ubuntu project to upgrade to newer versions of the Linux kernel, firmware, Mesa and Vulkan that offer the best possible "out of box" support for AMD GPUs. Consequently, Ubuntu 18.10 (of any flavour) is great for owners of AMD graphics solutions and these improvements will soon be available in Ubuntu 18.04.2 too.

Download Ubuntu MATE 18.10 Beta

We've even redesigned the download page so it's even easier to get started.

Download

Known Issues

Here are the known issues.

Ubuntu MATE

  • The Software Boutique doesn't list any available software.
    • An update, due very soon, will re-stock the software library and add a few new applications too.

Ubuntu family issues

This is our known list of bugs that affects all flavours.

You'll also want to check the Ubuntu MATE bug tracker to see what has already been reported. These issues will be addressed in due course.

Feedback

Is there anything you can help with or want to be involved in? Maybe you just want to discuss your experiences or ask the maintainers some questions. Please come and talk to us.

on September 27, 2018 11:30 PM

September 26, 2018

Work Items To Remember

Stephen Michael Kellat

Sometimes I truly cannot remember everything. There have been many, many things going on as of late. Being on medical leave has not been helpful, either.

As we look to the last quarter of 2018, there are some matters I need to remind myself about keeping in the work plan:

  1. Finish the write-up on the research for Outernet/Othernet.

  2. Begin looking at what I need to do to set up a FidoNet node. I haven’t been involved in FidoNet since high school during President Bill Clinton’s second term in office.

  3. Consider the possibility that the folks of DarkNetPlan failed. After looking at this post, I honestly need to find a micrographics artist that I can set up a working relationship with. Passing digital data via microfilm sounds old-fashioned but seems more durable these days.

  4. Construct a proper permanent HF antenna for operating. I am a ham radio operator with General class privileges in the United States that remain barely used even though I am only a few years away from joining the Quarter Century Wireless Association.

  5. Figure out what I’m doing wrong setting up multiple HDHomeRun receivers to be tapped by a PVR-styled computer.

  6. Pick up 18 graduate semester hours so I can teach as an adjunct somewhere. This would generally have to happen in a graduate certificate program in the US or at the halfway mark in a master’s degree program.

With my day job being constantly in flux, I am sure I’ve missed something in the listing above.

on September 26, 2018 02:30 AM

September 23, 2018

Data access control isn’t easy. While it can sound quite simple (just give access to the authorized entities), it is very difficult, both on the theoretical side (who is an authorized entity? What does authorized mean? And how do we identify an entity?) and on the practical side.

On the practical side, as we will see, disclosure of private data is often an unwanted side effect of a useful feature.

Facebook and Instagram

Facebook bought Instagram back in 2012. Since then, a lot of integrations have been implemented between them: among others, when you sign up for Instagram, it will suggest people to follow based on your Facebook friends.

Your Instagram and Facebook accounts are then somehow linked: this happens both if you sign up for Instagram using your Facebook account (doh!) and if you sign up for Instagram with a new account that uses the same email as your Facebook account (there are also other ways Instagram links your new account with an existing Facebook account, but they are not of interest here).

So if you want to create a secret Instagram account, create a new mail for it ;-)

Back on topic: Instagram used to enable all of its features for new users before they had confirmed their email address. This was so as not to “interrupt” usage of the website/app; users would have time to confirm the email later.

Email address confirmation is useful to confirm you are signing up with your own email address, and not someone else’s.

Data leak

One of the features available before confirming the email address was the suggestion of whom to follow, based on the Facebook friends of the Facebook account that Instagram automatically linked.

This made it super easy to retrieve the Facebook friend list of anyone who doesn’t have an Instagram account, and since there are more than 2 billion Facebook accounts but just 800 million Instagram accounts, at least 1.2 billion accounts were vulnerable.

The method was simple: knowing the email address of the target (and an email address is anything but secret), the attacker had only to sign up for Instagram with that email, and then go to the suggestions of people to follow to see the victim’s friends.

List of victim's friends

Conclusion

The combination of two useful features (suggesting people to follow based on a linked Facebook account, and being able to use a new Instagram account immediately) made this data leak possible.

It didn’t matter whether the attacker was a Facebook friend of the victim, or what the privacy settings of the victim’s Facebook account were. Heck, the attacker didn’t need a Facebook account at all!

Timeline

  • 20 August 2018: first disclosure to Facebook
  • 20 August 2018: Facebook requested further information
  • 20 August 2018: more information provided to Facebook
  • 21 August 2018: Facebook closed the issue, saying it wasn’t a security issue
  • 21 August 2018: I submitted a new demo with more information
  • 23 August 2018: Facebook confirmed the issue
  • 30 August 2018: Facebook deployed a fix and asked for a test
  • 12 September 2018: Facebook awarded me a bounty

Bounty

Facebook awarded me a $3000 bounty for the disclosure. This was the first time I was awarded a bounty by Facebook for a security disclosure; I am quite happy with the result and I applaud Facebook for making the whole process really straightforward.

For any comment, feedback, critic, write me on Twitter (@rpadovani93) or drop an email at riccardo@rpadovani.com.

Regards, R.

on September 23, 2018 09:00 AM

And Another Thing

Stephen Michael Kellat

My Zotero database has some unfortunate comparisons and contrasts in it. For example:

Crowe, J. (2018, September 21). Google Employees Considered Changing Search Algorithm to Fight Travel Ban. Retrieved September 22, 2018, from https://www.nationalreview.com/news/google-employees-considered-changing-search-algorithm-to-fight-travel-ban/

Not the happiest of news that, apparently, President Donald John Trump isn't totally unjustified in his paranoia. The black box that is search at Google can potentially be tampered with. Without any understanding of what goes on inside Google's "black box" system, there isn't really much to assuage President Trump's fears.

That this sort of a possibility could come up in 2018 should not be surprising. After all, here are some further citations from my Zotero database:

Kellat, S. M. (2006). Intellectual Terrorism and the Church: The Case of the Google Bomb. Conference paper. Retrieved from http://eprints.rclis.org/10147/

Kellat, S. M. (2007). Print-Based Culture Meets An “Amazoogle” World: New Challenges To A Priesthood of Readers. Conference paper. Retrieved from http://eprints.rclis.org/10146/

I suppose I initially wrote about the matter in terms of malicious external actors twelve years ago. The idea of internal malicious actors came up in my writing eleven years ago. After that I began following the various color uprisings and the like but forgot to keep writing. I used to be a working academic but for some reason detoured into being a tax collector after spending time as a podcaster.

There seems to be low-hanging fruit to pursue again in research about this digital life.

on September 23, 2018 02:53 AM

September 21, 2018

Adventure Time GIF: [Princess Bubblegum] People get built different. We don't need to figure it out. We just need to respect it

on September 21, 2018 05:43 PM