May 24, 2018

Starting Thursday, May 24th, the about-to-be-released 2019 edition of my book, Ubuntu Unleashed, will be listed in InformIT’s Summer Coming Soon sale, which runs through May 29th. The discount is 40% off print and 45% off eBooks; no discount code is required. Here’s the link: InformIT Summer Sale

on May 24, 2018 04:59 AM

May 23, 2018

The current 2-year term of the Technical Board is over, and it’s time to elect a new one. For the next two weeks (until 6 June 2018) we are collecting nominations; then our SABDFL will shortlist the candidates and confirm their candidacy with them, and finally the shortlist will be put to a vote by ~ubuntu-dev.

Anyone from the Ubuntu community can nominate someone.

Please send nominations (of yourself or someone else) to Mark Shuttleworth <mark.shuttleworth at ubuntu.com> and CC: the nominee. You can optionally CC: the Technical Board mailing list, but as this is public, you *must* get the agreement of the nominated person before you CC: the list.

The current board can be seen at ~techboard.

Originally posted to the ubuntu-devel-announce mailing list on Wed May 23 18:19:18 UTC 2018 by Walter Lapchynski on behalf of the Ubuntu Community Council.

on May 23, 2018 06:54 PM

During the last few weeks of the 18.04 (Bionic Beaver) cycle, we had 2 people drop by in our development channel to respond to the call for testers from the Development and QA Teams.

It quickly became apparent to me that I was having to repeat myself to make things “basic” enough for someone who had never tested for us to understand what I was trying to put across.

After pointing to the various resources we have, and those other flavours use, it transpired that they would both have preferred something a bit easier to start with.

So I asked them to write it for us all.

Rather than belabour my point here, I’ve asked both of them to write a few words about what they needed and what they have achieved for everyone.

Before they get that chance, I would just like to thank them both for the hours of work they have put in drafting, tweaking and getting the pages into a position where we can announce their existence to you all.

You can see the fruits of their labour at our updated web page for Testers and the new pages we have at the New Tester wiki.

Kev
On behalf of the Xubuntu Development and QA Teams.

“I see the whole idea of OS software and communities helping themselves as a breath of fresh air in an ever more profit obsessed world (yes, I am a cynical old git).

I really wanted to help, but just didn’t think that I had any of the skills required, and the guides always seemed to assume a level of knowledge that I just didn’t have.

So, when I was asked to help write a ‘New Testers’ guide for my beloved Xubuntu I absolutely jumped at the chance, knowing that my ignorance was my greatest asset.

I hope what resulted from our work will help those like me (people who can easily learn but need to be told pretty much everything from the bottom up) to start testing and enjoy the warm, satisfied glow of contributing to their community.
Most of all, I really enjoyed collaborating with some very nice people indeed.”
Leigh Sutherland

“I marvel at how we live in an age in which we can collaborate and share with people all over the world – as such I really like the ideas of free and open source. A long time happy Xubuntu user, I felt the time to be involved, to go from user-only to contributor was long overdue – Xubuntu is a community effort after all. So, when the call for testing came last March, I dove in. At first testing seemed daunting, complicated and very technical. But, with leaps and bounds, and the endless patience and kindness of the Xubuntu-bunch over at Xubuntu-development, I got going. I felt I was at last “paying back”. When flocculant asked if I would help him and Leigh to write some pages to make the information about testing more accessible for users like me, with limited technical skills and knowledge, I really liked the idea. And that started a collaboration I really enjoyed.

It’s my hope that with these pages we’ve been able to get across the information needed by someone like I was when I started – a technical newbie, a noob – to simply get set up and get testing.

It’s also my hope people like you will tell us where and how these pages can be improved, with the aim of making the first forays into testing as gentle and easy as possible. Because without testing, we as a community cannot make Xubuntu as good as we’d want it to be.”
Willem Hobers

on May 23, 2018 04:49 PM

Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have “wide walls” in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In a new paper, Sayamindu Dasgupta and I attempted to provide an empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.

Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the Scratch programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, Sayamindu was guided by the “wide walls” principle when he designed and implemented the Scratch cloud variables system in 2011-2012.

While designing the system, Sayamindu hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding persistence and shared-ness. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, we saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.

Example of Scratch code that uses a cloud variable to keep track of high scores among all players of a game.
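Since the script itself is shown as an image, here is an approximate text rendering of the kind of Scratch script the caption describes (block names are approximate; “☁ high score” is the cloud variable and “score” an ordinary project variable):

when green flag clicked
forever
    if <(score) > (☁ high score)> then
        set [☁ high score] to (score)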

Although these examples reflected powerful anecdotal evidence, we were also interested in using quantitative data to reflect the causal effect of the system. Understanding the causal effect of a new design in real world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.

Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.

We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not merely an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used them—were more likely to use “plain-old” data structures in their projects as well.

The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect that 33% of projects by a prototypical “average” Scratch user would use data structures if the user in question had never used cloud variables, but that 60% of projects by a similar user would if they had used the system.

Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)

It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (this is a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.

Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context, and that it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background, in detail in our paper, which is now available for download as an open-access piece from the ACM digital library.


This blog post, and the open access paper that it describes, is a collaborative project with Sayamindu Dasgupta. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.

on May 23, 2018 04:17 PM

May 22, 2018

UbuCon Europe 2018: Analysing a dream (English)

The idea of organising the UbuCon in Xixón, Asturies took shape two years ago, while participating in the European UbuCon in Essen (Germany). Then came the Paris UbuCon, and by then we understood that there was a large enough group of people with the capacity and the will to hold a European congress for Ubuntu lovers. We had learnt a lot from our German and French colleagues, thanks to their amazing organisations; at the same time, our handicap was the lack of a consolidated group in Spain.


Asturies



The first task was to bring together a group of people big enough, and motivated enough, to work together both on the preparation and on running the activities during the three main days of the UbuCon. Eleven volunteers responded to Marcos Costales' call, forming a Telegram group where the first two decisions were taken:
  • The chosen city: Xixón
  • The dates: coinciding with the release of Ubuntu 18.04




A singular building was selected for the UbuCon. The "Antiguo Instituto Jovellanos" had everything we needed: a perfect location in the centre of the city, a big conference room for 100 people, an inner courtyard for stands, and several extra rooms available as we secured more and more speakers.



We made our move and offered Spain as a potential host for the next UbuCon Europe. We knew that the idea was floating in the minds of our Portuguese and British colleagues, but somehow we had the feeling that it was our moment to take the risk, and we did. Considering that there is no European organisation for Ubuntu (although it is getting close), we tacitly received the honour of being in charge of the European UbuCon 2018. Then the long process of making a dream come true began.


The organisation was simple, and became simpler still. With Marcos as the main coordinator, we organised several Telegram groups: communication, organisation... and we began to publicise the event.
Attendee registration was handled through the website http://ubucon.eu. After a first press release, spread by some important Spanish-language Ubuntu blogs, we received an avalanche of registrations (more than 100 in the first days) that made us fear for our reception capacity, but on the other hand let us take the pulse of the interest aroused.

We created a Gmail account to manage communications, reused the existing European UbuCon accounts on Twitter and Google+, and created a Facebook account, managed by our friend Diogo Constantino from Portugal, to share information with everyone on social networks.
We used Telegram to create an information channel (205 members) and two groups, one for attendees (68 members) and one for speakers (31 members).
Suddenly it seemed to us that the creature was growing and growing, even beyond our expectations. We had to ask for institutional support, and we got it.

The Municipal Foundation of Culture and Education of Xixón gave us the Old Jovellanos Institute. 
The Gijón Congress Office provided us with contacts and discounts on bus and train transport (ALSA and RENFE).
Canonical helped us financially by paying the insurance that covered our possible accident liabilities and the small costs of auxiliary material. 
Nathan Haines, Marius Quabeck and Ubuntu FR provided us with tablecloths for the tables.
Slimbook provided us with laptops for each of the conference rooms and for the reception of attendees.


At that point, with our dream rising in the wishing oven like a huge cake, it seemed to us that we needed legal cover in the form of an association, and we tried. We live in a big old country that is not agile for these things; besides the difficulty of bringing together people from Alicante, Andalusia, Asturias, the Balearic Islands, Barcelona, León and Madrid, the administration stood in our way, and it was not possible.
Speaking of the dispersion of the organisers: how did we coordinate?
Telegram was the main axis. We used EtherPad to build documents collaboratively, Mumble (hosted on a Raspberry Pi) for coordination meetings, Drive for documents and the records of attendees and speakers, and MailChimp for bulk mailings.



So it was time to call for speakers, and then what was already overwhelming became a real luxury. We began to receive requests for talks from individuals and businesses, and there were weeks when we had to meet every other day to decide on approvals. In the end we had 37 talks, conferences and workshops, 6 stands, and 3 podcasts broadcasting in Spanish, Portuguese and German. On Saturday the 28th we had to provide up to 4 rooms at a time to accommodate everyone.


A UbuCon Europe must meet at least three objectives:
  1. Share knowledge
  2. Bring Europeans together around Ubuntu, strengthening bonds of friendship
  3. Have fun


To achieve the third objective we had the best possible host. Marcos was in his hometown and had everything needed to make the hours of socialising and having fun unique. We knew it was very important for the social events to be close to each other; Xixón is a city with an ideal size for that, and so they were organised. Centred on Saturday's espicha (a traditional Asturian cider feast), which 87 people attended, the rest of the days offered a full programme that allowed those who did not know Spain, and Asturias in particular, to touch the sky of a tremendously welcoming land with their fingers. Live music at the Trisquel, drinks and disco by the sea, cider and Asturian gastronomy on Poniente beach, and cultural visits... Could you ask for more?




And the dream came true: on Friday 27th April, UbuCon Europe 2018 was inaugurated. All the scheduled events ran on time, and our staff of 8 handled, as well as we knew how, the reception of the event and the logistics of the up to 4 simultaneous rooms we needed at some points. Without incident, 140 attendees were able to listen to some of the 37 talks; more than 350 messages were published on Twitter, along with hundreds of posts on Google+ and Facebook; and we spent €474 on the small expenses of the organisation, which possibly brought the city some €40,000 of business between hotels, restaurants, transport... The social events were a success, and the group of participants/speakers stayed together for the three days.
We're proud: we can't always make our dreams come true.


And that's all. See you next time!




UbuCon Europe 2018: Análisis de un sueño (Spanish)

La organización de la Ubucon en Xixón se empezó a gestar dos años antes, mientras participábamos en Alemania en la Ubucon Europea de Essen, luego vino París y para esas fechas empezamos a entender que había un grupo suficiente de personas con capacidad de organización y con  voluntad de llevar a cabo un congreso europeo de amantes de Ubuntu. Habíamos aprendido de alemanes y franceses con organizaciones fantásticas y sin embargo teníamos en nuestra contra la inexistencia de un equipo consolidado en España.


Lo primero fue reunir a un número suficiente de personas dispuestas a trabajar, tanto en la preparación como en el desarrollo de los tres días. A la convocatoria de Marcos Costales nos apuntamos 11 personas reunidas en un grupo de Telegram que tomamos las primeras decisiones:
  •     La ciudad elegida, Xixón
  •     Las fechas coincidentes con la salida de Ubuntu 18.04

Conseguimos un edificio singular para la celebración. El antiguo Instituto Jovellanos tenía todo lo que necesitábamos: estaba en un lugar céntrico de la ciudad, tenía un salón de actos para más de 100 personas, un patio para montar stands y la posibilidad de utilizar distintas aulas según fuéramos consiguiendo más conferenciantes.




Nos decidimos y nos postulamos como país anfitrión. Sabíamos que la idea rondaba entre portugueses y británicos, pero de alguna manera teníamos la sensación de que era nuestro momento para lanzarnos a la piscina y así lo hicimos y dado que hasta ahora no existe ninguna organización europea de Ubuntu (aunque ya está cerca) de una manera tácita se nos otorgó el honor de hacernos cargo de la Ubucon Europea de 2018 y entonces empezó la carrera por hacer realidad un sueño.


La organización fue sencilla y aún se simplificó más. Pivotando sobre Marcos como responsable principal, organizamos en Telegram varios grupos: comunicación, organización... y empezamos a dar publicidad al evento.
La recogida de asistentes se hizo a través de la página web http://ubucon.eu donde tras un primer comunicado de prensa que difundieron algunos blogs importantes de Ubuntu en español, recibimos una avalancha de inscripciones (más de 100 en los primeros días) que nos hizo temer por nuestra capacidad de acogida y que por otro lado nos permitió ir tomando el pulso al interés suscitado.

Creamos una cuenta de GMail para administrar las comunicaciones, reutilizamos las cuentas existentes de la UbuCon europea en Twitter y Google+ y creamos una de Facebook que administró nuestro amigo Diogo Constantino de Portugal, para dar información a todo el mundo en redes sociales.
Usamos Telegram para crear un canal de información (205 miembros) y dos grupos, uno para asistentes (68 miembros) y otro para conferenciantes (31 miembros).
De pronto nos pareció que la criatura crecía y crecía, incluso por encima de nuestras expectativas. Tuvimos que pedir apoyo institucional y lo conseguimos.

La Fundación Municipal de Cultura y Educación de Xixón nos cedió el Antiguo Instituto Jovellanos
La Oficina de Congresos de Gijón nos facilitó contactos y descuentos en transporte de autobús y tren (ALSA y RENFE).
Canonical nos ayudó económicamente pagando el seguro que cubría nuestras posibles responsabilidades por accidentes y los pequeños costes de material auxiliar
Nathan Haines, Marius Quabeck y Ubuntu FR nos facilitaron manteles para las mesas.
Slimbook nos facilitó portátiles para cada una de las salas de conferencias y para la recepción de asistentes.

En aquel momento, con nuestro sueño creciendo en el horno de los deseos como un enorme bizcocho, nos pareció que necesitábamos un amparo legal en forma de asociación y lo intentamos. Vivimos en un país grande y viejo que no es ágil para estas cosas y además de la dificultad de reunir a personas de Alicante, Andalucía, Asturias, Baleares, Barcelona, León y Madrid se nos puso por delante la administración y no fue posible.
Hablando de la dispersión de los organizadores. ¿Cómo nos coordinábamos? 
Telegram ha sido el eje principal. EtherPad  lo usamos para construir documentos de manera colaborativa. Mumble (alojado en una Raspberry PI) para las reuniones de coordinación, Drive para los documentos y registros de asistentes y conferenciantes y MailChimp para el envío de correos masivos.


Así las cosas llegó el momento de pedir conferenciantes y entonces lo que ya estaba siendo un desborde empezó a ser realmente un lujo. Empezamos a recibir peticiones de charlas por parte de particulares y empresas y hubo semanas en las que cada dos días teníamos que reunirnos para decidir las aprobaciones.  Finalmente tuvimos 37 charlas, conferencias, workshop y 6 stands, 3 podcasts emitiendo en español, portugués y alemán. El sábado 28 tuvimos que habilitar hasta 4 espacios a la vez para albergar a todo el mundo.


Una Ubucon Europe debe tener al menos tres objetivos a cumplir:
  1. Compartir conocimiento
  2. Reunir a los europeos en torno a Ubuntu, reforzar lazos de amistad
  3. Divertirse

Para conseguir el tercer objetivo contábamos con el mejor anfitrión posible. Marcos estaba en su ciudad y tenía todos los mimbres para que las horas de convivencia y diversión fuesen únicas. Sabíamos que era muy importante que los eventos sociales estuvieran cerca unos de otros. Xixón es una ciudad con un tamaño ideal para que esto ocurriera y así se organizaron. Centrados en la espicha del sábado a la que asistieron 87 personas, el resto de los días tuvimos un programa completo que permitió a quienes no conocían España, y Asturias en particular, tocar con los dedos el cielo de una tierra tremendamente acogedora. Música en vivo en el Trisquel, copas y disco junto al mar, sidra y gastronomía asturiana en la Playa de Poniente y visitas culturales... ¿Se podía pedir más?



Y el sueño se hizo realidad, el viernes 27 de Abril se inauguró la Ubucon Europe 2018. Todos los actos programados cumplieron con los horarios previstos y el staff de 8 personas atendimos tan bien como supimos la recepción del evento así como la logística de las hasta 4 salas simultáneas que en algún momento necesitamos. Sin incidentes, 140 asistentes pudieron escuchar alguna de las 37 charlas, se publicaron más de 350 mensajes en Twitter, cientos de posts en Google+ y Facebook y empleamos 474 € en los pequeños gastos de la organización, que posiblemente hayan proporcionado a la ciudad unos 40.000 € de beneficio entre hoteles, restaurantes, transporte... Los actos sociales fueron un éxito y el grupo de participantes/conferenciantes se mantuvo unido los tres días.
Estamos orgullosos. No siempre podemos hacer realidad nuestros sueños.



Y esto ha sido todo. ¡Nos vemos en la siguiente!


UbuCon Europe 2018 | Made with ❤ by:
  • Fco. Javier Teruelo de Luis
  • Sergi Quiles Pérez
  • Francisco Molinero
  • Santiago Moreira
  • Antonio Fernandes
  • Paul Hodgetts
  • Joan CiberSheep
  • Fernando Lanero
  • Manu Cogolludo
  • Marcos Costales


    Text written by Paco Molinero. Translation by Santiago Moreira.


    on May 22, 2018 09:32 AM

    May 21, 2018

    Over the weekend I've been in Tirana, Albania for OSCAL 2018.

    Crowdfunding report

    The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00; after PayPal fees of GBP 6.48 and currency conversion, the net amount was EUR 118.29. Here is a complete list of transaction IDs for transparency, so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

    The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

    Debian and Ham radio booth

    Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

    The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

    I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

    Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

    A versatile venue and the dictator's revenge

    It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid hosting an antenna for communications that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha never imagined that people might one day gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected it under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. I must remember to wear sunscreen for my next showdown with a dictator.

    The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

    Meeting with Debian's Google Summer of Code students

    Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

    We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

    Workshops and talks

    On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

    on May 21, 2018 08:44 PM

    Welcome to the Ubuntu Weekly Newsletter, Issue 528 for the week of May 13 – 19, 2018. The full version of this issue is available here.

    In this issue we cover:

    The Ubuntu Weekly Newsletter is brought to you by:

    • Krytarik Raido
    • Bashing-om
    • Wild Man
    • Chris Guiver
    • And many others

    If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

    Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

    on May 21, 2018 08:07 PM

    Are you using Kubuntu 18.04, our current LTS release?

    We currently have the Plasma 5.12.5 LTS bugfix release available in our Updates PPA, but we would like to provide the important fixes and translations in this release to all users via updates in the main Ubuntu archive. These updates would then also be provided by default with the 18.04.1 point release ISO expected in late July.

    The Stable Release Update tracking bug can be found here: https://bugs.launchpad.net/ubuntu/+source/plasma-desktop/+bug/1768245

    A launchpad.net account is required to post testing feedback as bug comments.

    The Plasma 5.12.5 changelog can be found at: https://www.kde.org/announcements/plasma-5.12.4-5.12.5-changelog.php

    [Test Case]

    * General tests:
    – Does plasma desktop start as normal with no apparent regressions over 5.12.4?
    – General workflow – testers should carry out their normal tasks, using the Plasma features they normally do, and test common subsystems such as audio, settings changes, compositing, desktop effects, suspend, etc.

    * Specific tests:
    – Check the changelog:
    – Identify items with front/user-facing changes capable of specific testing, e.g. “weather plasmoid fetches BBC weather data.”
    – Test the ‘fixed’ functionality.

    Testing involves some technical setup, so while you do not need to be a highly advanced K/Ubuntu user, some proficiency in apt-based package management is advisable.

    Details on how to enable the proposed repository can be found at: https://wiki.ubuntu.com/Testing/EnableProposed.

    Unfortunately that page illustrates Xenial and Ubuntu Unity rather than Bionic in Kubuntu. Using Discover or Muon, use Settings > More, enter your password, and ensure that Pre-release updates (bionic-proposed) is ticked in the Updates tab.

    Or from the command line, you can modify the software sources manually by adding the following line to /etc/apt/sources.list:

    deb http://archive.ubuntu.com/ubuntu/ bionic-proposed restricted main multiverse universe

    It is not advisable to upgrade all available packages from proposed, as many will be unrelated to this testing and may NOT have been sufficiently verified as safe updates. So the safest, if slightly more involved, method is to use Muon (or even Synaptic!) to select each upgradeable package with a version containing 5.12.5-0ubuntu0.1 (5.12.5.1-0ubuntu0.1 for plasma-discover, due to an additional update).
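    From the command line, one possible way to list the candidate packages, assuming bionic-proposed is already enabled and apt update has been run, is:

    apt list --upgradable 2>/dev/null | grep 5.12.5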

    Please report your findings on the bug report. If you need some guidance on how to structure your report, please see https://wiki.ubuntu.com/QATeam/PerformingSRUVerification. Testing is very important to the quality of the software Ubuntu and Kubuntu developers package and release.

    We need your help to get this important bug-fix release out the door to all of our users.

    Thanks! Please stop by the Kubuntu-devel IRC channel or Telegram group if you need clarification of any of the steps to follow.

    on May 21, 2018 03:36 PM

    May 18, 2018

    Sometimes we are on connections that have a dynamic IP. This script will save your current external IP to ~/.external-ip.

    Each time the script is run, it uses dig to query an OpenDNS resolver for your external IP. If it differs from what is stored in ~/.external-ip, it will echo (and save) the new IP; otherwise it will output nothing.

    #!/bin/sh
    # Check external IP for change
    # Ideal for use in a cron job
    #
    # Usage: sh check-ext-ip.sh
    #
    # Returns: Nothing if the IP is same, or the new IP address
    #          First run always returns current address
    #
    # Requires dig:
    #    Debian/Ubuntu: apt install dnsutils
    #    Solus: eopkg install bind-utils
    #    CentOS/Fedora: yum install bind-utils
    #
    # by Sina Mashek <sina@sinacutie.stream>
    # Released under CC0 or Public Domain, whichever is supported
    
    # Where we will store the external IP
    EXT_IP="$HOME/.external-ip"
    
    # Check if dig is installed
    if [ "$(command -v dig)" = "" ]; then
        echo "This script requires 'dig' to run"
    
        # Load distribution release information
        . /etc/os-release
    
        # Check for supported release; set proper package manager and package name
        if [ "$ID" = "debian" ] || [ "$ID" = "ubuntu" ]; then
            MGR="apt"
            PKG="dnsutils"
        elif [ "$ID" = "fedora" ] || [ "$ID" = "centos" ]; then
            MGR="yum"
            PKG="bind-utils"
        elif [ "$ID" = "solus" ]; then
            MGR="eopkg"
            PKG="bind-utils"
        else
            echo "Please consult your package manager for the correct package"
            exit 1
        fi
    
        # Will run if one of the above supported distributions was found
        echo "Installing $PKG ..."
        sudo "$MGR" install "$PKG"
    fi
    
    # We check our external IP directly from a DNS request
    GET_IP="$(dig +short myip.opendns.com @resolver1.opendns.com)"
    
    # Check if ~/.external-ip exists and already holds the current IP
    if [ -f "$EXT_IP" ] && [ "$(cat "$EXT_IP")" = "$GET_IP" ]; then
        exit 0
    fi
    
    # If it doesn't exist or the IP has changed, echo and save the new IP
    echo "$GET_IP"
    echo "$GET_IP" > "$EXT_IP"
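    
    As the header suggests, this is well suited to a cron job: cron only sends mail when a job produces output, so you would be notified only when the address changes. A sample crontab entry (the script's location is an assumption) might be:

    */15 * * * * sh $HOME/check-ext-ip.sh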
    
    on May 18, 2018 09:00 PM


    Hello Planet GNOME!

    Marco Trevisan (Treviño)

    Hey guys, although I’ve been around for a while hidden in the patches, some months ago (already!?!) I applied to join the GNOME Foundation, and a few days later – thanks to some anonymous votes – I got approved :), and thus I’m officially part of the family!

    So, thanks again, and sorry for my late “hello” 🙂

    on May 18, 2018 03:46 PM

    S11E11 – Station Eleven - Ubuntu Podcast

    Ubuntu Podcast from the UK LoCo

    This week we reconstruct a bathroom and join the wireless gaming revolution. We discuss the Steam Link app for Android and iOS, the accessible Microsoft Xbox controller, Linux applications coming to ChromeOS and round up the community news.

    It’s Season 11 Episode 11 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    on May 18, 2018 11:54 AM

    May 16, 2018

    Overview

    I'm presenting here the technical aspects of setting up a small-scale testing lab in my basement, using as little hardware as possible, and keeping costs to a minimum. For one thing, systems needed to be mobile if possible, easy to replace, and as flexible as possible to support various testing scenarios. I may wish to bring part of this network with me on short trips to give a talk, for example.

    One of the core aspects of this lab is its use of the network. I have former experience with Cisco hardware, so I picked some relatively cheap devices off eBay: a decent layer 3 switch (Cisco C3750, 24 ports, with PoE support in case I'd want to start using that), and a small Cisco ASA 5505 to act as a router. The router's configuration is basic, just enough to make sure this lab can be isolated behind a firewall and have an IP on all networks. The switch's config is even simpler, and consists of setting up VLANs for each segment of the lab (different networks for different things). It connects infrastructure (the MAAS server, and other systems that just need to always be up) via 802.1q trunks; the servers are configured with IPs on each appropriate VLAN. VLAN 1 is my "normal" home network, so that things will work correctly even without VLAN support (which means VLAN 1 is set to be the native VLAN and untagged wherever appropriate). VLAN 10 is "staging", for use with my own custom boot server. VLAN 15 is "sandbox", for use with MAAS. The switch is only powered on when necessary, to save on electricity costs and to avoid hearing its whine (since I work in the same room); this means it is usually powered off, as the ASA already provides many ethernet ports. The telco rack in use was salvaged, and so were most brackets, except for the specialized bracket for the ASA, which was bought separately. The total cost for this setup is estimated at about $500, since everything came from cheap eBay listings or salvaged, reused equipment.

    The Cisco hardware was specifically selected because I had prior experience with it, so I could make sure the features I wanted were supported: VLANs, basic routing, and logs I can make sense of. Any hardware would do; VLANs aren't absolutely required, but given many network ports on a single switch, they avoid the need for multiple switches.

    My main DNS / DHCP / boot server is a Raspberry Pi 2. It serves both the home network and the staging network. DNS is set up such that the home network can resolve any names on any of the networks, using home.example.com, staging.example.com, or even maas.example.com as the domain name following the name of the system. Name resolution for the maas.example.com domain is forwarded to the MAAS server. More on all of this later.

    The MAAS server has been set up on an old Thinkpad X230 (my former work laptop); I've been routinely using it (and reinstalling it) for various tests, but that meant reinstalling often, possibly conflicting with other projects if I tried to test more than one thing at a time. It was repurposed to just run Ubuntu 18.04, with a MAAS region and rack controller installed, along with libvirt (qemu) available over the network to remotely start virtual machines. It is connected to both VLAN 10 and VLAN 15.

    Additional testing hardware can be attached to either VLAN 10 or VLAN 15 as appropriate -- the C3750 is configured so "top" ports are in VLAN 10, and "bottom" ports are in VLAN 15, for convenience. The first four ports are configured as trunk ports if necessary. I do use a Dell Vostro V130 and a generic Acer Aspire laptop for testing "on hardware". They are connected to the switch only when needed.

    Finally, "clients" for the lab may be connected anywhere (but are likely to be on the "home" network). They are able to reach the MAAS web UI directly, or can use MAAS CLI or any other features to deploy systems from the MAAS servers' libvirt installation.

    Setting up the network hardware

    I will avoid going into the details of the Cisco hardware too much; the configuration is specific to this hardware. The ASA has a restrictive firewall that blocks off most things and allows SSH and HTTP access. Things that need to access the internet go through the MAAS internal proxy.

    For simplicity, the ASA is always .1 in any subnet, and the switch is .2 when it is required (it was also made accessible over a serial cable from the MAAS server). The Raspberry Pi is always .5, and the MAAS server is always .25. DHCP ranges were designed to reserve anything .25 and below for static assignments on the staging and sandbox networks; since I use a /23 subnet for home, half is for static assignments and the other half is for DHCP there.

    MAAS server hardware setup

    Netplan is used to configure the network on Ubuntu systems. The MAAS server's configuration looks like this:

    network:
        ethernets:
            enp0s25:
                addresses: []
                dhcp4: true
                optional: true
        bridges:
            maasbr0:
                addresses: [ 10.3.99.25/24 ]
                dhcp4: no
                dhcp6: no
                interfaces: [ vlan15 ]
            staging:
                addresses: [ 10.3.98.25/24 ]
                dhcp4: no
                dhcp6: no
                interfaces: [ vlan10 ]
        vlans:
            vlan15:
                dhcp4: no
                dhcp6: no
                accept-ra: no
                id: 15
                link: enp0s25
            vlan10:
                dhcp4: no
                dhcp6: no
                accept-ra: no
                id: 10
                link: enp0s25
        version: 2
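    After editing the YAML, the configuration can be activated with the standard netplan command:

    sudo netplan apply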
    Both VLANs are behind bridges so as to allow placing virtual machines on any network. Additional configuration files were added to define these bridges for libvirt (/etc/libvirt/qemu/networks/maasbr0.xml):
    <network>
      <name>maasbr0</name>
      <forward mode="bridge"/>
      <bridge name="maasbr0"/>
    </network>
    Libvirt also needs to be accessible from the network, so that MAAS can drive it using the "pod" feature. Uncomment "listen_tcp = 1", and set authentication as you see fit, in /etc/libvirt/libvirtd.conf. Also set:

    libvirtd_opts="-l"

    in /etc/default/libvirtd, then restart the libvirtd service.
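    
    Before handing libvirt over to MAAS, it is worth checking that it is actually reachable over TCP; a quick test from another machine on the lab network (using this lab's MAAS server address) would be:

    virsh -c qemu+tcp://10.3.99.25/system list --all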


    dnsmasq server

    The Raspberry Pi has a similar netplan config, but sets up static addresses on all interfaces (since it is the DHCP server). Here, dnsmasq is used to provide DNS, DHCP, and TFTP. The configuration is split across multiple files; here are some of the important parts:
    dhcp-leasefile=/depot/dnsmasq/dnsmasq.leases
    dhcp-hostsdir=/depot/dnsmasq/reservations
    dhcp-authoritative
    dhcp-fqdn
    # copied from maas, specify boot files per-arch.
    dhcp-boot=tag:x86_64-efi,bootx64.efi
    dhcp-boot=tag:i386-pc,pxelinux
    dhcp-match=set:i386-pc, option:client-arch, 0 #x86-32
    dhcp-match=set:x86_64-efi, option:client-arch, 7 #EFI x86-64
    # pass search domains everywhere, it's easier to type short names
    dhcp-option=119,home.example.com,staging.example.com,maas.example.com
    domain=example.com
    no-hosts
    addn-hosts=/depot/dnsmasq/dns/
    domain-needed
    expand-hosts
    no-resolv
    # home network
    domain=home.example.com,10.3.0.0/23
    auth-zone=home.example.com,10.3.0.0/23
    dhcp-range=set:home,10.3.1.50,10.3.1.250,255.255.254.0,8h
    # specify the default gw / next router
    dhcp-option=tag:home,3,10.3.0.1
    # define the tftp server
    dhcp-option=tag:home,66,10.3.0.5
    # staging is configured as above, but on 10.3.98.0/24.
    # maas.example.com: "isolated" maas network.
    # send all DNS requests for X.maas.example.com to 10.3.99.25 (maas server)
    server=/maas.example.com/10.3.99.25
    # very basic tftp config
    enable-tftp
    tftp-root=/depot/tftp
    tftp-no-fail
    # set some "upstream" nameservers for general name resolution.
    server=8.8.8.8
    server=8.8.4.4


    DHCP reservations (to avoid IPs changing across reboots for some systems I know I'll want to reach regularly) are kept in /depot/dnsmasq/reservations (as per the above), and look like this:

    de:ad:be:ef:ca:fe,10.3.0.21

    I put one reservation per file, with meaningful filenames. This helps with debugging and making changes when network cards are replaced, etc. The names used for the files do not match DNS names, but instead are a short description of the device (such as "thinkpad-x230"), since I may want to rename things later.

    Similarly, files in /depot/dnsmasq/dns have names describing the hardware, but then contain entries in hosts file form:

    10.3.0.21 izanagi

    Again, this is done so any rename of a device only requires changing the content of a single file in /depot/dnsmasq/dns, rather than also renaming other files or matching MAC addresses to make sure the right change is made.
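    
    To check that the home network can resolve names in the MAAS zone through the Raspberry Pi, a query along these lines (the hostname is just an example of a MAAS-deployed machine) should return the address MAAS assigned:

    dig +short @10.3.0.5 vocal-toad.maas.example.com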


    Installing MAAS

    At this point, the network configuration should already be complete, and libvirt should be ready and accessible from the network.

    The MAAS installation process is very straightforward. Simply install the maas package, which will pull in maas-rack-controller and maas-region-controller.
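    
    On Ubuntu 18.04 this amounts to something like the following; the second command creates the administrator account used to log in to the web interface:

    sudo apt install maas
    sudo maas createadmin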

    Once the configuration is complete, you can log in to the web interface. Use it to make sure, under Subnets, that only the MAAS-driven VLAN has DHCP enabled. To enable or disable DHCP, click the link in the VLAN column, and use the "Take action" menu to provide or disable DHCP.

    This is necessary if you do not want MAAS to fully manage the network and provide DNS and DHCP for all systems. In my case, I am leaving MAAS in its own isolated network, since I keep the server offline when I do not need it (and the home network needs to keep working when I'm away).

    Some extra modifications were made to the stock MAAS configuration to change the behavior of deployed systems. For example, I often test packages in -proposed, so it is convenient to have that repository enabled by default, with the archive pinned to avoid accidentally installing its packages. Given that I also do netplan development and might try things that would break network connectivity, I also make sure there is a static password for the 'ubuntu' user, and that I have my own account created (again, with a static, known, and stupidly simple password) so I can connect to the deployed systems on their console. I have added the following to /etc/maas/preseed/curtin_userdata:


    late_commands:
    [...]
      pinning_00: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Package: *' >> /etc/apt/preferences.d/proposed"]
      pinning_01: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin: release a={{release}}-proposed' >> /etc/apt/preferences.d/proposed"]
      pinning_02: ["curtin", "in-target", "--", "sh", "-c", "/bin/echo 'Pin-Priority: -1' >> /etc/apt/preferences.d/proposed"]
    apt:
      sources:
        proposed.list:
          source: deb $MIRROR {{release}}-proposed main universe
    write_files:
      userconfig:
        path: /etc/cloud/cloud.cfg.d/99-users.cfg
        content: |
          system_info:
            default_user:
              lock_passwd: False
              plain_text_passwd: [REDACTED]
          users:
            - default
            - name: mtrudel
              groups: sudo
              gecos: Matt
              shell: /bin/bash
              lock-passwd: False
              passwd: [REDACTED]


    The pinning_ entries are simply added to the end of the "late_commands" section.

    For the libvirt instance, you will need to add it to MAAS using the maas CLI tool. For this, you will need to get your MAAS API key from the web UI (click your username, then look under MAAS keys), and run the following commands:

    maas login local   http://localhost:5240/MAAS/  [your MAAS API key]
    maas local pods create type=virsh power_address="qemu+tcp://127.0.1.1/system"

    The pod will be given a name automatically; you'll then be able to use the web interface to "compose" new machines and control them via MAAS. If you want to remotely use the systems' Spice graphical console, you may need to change settings for the VM to allow Spice connections on all interfaces, and power it off and on again.
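    
    To confirm that the pod was registered, the CLI can also list it (in MAAS 2.4 the endpoint is "pods"):

    maas local pods read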


    Setting up the client

    Deployed hosts are now reachable normally over SSH by using their fully-qualified name, and specifying to use the ubuntu user (or another user you already configured):

    ssh ubuntu@vocal-toad.maas.example.com

    There is an inconvenience with using MAAS to control virtual machines like this: they are easy to reinstall, so their host keys will change frequently if you access them via SSH. There's a way around that, using a specially crafted ssh_config (~/.ssh/config). Here, I'm sharing the relevant parts of the configuration file I use:

    CanonicalDomains home.example.com
    CanonicalizeHostname yes
    CanonicalizeFallbackLocal no
    HashKnownHosts no
    UseRoaming no
    # canonicalize* options seem to break github for some reason
    # I haven't spent much time looking into it, so let's make sure it will go through the
    # DNS resolution logic in SSH correctly.
    Host github.com
      Hostname github.com.
    Host *.maas
      Hostname %h.example.com
    Host *.staging
      Hostname %h.example.com
    Host *.maas.example.com
      User ubuntu
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null

    Host *.staging.example.com
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
    Host *.lxd
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
      ProxyCommand nc $(lxc list -c s4 $(basename %h .lxd) | grep RUNNING | cut -d' ' -f4) %p
    Host *.libvirt
      StrictHostKeyChecking no
      UserKnownHostsFile /dev/null
      ProxyCommand nc $(virsh domifaddr $(basename %h .libvirt) | grep ipv4 | sed 's/.* //; s,/.*,,') %p

    As a bonus, I have included some code that makes it easy to SSH to local libvirt systems or lxd containers.

    The net effect is that I can avoid having the warnings about changed hashes for MAAS-controlled systems and machines in the staging network, but keep getting them for all other systems.

    Now, this means that to reach a host on the MAAS network, a client system only needs to use the short name with .maas tacked on:

    vocal-toad.maas
    And the system will be reachable, and you will not get any warning about known host keys (but do note that this is specific to a sandbox environment; you definitely want to see such warnings in a production environment, as they can indicate that the system you are connecting to might not be the one you think).

    It's not bad, but the goal would be to use just the short names. I am working around this with a tiny script:

    #!/bin/sh
    # Append the .maas suffix to the machine name given as the first argument
    exec ssh "$1.maas"

    I saved this as "sandbox" in ~/bin and made it executable.

    And with this, the lab is ready.

    Usage

    To connect to a deployed system, one can now do the following:


    $ sandbox vocal-toad
    Warning: Permanently added 'vocal-toad.maas.example.com,10.3.99.12' (ECDSA) to the list of known hosts.
    Welcome to Ubuntu Cosmic Cuttlefish (development branch) (GNU/Linux 4.15.0-21-generic x86_64)
    [...]
    ubuntu@vocal-toad:~$
    ubuntu@vocal-toad:~$ id mtrudel
    uid=1000(mtrudel) gid=1000(mtrudel) groups=1000(mtrudel),27(sudo)

    Mobility

    One important point for me was the mobility of the lab. While some of the network infrastructure must remain in place, I am able to undock the Thinkpad X230 (the MAAS server), and connect it via wireless to an external network. It will continue to "manage" or otherwise control VLAN 15 on the wired interface. In these cases, I bring another small configurable switch: a Cisco Catalyst 2960 (8 ports + 1), which is set up with the VLANs. A client could then be connected directly on VLAN 15 behind the MAAS server, and is free to make use of the MAAS proxy service to reach the internet. This allows me to bring the MAAS server along with all its virtual machines, as well as to be able to deploy new systems by connecting them to the switch. Both systems fit easily in a standard laptop bag along with another laptop (a "client").

    All the systems used in the "semi-permanent" form of this lab can easily run on a single home power outlet, so issues are unlikely to arise in mobile form. The smaller switch is rated at 0.5 A, and two laptops do not pull very much power.

    Next steps

    One of the issues that remains with this setup is that it is limited to either starting MAAS images or starting images that are custom built and hooked up to the raspberry pi, which leads to a high effort to integrate new images:
    • Custom (desktop?) images could be loaded into MAAS, to facilitate starting a desktop build.
    • Automate customizing installed packages based on tags applied to the machines.
      • juju would shine there; it can deploy workloads based on available machines in MAAS with the specified tags.
      • Also install a generic system with customized packages, not necessarily single workloads, and/or install extra packages after the initial system deployment.
        • This could be done using chef or puppet, but will require setting up the infrastructure for it.
      • Integrate automatic installation of snaps.
    • Load new images into the raspberry pi automatically for netboot / preseeded installs
      • I have scripts for this, but they will take time to adapt
      • Space on such a device is at a premium, there must be some culling of old images
    on May 16, 2018 10:47 PM

    Video Channel Updates

    Jonathan Carter

    Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

    I started out uploading to my YouTube channel, which had been dormant for a really long time, and then last week I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

    I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. I’m not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing Debian meeting videos like the DebConf videos.

    My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

    • Video Logs: Almost like a blog, in video format.
    • Howto: Howto videos.
    • Debian Package of the Day: Exploring packages in the Debian archive.
    • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
    • What’s the Difference: Just comparing 2 or more things.
    • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

    It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

    I also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

    You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

    Other than that I’m open to ideas, thanks for reading :)

    on May 16, 2018 06:19 PM

    May 15, 2018

    Like each month, here comes a report about the work of paid contributors to Debian LTS.

    Individual reports

    In April, about 183 work hours have been dispatched among 13 paid contributors. Their reports are available:

    • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for May).
    • Antoine Beaupré did 12h.
    • Ben Hutchings did 17 hours (out of 15h allocated + 2 remaining hours).
    • Brian May did 10 hours.
    • Chris Lamb did 16.25 hours.
    • Emilio Pozuelo Monfort did 11.5 hours (out of 16.25 hours allocated + 5 remaining hours, thus keeping 9.75 extra hours for May).
    • Holger Levsen did nothing (out of 16.25 hours allocated + 16.5 hours remaining, thus keeping 32.75 extra hours for May). He did not get hours allocated for May and is expected to catch up.
    • Hugo Lefeuvre did 20.5 hours (out of 16.25 hours allocated + 4.25 remaining hours).
    • Markus Koschany did 16.25 hours.
    • Ola Lundqvist did 11 hours (out of 14 hours allocated + 9.5 remaining hours, thus keeping 12.5 extra hours for May).
    • Roberto C. Sanchez did 7 hours (out of 16.25 hours allocated + 15.75 hours remaining, but immediately gave back the 25 remaining hours).
    • Santiago Ruano Rincón did 8 hours.
    • Thorsten Alteholz did 16.25 hours.

    Evolution of the situation

    The number of sponsored hours did not change. But a few sponsors interested in having more than 5 years of support should join LTS next month, since being an LTS sponsor was a pre-requisite to benefit from extended LTS support. I did update Freexian’s website to show this as a benefit offered to LTS sponsors.

    The security tracker currently lists 20 packages with a known CVE and the dla-needed.txt file 16. At two weeks from Wheezy’s end-of-life, the number of open issues is close to a historical low.

    Thanks to our sponsors

    New sponsors are in bold.


    on May 15, 2018 03:32 PM

    May 14, 2018

    I realized I haven’t been putting many videos online recently. As such, I have started recording some instructional and coaching videos that I am putting online that I hope are useful to you folks.

    To get started, I wanted to touch on the topic of handling failure and poor decisions in a way that helps to identify pragmatic lessons and leads toward better outcomes. This video introduces the issue, delves into how to unpick and understand the components of failure, and offers some practical recommendations for concrete next steps after this assessment.

    Here is the video:


    The post Video: How to Manage Failure and Poor Decisions – A Practical Guide appeared first on Jono Bacon.

    on May 14, 2018 09:12 PM

    The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

    The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

    People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

    OSCAL organizer urgently looking for an Apple MacBook PSU

    In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

    If only Apple used PowerPole...

    Why batteries?

    The first question many people asked is why use batteries and not a power supply. There are two answers for this: portability and availability. Many hams like to operate their radios away from their home sometimes. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

    Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

    Why PowerPole?

    Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

    WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

    Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

    The pen is mightier than the sword, but what about the crimper?

    The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

    Here are some packets of PowerPoles in every size:

    Example cables

    It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

    Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

    Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

    Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

    Distributing power between multiple devices

    There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

    In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

    Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

    Need help crimping your cables?

    If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

    I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

    on May 14, 2018 07:25 PM
    Plans for Ubuntu Studio 18.10 – Cosmic Cuttlefish For Ubuntu 18.10, we have been starting to think outside-the-box. There is something to be said of remaining with what you have and refining it, but staying in one spot can lead quickly to stagnation. Coming up with new ideas and progressing forward with those ideas is […]
    on May 14, 2018 06:44 PM
    Here is the fifth issue of This Week in Lubuntu Development. You can read the previous issue here. Changes General Lubuntu 18.04 was released! Some work was done on the Lubuntu Manual by Lubuntu contributor Lyn Perrine and Lubuntu Translations Team Lead Marcin Mikołajczak. You can see the commits they have made here. We need […]
    on May 14, 2018 02:56 AM

    May 12, 2018

    11 years of Ubuntu membership

    Andrea Corbellini

    It's been 11 years and 1 month since I was awarded with official Ubuntu membership. I will never forget that day: as a kid I had to write about myself on IRC, in front of the Community Council members and answer their questions in a language that was not my primary one. I must confess that I was a bit scared that evening, but once I made it, it felt so good. It felt good not just because of the award itself, but rather because that was the recognition that I did something that mattered. I did something useful that other people could benefit from. And for me, that meant a lot.

    So much time has passed since then. So many things have changed both in my life and around me, for better or worse. So many that I cannot even enumerate all of them. Nonetheless, deep inside of me, I still feel like that young kid: curious, always ready to experiment, full of hopes and uncertain (but never scared) about the future.

    Through the years I received the support of a bunch of people who believed in me, and I thank them all. But if today I feel so hopeful it's undoubtedly thanks to one person in particular, a person who holds a special place in my life. A big thank you goes to you.

    on May 12, 2018 09:30 PM

    May 11, 2018

    S11E10 – Ten Little Ladybugs - Ubuntu Podcast

    Ubuntu Podcast from the UK LoCo

    This week we’ve been smashing up a bathroom like rock stars. We discuss the Ubuntu 18.04 (Bionic Beaver) LTS release, serve up some command line love and go over your feedback.

    It’s Season 11 Episode 10 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

    In this week’s show:

    • We discuss what we’ve been up to recently:
      • Mark has been smashing his bathroom.
    • We discuss the Ubuntu 18.04 (Bionic Beaver) LTS release.

    • We share a Command Line Lurve:

      • yes – repeatedly output “y” or specified string for piping into interactive programs
    yes | fsck /var
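    # Aside (not from the show): yes will repeat any string you give it,
    # which makes it handy for generating test input, e.g. three lines:
    yes "hello" | head -n 3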
    
    • And we go over all your amazing feedback – thanks for sending it – please keep sending it!

    • Image credit: Kirstyn Paynter

    That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

    on May 11, 2018 02:00 PM

    At Google I/O 2018, one of the presentations was on What’s new in Android apps for Chrome OS (Google I/O ’18). The third and most exciting developer tool shown in the presentation, was the ability to run graphical Linux apps on Chrome OS. Here is a screenshot of a native Linux terminal application, as shown in the presentation.

    The presenter was so excited that he said it would knock the socks off the participants, and they had arranged a giveaway of socks. Actual socks swag for those attending the presentation 8-).

    The way that they get the GUI apps from the LXD container to appear on the screen is similar to what is described in

    How to run graphics-accelerated GUI apps in LXD containers on your Ubuntu desktop

    Project Crostini

    Project Crostini is the Chrome OS project to add support to run Linux GUI apps on Chrome OS.

    The components that facilitate Project Crostini can be found at https://github.com/lstoll/cros-crostini. That page has instructions for those who wanted to enable running Linux GUI apps on Chrome OS while Project Crostini was still under development. Lincoln Stoll dissected the source of Chrome OS and created a helpful list of the involved repositories.

    The basic component is the Chrome OS Virtual Machine Monitor (crosvm), which runs untrusted operating systems through Linux’s KVM interface. The Linux distribution runs in a VM. The test repositories make reference to the X server, XWayland and Wayland. There is a repository called sommelier, which is a nested Wayland compositor with X11 forwarding support. It needs more searching to figure out where that source code ended up in the Chrome OS repository and what is actually being used.

    Update #1: Here are the vm_tools in Chrome OS. They include garcon, a service that gets added in the container and communicates with another service outside of the container (vm_concierge).

    What is important is that LXD runs in this VM and is configured to launch a machine container with a Linux distribution. We will go into this in depth.

    LXD in Project Crostini

    Here is the file that does the initial configuration of the LXD service. It preseeds LXD with the following configuration.

    1. It uses a storage pool with the btrfs filesystem.
    2. It sets up a private bridge for networking.
    3. It configures the default LXD profile with relevant settings that will be applied to the container when it gets created.
      1. The container will not autostart when the Chromebook is restarted. It will get started manually.
      2. There will be private networking.
      3. The directory /opt/google/cros-containers of the host gets shared into the container as both /opt/google/cros-containers and /opt/google/garcon.
      4. The container will be able to get the IP address of the host from the file /dev/.host_ip (inside the container).
      5. The Wayland socket of the VM is shared to the container. This means that GUI applications that run in the container, can appear in the X server running in the VM of Chrome OS.
      6. The /dev/wl0 device file of the host is shared into the container as /dev/wl0, with permissions 0666. That’s the wireless interface.
    # Storage pools
    storage_pools:
    - name: default
      driver: btrfs
      config:
        source: /mnt/stateful/lxd/storage-pools/default
    
    # Network
    # IPv4 address is configured by the host.
    networks:
    - name: lxdbr0
      type: bridge
      config:
        ipv4.address: none
        ipv6.address: none
    
    # Profiles
    profiles:
    - name: default
      config:
        boot.autostart: false
      devices:
        root:
          path: /
          pool: default
          type: disk
        eth0:
          nictype: bridged
          parent: lxdbr0
          type: nic
        cros_containers:
          path: /opt/google/cros-containers
          source: /opt/google/cros-containers
          type: disk
        garcon:
          path: /opt/google/garcon
          source: /opt/google/cros-containers
          type: disk
        host-ip:
          path: /dev/.host_ip
          source: /run/host_ip
          type: disk
        wayland-sock:
          path: /dev/.wayland-0
          source: /run/wayland-0
          type: disk
        wl0:
          source: /dev/wl0
          type: unix-char
          mode: 0666
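
    If you want to experiment with a similar preseed on your own machine, LXD can read one on standard input. A word of warning: this reconfigures the local LXD, so it is best tried in a throwaway VM. Here, preseed.yaml is a hypothetical local copy of the configuration above.

    $ lxd init --preseed < preseed.yaml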

    The btrfs storage pool itself gets created in termina-lxd-scripts/files/stateful_setup.sh,

     mkfs.btrfs /dev/vdb || true # The disk may already be formatted.
     mount -o user_subvol_rm_allowed /dev/vdb /mnt/stateful

    With the completed LXD configuration, let’s see how the container gets created. It happens in termina-lxd-scripts/files/run_container.sh. Specifically,

    1. It configures an LXD remote URL, https://storage.googleapis.com/cros-containers, that has the container image. It is accessible through the simplestreams protocol.
    2. It launches the container image with
      lxc launch google:debian/stretch

    How to try google:debian/stretch on our own LXD installation

    Let’s delve deeper into the container image that is used in Chrome OS. For this, we add the LXD remote and then launch the container.

    First, let’s add the LXD remote.

    $ lxc remote add google https://storage.googleapis.com/cros-containers --protocol=simplestreams

    Let’s verify that it has been added.

    $ lxc remote list
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | NAME            | URL                                            | PROTOCOL      | AUTH TYPE | PUBLIC | STATIC |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | google          | https://storage.googleapis.com/cros-containers | simplestreams |           | YES    | NO     |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | images          | https://images.linuxcontainers.org             | simplestreams |           | YES    | NO     |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | local (default) | unix://                                        | lxd           | tls       | NO     | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | ubuntu          | https://cloud-images.ubuntu.com/releases       | simplestreams |           | YES    | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+
    | ubuntu-daily    | https://cloud-images.ubuntu.com/daily          | simplestreams |           | YES    | YES    |
    +-----------------+------------------------------------------------+---------------+-----------+--------+--------+

    What’s in the google: container image repository?

    $ lxc image list google:
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+
    | ALIAS                   | FINGERPRINT  | PUBLIC | DESCRIPTION                                           | ARCH   | SIZE     | UPLOAD DATE                  |
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+
    | debian/stretch (3 more) | 706f2390a7f6 | yes    | Debian for Chromium OS stretch amd64 (20180504_22:19) | x86_64 | 194.82MB | May 4, 2018 at 12:00am (UTC) |
    +-------------------------+--------------+--------+-------------------------------------------------------+--------+----------+------------------------------+

    It is a single image for x86_64, based on Debian Stretch (20180504_22:19).

    Let’s look again at the details of the specific container image from Google.

    $ lxc image show google:debian/stretch
    auto_update: false
    properties:
     architecture: amd64
     description: Debian for Chromium OS stretch amd64 (20180504_22:19)
     os: Debian for Chromium OS
     release: stretch
     serial: "20180504_22:19"
    public: true

    Compare those details with the stock debian/stretch container image,

    $ lxc image show images:debian/stretch
    auto_update: false
    properties:
     architecture: amd64
     description: Debian stretch amd64 (20180511_05:25)
     os: Debian
     release: stretch
     serial: "20180511_05:25"
    public: true

    Can we then get detailed info of the Google container image?

    $ lxc image info google:debian/stretch
    Fingerprint: 706f2390a7f67655df8d0d5d46038ed993ad28cb161648781fbd60af4b52dd76
    Size: 194.82MB
    Architecture: x86_64
    Public: yes
    Timestamps:
     Created: 2018/05/04 00:00 UTC
     Uploaded: 2018/05/04 00:00 UTC
     Expires: never
     Last used: never
    Properties:
     serial: 20180504_22:19
     description: Debian for Chromium OS stretch amd64 (20180504_22:19)
     os: Debian for Chromium OS
     release: stretch
     architecture: amd64
    Aliases:
     - debian/stretch/default
     - debian/stretch/default/amd64
     - debian/stretch
     - debian/stretch/amd64
    Cached: no
    Auto update: disabled

    Compare those details with the stock debian/stretch container image.

    $ lxc image info images:debian/stretch
    Fingerprint: 07341ea710a44508c12e5b3b437bd13fa334e56b3c4e2808c32fd7e6b12df8d1
    Size: 110.22MB
    Architecture: x86_64
    Public: yes
    Timestamps:
     Created: 2018/05/11 00:00 UTC
     Uploaded: 2018/05/11 00:00 UTC
     Expires: never
     Last used: never
    Properties:
     os: Debian
     release: stretch
     architecture: amd64
     serial: 20180511_05:25
     description: Debian stretch amd64 (20180511_05:25)
    Aliases:
     - debian/stretch/default
     - debian/stretch/default/amd64
     - debian/9/default
     - debian/9/default/amd64
     - debian/stretch
     - debian/stretch/amd64
     - debian/9
     - debian/9/amd64
    Cached: no
    Auto update: disabled

    Up to now we have learned that the Google debian/stretch container image carries roughly 85MB of extra (compressed) content: 194.82MB versus 110.22MB for the stock image.

    It’s time to launch a container with google:debian/stretch!

    $ lxc launch google:debian/stretch chrome-os-linux
    Creating chrome-os-linux
    Starting chrome-os-linux 
    $

    Now, get a shell into this container.

    $ lxc exec chrome-os-linux bash
    root@chrome-os-linux:~#

    There is no non-root account,

    root@chrome-os-linux:~# ls /home/
    root@chrome-os-linux:~#
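
    If you would like a regular user to poke around with, you can create one yourself. This is just a local experiment (the username is made up), not what the Chrome OS tooling does:

    root@chrome-os-linux:~# adduser --disabled-password --gecos "" tester
    root@chrome-os-linux:~# su - tester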

    Differences with stock debian/stretch image

    These are the Chrome OS-specific packages in the container image. They are architecture-independent packages (architecture: all).

    ii cros-adapta 0.1 all Chromium OS GTK Theme This package provides symlinks
    ii cros-apt-config 0.12 all APT config for Chromium OS integration. This package
    ii cros-garcon 0.10 all Chromium OS Garcon Bridge. This package provides the
    ii cros-guest-tools 0.12 all Metapackage for Chromium OS integration. This package has
    ii cros-sommelier 0.11 all sommelier base package. This package installs unitfiles
    ii cros-sommelier-config 0.11 all sommelier config for Chromium OS integration. This
    ii cros-sudo-config 0.10 all sudo config for Chromium OS integration. This package
    ii cros-systemd-overrides 0.10 all systemd overrides for running under Chromium OS. This
    ii cros-ui-config 0.11 all UI integration for Chromium OS This package installs
    ii cros-unattended-upgrades 0.10 all Unattended upgrades config. This package installs an
    ii cros-wayland 0.10 all Wayland extras for virtwl in Chromium OS. This package

    There are 305 additional packages in total in the Chrome OS container image of Debian stretch, compared to the stock Debian Stretch image.
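
    A list like the one below can be reproduced by diffing the package sets of the two images; a sketch, assuming a second container named stock-stretch launched from the stock image:

    $ lxc launch images:debian/stretch stock-stretch
    $ lxc exec chrome-os-linux -- dpkg-query -W -f '${binary:Package}\n' | sort > google.txt
    $ lxc exec stock-stretch -- dpkg-query -W -f '${binary:Package}\n' | sort > stock.txt
    $ comm -23 google.txt stock.txt    # packages only in the Google image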

    adwaita-icon-theme
    apt-transport-https
    apt-utils
    at-spi2-core
    bash-completion
    ca-certificates
    cpp
    cpp-6
    cros-adapta
    cros-apt-config
    cros-garcon
    cros-guest-tools
    cros-sommelier
    cros-sommelier-config
    cros-sudo-config
    cros-systemd-overrides
    cros-ui-config
    cros-unattended-upgrades
    cros-wayland
    curl
    dbus-x11
    dconf-cli
    dconf-gsettings-backend:amd64
    dconf-service
    desktop-file-utils
    dh-python
    distro-info-data
    fontconfig
    fontconfig-config
    fonts-croscore
    fonts-dejavu-core
    fonts-roboto
    fonts-roboto-hinted
    glib-networking:amd64
    glib-networking-common
    glib-networking-services
    gnome-icon-theme
    gsettings-desktop-schemas
    gtk-update-icon-cache
    hicolor-icon-theme
    i965-va-driver:amd64
    less
    libapt-inst2.0:amd64
    libasound2:amd64
    libasound2-data
    libasound2-plugins:amd64
    libasyncns0:amd64
    libatk-bridge2.0-0:amd64
    libatk1.0-0:amd64
    libatk1.0-data
    libatspi2.0-0:amd64
    libauthen-sasl-perl
    libavahi-client3:amd64
    libavahi-common-data:amd64
    libavahi-common3:amd64
    libavcodec57:amd64
    libavresample3:amd64
    libavutil55:amd64
    libcairo-gobject2:amd64
    libcairo2:amd64
    libcolord2:amd64
    libcroco3:amd64
    libcrystalhd3:amd64
    libcups2:amd64
    libcurl3:amd64
    libcurl3-gnutls:amd64
    libdatrie1:amd64
    libdconf1:amd64
    libdrm-amdgpu1:amd64
    libdrm-intel1:amd64
    libdrm-nouveau2:amd64
    libdrm-radeon1:amd64
    libdrm2:amd64
    libegl1-mesa:amd64
    libencode-locale-perl
    libepoxy0:amd64
    libffi6:amd64
    libfile-basedir-perl
    libfile-desktopentry-perl
    libfile-listing-perl
    libfile-mimeinfo-perl
    libflac8:amd64
    libfont-afm-perl
    libfontconfig1:amd64
    libfontenc1:amd64
    libfreetype6:amd64
    libgail-common:amd64
    libgail18:amd64
    libgbm1:amd64
    libgdbm3:amd64
    libgdk-pixbuf2.0-0:amd64
    libgdk-pixbuf2.0-common
    libgl1-mesa-dri:amd64
    libgl1-mesa-glx:amd64
    libglapi-mesa:amd64
    libglib2.0-0:amd64
    libglib2.0-data
    libgmp10:amd64
    libgnutls30:amd64
    libgomp1:amd64
    libgraphite2-3:amd64
    libgsm1:amd64
    libgtk-3-0:amd64
    libgtk-3-bin
    libgtk-3-common
    libgtk2.0-0:amd64
    libgtk2.0-bin
    libgtk2.0-common
    libharfbuzz0b:amd64
    libhogweed4:amd64
    libhtml-form-perl
    libhtml-format-perl
    libhtml-parser-perl
    libhtml-tagset-perl
    libhtml-tree-perl
    libhttp-cookies-perl
    libhttp-daemon-perl
    libhttp-date-perl
    libhttp-message-perl
    libhttp-negotiate-perl
    libice6:amd64
    libicu57:amd64
    libidn2-0:amd64
    libio-html-perl
    libio-socket-ssl-perl
    libipc-system-simple-perl
    libisl15:amd64
    libjack-jackd2-0:amd64
    libjbig0:amd64
    libjpeg62-turbo:amd64
    libjson-glib-1.0-0:amd64
    libjson-glib-1.0-common
    liblcms2-2:amd64
    libldap-2.4-2:amd64
    libldap-common
    libllvm3.9:amd64
    libltdl7:amd64
    liblwp-mediatypes-perl
    liblwp-protocol-https-perl
    libmailtools-perl
    libmp3lame0:amd64
    libmpc3:amd64
    libmpdec2:amd64
    libmpfr4:amd64
    libnet-dbus-perl
    libnet-http-perl
    libnet-smtp-ssl-perl
    libnet-ssleay-perl
    libnettle6:amd64
    libnghttp2-14:amd64
    libnuma1:amd64
    libogg0:amd64
    libopenjp2-7:amd64
    libopus0:amd64
    liborc-0.4-0:amd64
    libp11-kit0:amd64
    libpango-1.0-0:amd64
    libpangocairo-1.0-0:amd64
    libpangoft2-1.0-0:amd64
    libpciaccess0:amd64
    libperl5.24:amd64
    libpixman-1-0:amd64
    libpng16-16:amd64
    libpolkit-agent-1-0:amd64
    libpolkit-backend-1-0:amd64
    libpolkit-gobject-1-0:amd64
    libproxy1v5:amd64
    libpsl5:amd64
    libpulse0:amd64
    libpulsedsp:amd64
    libpython3-stdlib:amd64
    libpython3.5-minimal:amd64
    libpython3.5-stdlib:amd64
    libreadline7:amd64
    librest-0.7-0:amd64
    librsvg2-2:amd64
    librsvg2-common:amd64
    librtmp1:amd64
    libsamplerate0:amd64
    libsasl2-2:amd64
    libsasl2-modules:amd64
    libsasl2-modules-db:amd64
    libsensors4:amd64
    libshine3:amd64
    libsm6:amd64
    libsnappy1v5:amd64
    libsndfile1:amd64
    libsoup-gnome2.4-1:amd64
    libsoup2.4-1:amd64
    libsoxr0:amd64
    libspeex1:amd64
    libspeexdsp1:amd64
    libsqlite3-0:amd64
    libssh2-1:amd64
    libssl1.1:amd64
    libswresample2:amd64
    libtasn1-6:amd64
    libtdb1:amd64
    libtext-iconv-perl
    libthai-data
    libthai0:amd64
    libtheora0:amd64
    libtie-ixhash-perl
    libtiff5:amd64
    libtimedate-perl
    libtwolame0:amd64
    libtxc-dxtn-s2tc:amd64
    libunistring0:amd64
    liburi-perl
    libva-drm1:amd64
    libva-x11-1:amd64
    libva1:amd64
    libvdpau-va-gl1:amd64
    libvdpau1:amd64
    libvorbis0a:amd64
    libvorbisenc2:amd64
    libvpx4:amd64
    libwavpack1:amd64
    libwayland-client0:amd64
    libwayland-cursor0:amd64
    libwayland-egl1-mesa:amd64
    libwayland-server0:amd64
    libwebp6:amd64
    libwebpmux2:amd64
    libwebrtc-audio-processing1:amd64
    libwrap0:amd64
    libwww-perl
    libwww-robotrules-perl
    libx11-protocol-perl
    libx11-xcb1:amd64
    libx264-148:amd64
    libx265-95:amd64
    libxaw7:amd64
    libxcb-dri2-0:amd64
    libxcb-dri3-0:amd64
    libxcb-glx0:amd64
    libxcb-present0:amd64
    libxcb-render0:amd64
    libxcb-shape0:amd64
    libxcb-shm0:amd64
    libxcb-sync1:amd64
    libxcb-xfixes0:amd64
    libxcomposite1:amd64
    libxcursor1:amd64
    libxdamage1:amd64
    libxfixes3:amd64
    libxft2:amd64
    libxi6:amd64
    libxinerama1:amd64
    libxkbcommon0:amd64
    libxml-parser-perl
    libxml-twig-perl
    libxml-xpathengine-perl
    libxml2:amd64
    libxmu6:amd64
    libxpm4:amd64
    libxrandr2:amd64
    libxrender1:amd64
    libxshmfence1:amd64
    libxt6:amd64
    libxtst6:amd64
    libxv1:amd64
    libxvidcore4:amd64
    libxxf86dga1:amd64
    libxxf86vm1:amd64
    libzvbi-common
    libzvbi0:amd64
    lsb-release
    mesa-va-drivers:amd64
    mesa-vdpau-drivers:amd64
    mime-support
    openssl
    perl
    perl-modules-5.24
    perl-openssl-defaults:amd64
    policykit-1
    publicsuffix
    pulseaudio
    pulseaudio-utils
    python-apt-common
    python3
    python3-apt
    python3-minimal
    python3.5
    python3.5-minimal
    readline-common
    rename
    rtkit
    sgml-base
    shared-mime-info
    sudo
    tcpd
    ucf
    unattended-upgrades
    unzip
    va-driver-all:amd64
    vdpau-driver-all:amd64
    x11-common
    x11-utils
    x11-xserver-utils
    xdg-user-dirs
    xdg-utils
    xkb-data
    xml-core
    xz-utils

    The binary files are meant to be found at /opt/google/cros-containers/. For example,

    root@chrome-os-linux:~# cat /usr/bin/gnome-www-browser 
    #!/bin/bash
    /opt/google/cros-containers/bin/garcon --client --url "$@"
    root@chrome-os-linux:~#
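
    To see which other wrappers in the image point at the shared Chrome OS binaries, a quick grep works (a sketch; this just searches scripts in /usr/bin for the shared path):

    root@chrome-os-linux:~# grep -rl /opt/google/cros-containers /usr/bin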

    Obviously, these files are not found in the container that I just launched; they are provided by the Chrome OS host on a Chromebook.

    I did not find binaries in the container to launch a Linux terminal application. I assume they would be found under /opt/google/cros-containers/bin/ as well.

    The Chrome OS deb package repository

    Here is the repository,

    root@chrome-os-linux:~# cat /etc/apt/sources.list.d/cros.list 
    deb https://storage.googleapis.com/cros-packages stretch main
    root@chrome-os-linux:~#

    And here are the details of the packages,

    $ curl https://storage.googleapis.com/cros-packages/dists/stretch/main/binary-amd64/Packages 
    Package: cros-adapta
    Version: 0.1
    Architecture: all
    Recommends: libgtk2.0-0, libgtk-3-0
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-adapta/cros-adapta_0.1_all.deb
    Size: 792
    SHA256: 885783a862f75fb95e0d389c400b9463c9580a84e9ec54c1ed2c8dbafa1ccbc5
    SHA1: 23cbf5f11724d971592da9db9a17b2ae1c28dfad
    MD5sum: 27fdba7a27c84caa4014a69546a83a6b
    Description: Chromium OS GTK Theme This package provides symlinks
     which link the bind-mounted theme into the correct location in the
     container.
    Homepage: https://chromium.googlesource.com/chromiumos/third_party/cros-adapta/
    Built-Using: Bazel
    
    Package: cros-apt-config
    Version: 0.12
    Architecture: all
    Depends: apt-transport-https
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-apt-config/cros-apt-config_0.12_all.deb
    Size: 7358
    SHA256: d6d21bdf348e6510a9c933f8aacde7ac4054b6e2f56d5e13e9772800fab13e9e
    SHA1: 51b23541fc8029725966bf45f0a98075cbb01dfa
    MD5sum: b3de74124b2947e0ad819416ce7eed78
    Description: APT config for Chromium OS integration. This package
     installs the keyring for the Chromium OS integration apt repo, the
     source list, and APT preferences.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-garcon
    Version: 0.10
    Architecture: all
    Depends: desktop-file-utils, xdg-utils
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-garcon/cros-garcon_0.10_all.deb
    Size: 1330
    SHA256: 32430b920770a8f6d5e0f271de340e87afb32bd9c2a4ecc4e470318e37033672
    SHA1: 46f24826d9a0eaab8ec1617d173c48f15fedd937
    MD5sum: 4ab2fa3b50ec42bddf6aeeb93c1ef202
    Description: Chromium OS Garcon Bridge. This package provides the
     systemd unit files for Garcon, the bridge to Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-guest-tools
    Version: 0.12
    Architecture: all
    Depends: cros-garcon, cros-sommelier
    Recommends: bash-completion, cros-apt-config, cros-sommelier-config,
     cros-sudo-config, cros-systemd-overrides, cros-ui-config,
     cros-unattended-upgrades, cros-wayland, curl, dbus-x11, pulseaudio,
     unzip, vim
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-guest-tools/cros-guest-tools_0.12_all.deb
    Size: 10882
    SHA256: 5f0a2521351b22fe3b537431dec59740c6cc96771372432fe3c7a88a5939884d
    SHA1: d37aab929c0c7011dd6b730bdc2052d7e232d577
    MD5sum: 5d9fafa14a4f88108f716438c45cf390
    Description: Metapackage for Chromium OS integration. This package has
     dependencies on all other packages necessary for Chromium OS
     integration.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sommelier
    Version: 0.11
    Architecture: all
    Depends: libpam-systemd
    Recommends: x11-utils, x11-xserver-utils, xkb-data
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sommelier/cros-sommelier_0.11_all.deb
    Size: 1552
    SHA256: 522fe94157708d1a62c42a404bcffe537205fd7ea7b0d4a1ed98de562916c146
    SHA1: ec51d2d8641d9234ccffc0d61a03f8f467205c73
    MD5sum: 8ed001a623ae74302d7046e4187a71c7
    Description: sommelier base package. This package installs unitfiles
     and support scripts for sommelier.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sommelier-config
    Version: 0.11
    Architecture: all
    Depends: libpam-systemd, cros-sommelier
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sommelier-config/cros-sommelier-config_0.11_all.deb
    Size: 1246
    SHA256: edbba3817fd3cdb41ea2f008ea4279f2e276580d5b1498c942965c3b00b4bff1
    SHA1: 762ca85f3f9cea87566f42912fd6077c0071e740
    MD5sum: 767a8a8c9b336ed682b95d9dd49fbde5
    Description: sommelier config for Chromium OS integration. This
     package installs default configuration for sommelier. that is ideal
     for integration with Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-sudo-config
    Version: 0.10
    Architecture: all
    Depends: sudo
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-sudo-config/cros-sudo-config_0.10_all.deb
    Size: 810
    SHA256: d9c1e2b677dadd1dd20da8499538d9ee2e4c2bc44b16de8aaed0f1e747f371a3
    SHA1: 07b961e847112da07c6a24b9f154be6fed13cca1
    MD5sum: 37f54f1e727330ab092532a5fc5300fe
    Description: sudo config for Chromium OS integration. This package
     installs default configuration for sudo to allow passwordless sudo
     access for the sudo group.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-systemd-overrides
    Version: 0.10
    Architecture: all
    Depends: systemd
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-systemd-overrides/cros-systemd-overrides_0.10_all.deb
    Size: 10776
    SHA256: 7b960a84d94be0fbe5b4969c7f8e887ccf3c2adf2b2dc10b5cb4856d30eeaab5
    SHA1: 06dc91e9739fd3d70fa54051a1166c2dfcc591e2
    MD5sum: 16033ff279b2f282c265d5acea3baac6
    Description: systemd overrides for running under Chromium OS. This
     package overrides the default behavior of some core systemd units.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-ui-config
    Version: 0.11
    Architecture: all
    Depends: cros-adapta, dconf-cli, fonts-croscore, fonts-roboto
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-ui-config/cros-ui-config_0.11_all.deb
    Size: 1280
    SHA256: bc1c5513ab67c003a6c069d386a629935cd345b464d13b1dd7847822f98825f3
    SHA1: 4193bd92f9f05085d480de09f2c15fe93542f272
    MD5sum: 7e95b56058030484b6393d05767dea04
    Description: UI integration for Chromium OS This package installs
     default configuration for GTK+ that is ideal for integration with
     Chromium OS.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-unattended-upgrades
    Version: 0.10
    Architecture: all
    Depends: unattended-upgrades
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: misc
    Filename: pool/main/c/cros-unattended-upgrades/cros-unattended-upgrades_0.10_all.deb
    Size: 1008
    SHA256: 33057294098edb169e03099b415726a99fb1ffbdf04915a3acd69f72cf4c84e8
    SHA1: ec575f7222c5008487c76e95d073cc81107cad0b
    MD5sum: ae30c3a11da61346a710e4432383bbe0
    Description: Unattended upgrades config. This package installs an
     unattended upgrades config for Chromium OS guest containers.
    Homepage: https://chromium.org
    Built-Using: Bazel
    
    Package: cros-wayland
    Version: 0.10
    Architecture: all
    Maintainer: The Chromium OS Authors <chromium-os-dev@chromium.org>
    Priority: optional
    Section: x11
    Filename: pool/main/c/cros-wayland/cros-wayland_0.10_all.deb
    Size: 886
    SHA256: 06d26a150e69bda950b0df166328a2dae60ac0a0840f432b26dc127b842dd1ef
    SHA1: 48bd118c497b0a4090b126d7c3d8ec3aacced504
    MD5sum: a495d16e5212535571adfd7820b733c2
    Description: Wayland extras for virtwl in Chromium OS. This package
     provides config files and udev rules to improve the Wayland experience
     under CrOS.
    Homepage: https://chromium.org
    Built-Using: Bazel

    Conclusion

    It is quite neat that Chrome OS uses machine containers with LXD to maintain a Linux installation.

    Apart from the apparent benefits for Chromebook users, it makes sense to have a look at the implementation in order to figure out how to create a sort of lightweight VirtualBox clone (the ability to run a desktop environment of any Linux distribution) that uses containers and LXD.

    on May 11, 2018 01:14 PM

    May 09, 2018

    The Kubuntu Team is happy to announce that Kubuntu 18.04 LTS has been released, featuring the beautiful KDE Plasma 5.12 LTS : simple by default, powerful when needed.

    Codenamed “Bionic Beaver”, Kubuntu 18.04 continues our tradition of giving you Friendly Computing by integrating the latest and greatest open source technologies into a high-quality, easy-to-use Linux distribution.

    The team has been hard at work through this cycle, introducing new features and fixing bugs.

    Under the hood, there have been updates to many core packages, including a new 4.15-based kernel, KDE Frameworks 5.44, Plasma 5.12 LTS and KDE Applications 17.12.3.

    Kubuntu has seen some exciting improvements, with newer versions of Qt, updates to major packages like Krita, Kstars, KDE-Connect, Firefox and LibreOffice, and stability improvements to KDE Plasma. And we have new applications that we’re so proud of: latte-dock, Falkon, kio-gdrive and Peruse, a KDE comic reader. Kio-gdrive enables KIO-aware applications (such as Dolphin, Kate or Gwenview) to access and edit Google Drive files on the cloud.

    We’ve made some notable changes since 16.04 LTS. VLC is the default media player, and Cantata Qt5 the default music player. Muon is now shipped by default for those who prefer a package manager as an alternative to the Plasma Discover software store, both of which have seen major improvements.

    We’ve made some important but smaller changes since 16.04 LTS and 17.10. From 18.04, Kubuntu switches to a dark Breeze Plasma theme by default, changes some default settings, and offers a minimal install option on the ISO. This removes KDE PIM applications, LibreOffice, Cantata and mpd, and some additional internet and media applications. At present a full Plasma Desktop is left in place, plus basic applications and utilities. Firefox as a browser, and VLC as a media player, are also retained.

    Double-click is now the default for opening files. To change back to single-click, go to System Settings: Mouse Controls.

    For a list of other application updates, upgrading notes and known bugs be sure to read our release notes.

    Download 18.04 or read about how to upgrade from 17.10.

    Note: Upgrades from 16.04 LTS may not be enabled until a few days after the 18.04.1 release expected in late July.

    on May 09, 2018 05:36 AM

    May 08, 2018

    Cue the Cosmic Cuttlefish

    Mark Shuttleworth

    With our castor now out for all to enjoy, and the Twitterverse delighted with the new minimal desktop and smooth snap integration, it’s time to turn our attention to the road ahead to 20.04 LTS, and I’m delighted to say that we’ll kick off that journey with the Cosmic Cuttlefish, soon to be known as Ubuntu 18.10.

    Each of us has our own ideas of how the free stack will evolve in the next two years. And the great thing about Ubuntu is that it doesn’t reflect just one set of priorities, it’s an aggregation of all the things our community cares about. Nevertheless I thought I’d take the opportunity early in this LTS cycle to talk a little about the thing I’m starting to care more about than any one feature, and that’s security.

    If I had one big thing that I could feel great about doing, systematically, for everyone who uses Ubuntu, it would be improving their confidence in the security of their systems and their data. It’s one of the very few truly unifying themes that crosses every use case.

    It’s extraordinary how diverse the uses are to which the world puts Ubuntu these days. From the heart of the mainframe operation in a major financial firm, to the Raspberry Pi duct-taped to the back of a prototype something in the middle of nowhere, from desktops to clouds to connected things, we are the platform for ambitions great and small. We are stewards of a shared platform, and one of the ways we respond to that diversity is by opening up to let people push forward their ideas, making sure only that they are excellent to each other in the pushing.

    But security is the one thing that every community wants – and it’s something that, on reflection, we can raise the bar even higher on.

    So without further ado: thank you to everyone who helped bring about Bionic, and may you all enjoy working towards your own goals both in and out of Ubuntu in the next two years.

    on May 08, 2018 02:45 PM

    Flisol Bogotá 2018

    Jhosman Lizarazo

    Flisol Bogota 2018

    On April 28, we celebrated the Latin American Free Software Installation Festival (FLISoL) in Bogotá in the best possible way. FLISoL is the biggest event for the dissemination of Free Software in Latin America, with more than 20 countries hosting around 240 recorded events in 2018. The Bogotá edition took place at the Fundación Tecnológica Autónoma de Bogotá FABA (Carrera 14 N° 80 – 35) from 9 a.m. on Saturday, April 28, with free entry. Flisol Bogotá 2018 was one of the largest editions in Latin America by number of attendees.

    The Latin American Free Software Installation Festival is designed for students, academics, business people, workers, civil servants, enthusiasts and the general public, to raise awareness of the philosophy, scope, progress and development of Free Software, and to share with citizens the freedoms and opportunities that Free Software and ICT provide. This was the 14th time it has been held in Bogotá since 2005.

    On this occasion, the festival allowed us to observe that free culture, beyond being a space for installing software distributions, operating systems (Linux) or programs that promote an open Internet and the empowerment of people with respect to technology, is a way to contribute to and build societies.

    More than 1,600 people participated in more than 90 activities, including lectures, workshops, panels, spaces for children (FLISoL Kids), a free-culture cinema, music, origami and much more.

    We had two international speakers, to whom we are very grateful for sharing this wonderful experience: Nuritzi Sanchez, President of the Board of Directors at the GNOME Foundation, founding member and Ecosystem Team Manager at Endless; and Jorge Luis Batista, automation engineer and Cuban enthusiast of OpenStreetMap.

    We greatly appreciate the support provided by the international Ubuntu community and by the Canonical team for making this possible, and we look forward to continuing to help in Colombia more and more in the future.


    We also had an installation area, where the most-installed operating system was Ubuntu 18.04 LTS Bionic.

    The stand of the Ubuntu Colombia community:

    All festival information is on the website: www.flisolbogota.org and you can see the programme at: www.flisolbogota.org/programacion

    Any additional information may be requested by email at info@flisolbogota.org or via our social networks, Twitter: flisol_bogota or Facebook: Flisol Bogota

    Photos of the Event: https://www.flickr.com/search/?text=flisol%20bogota&sort=date-posted-desc

    You can see all the Twitter reactions, photos and videos here:

    on May 08, 2018 01:49 AM

    May 07, 2018

    My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

    pkg-security team

    I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch from chkrootkit to avoid unexpected noise in quiet mode.

    I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

    Distro Tracker

    While the pace slowed down, I continued to get merge requests. I merged two of them fixing some newcomer bugs.

    I reviewed a merge request suggesting to add a “team search” feature.

    I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

    I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

    Misc Debian work

    Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

    I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

    Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

    Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

    Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.

    Thanks

    See you next month for a new summary of my activities.


    on May 07, 2018 09:17 PM
    Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. This only took under 1h altogether.

    On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change thus was the / to be encrypted.

    Some notes for 2018 to clarify what is needed and what is not needed:
    • Before luksipc, remember to resize existing partitions to leave 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
    • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, a simple apt install git build-essential is all that is needed. The cryptsetup package is already installed.
    • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
    • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2 which I created), and moved /boot/* from the former to latter.
    • I edited /etc/fstab of the encrypted volume to add the /boot partition
    • Mounted as following in /mnt:
      • mount -o bind /dev dev
      • mount -o bind /sys sys
      • mount -t proc proc proc
    • Then:
      • chroot /mnt
      • mount -a # (to mount /boot and /boot/efi)
      • Edited files /etc/crypttab (added one line: root UUID none luks — see the sketch at the end of this post) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them and root=/dev/mapper/root is probably needed).
      • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
      • Rebooted.
    • What I did not need to do:
      • Modify anything in /etc/initramfs-tools.
    If the passphrase input shows on your next boot, but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and faced this.
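
    For reference, here is roughly what the crypttab edit expands to. The device path and UUID below are made-up examples; use whatever blkid reports for your LUKS partition:

    $ sudo blkid /dev/nvme0n1p3
    /dev/nvme0n1p3: UUID="2b0d1a5f-1234-..." TYPE="crypto_LUKS"
    # /etc/crypttab: <name> <device> <keyfile> <options>
    root UUID=2b0d1a5f-1234-... none luks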
    on May 07, 2018 11:08 AM

    May 06, 2018

    My annual Dropbox renewal date was coming up, and I thought to myself “I’m working with servers all the time. I shouldn’t need to pay someone else for this.” I was also knee deep in a math course, so I felt like procrastinating.

    I’m really happy with the result, so I thought I would explain it for anyone else who wants to do the same. Here’s what I was aiming for:

    • Safe, convenient archiving for big files.
    • Instant sync between devices for stuff I’m working on.
    • Access over LAN from home, and over the Internet from anywhere else.
    • Regular, encrypted offsite backups.
    • Compact, low power hardware that I can stick in a closet and forget about.
    • Some semblance of security, at least so a compromised service won’t put the rest of the system at risk.

    The hardware

    I dabbled with a BeagleBoard that I used for an embedded systems course, and I pondered a Raspberry Pi with a case. I decided against both of those, because I wanted something with a bit more wiggle room. And besides, I like having a BeagleBoard free to mess around with now and then.

    In the end, I picked out an Intel NUC, and I threw in an old SSD and a stick of RAM:

    It’s tiny, it’s quiet, and it looks okay too! (Just find somewhere to hide the power brick). My only real complaint is the wifi hardware doesn’t work with older Linux kernels, but that wasn’t a big deal for my needs and I’m sure it will work in the future.

    The software

    I installed Ubuntu Core 16, which is delightful. Installing it is a bit surprising for the uninitiated because there isn’t really an install process: you just clone the image to the drive you want to boot from and you’re done. It’s easier if you do this while the drive is connected to another computer. (I didn’t feel like switching around SATA cables in my desktop, so I needed to write a different OS to a flash drive, boot from that on the NUC, transfer the Ubuntu Core image to there, then dd that image to the SSD. Kind of weird for this use case).

    Now that I figured out how to run it, I’ve been enjoying how this system is designed to minimize the time you need to spend with your device connected to a screen and keyboard like some kind of savage. There’s a simple setup process (configure networking, log in to your Ubuntu One account), and that’s it. You can bury the thing somewhere and SSH to it from now on. In fact, you’re pretty much forced to: you don’t even get a login prompt. Chances are you won’t need to SSH to the system anyway since it keeps itself up to date. As someone who obsesses over loose threads, I’m finding this all very satisfying.

    Although, with that in mind, one important thing: if you haven’t played with Ubuntu for a while, head over to login.ubuntu.com and make sure your SSH keys are up to date. The first time I set it up, I realized I had a bunch of obsolete SSH keys in my account and I had no way to reach the system from the laptop I was using. Fixing that meant changing Ubuntu Core’s writable files from another operating system. (I would love to know if there is a better way).

    The other software

    Okay, using Ubuntu Core is probably a bit weird when I want to run all these servers and I’m probably a little picky, but it’s so elegant! And, happily, there are Snap packages for both Nextcloud and Syncthing. I ended up using both.

    I really like how files you can edit are tucked away in /writable. For this guide, I always refer to things by their full paths under /writable. I found thinking like that spared me from getting lost in files that I couldn’t change, and it helped to emphasize the nature of this system.

    DNS

    Before I get to the fun stuff, there were some networking conundrums I needed to solve.

    First, public DNS. My router has some buttons if you want to use a dynamic DNS service, but I just rolled my own thing. To start off, I added some additional DNS records pointing at my home IP address. My web host has an API for editing DNS rules, so I set up dynamic DNS updates after everything else was working; I will get to that further along.

    Next, my router didn’t support hairpinning (or NAT Loopback), so requests to core.example.com were still resolving to my public IP address, which means way too many hops for sending data around. My ridiculous solution: I’ll run my own DNS server, darnit.

    To get started, check the network configuration in /writable/system-data/etc/netplan/00-snapd-config.yaml. You’ll want to make sure the system requests a static IP address (I used 192.168.1.2) and uses its own nameservers. Mine looks like this:

    network:
      ethernets:
        eth0:
          dhcp4: false
          dhcp6: false
          addresses: [192.168.1.2/24, '2001:1::2/64']
          gateway4: 192.168.1.1
          nameservers:
            addresses: [8.8.8.8, 8.8.4.4]
      version: 2

    After changing the Netplan configuration, use sudo netplan apply (or sudo netplan generate followed by a reboot) to update the system.

    For the actual DNS server, we can install an unofficial snap that provides dnsmasq:

    $ snap install dnsmasq-escoand

    You’ll want to edit /writable/system-data/etc/hosts so the service’s domains resolve to the device’s local IP address:

    127.0.0.1 localhost.localdomain localhost
    ::1 localhost6.localdomain6 localhost6
    
    192.168.1.2 core.example.com
    fe80::96c6:91ff:fe1a:6581 core.example.com

    Now it’s safe to go into your router’s configuration, reserve an IP address for this device, and set it as your DNS server.

    And that solved it.
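
    You can also query the new DNS server directly from another machine (this assumes the snap’s dnsmasq is answering on the standard port 53); it should return the local address from the hosts file:

    $ dig @192.168.1.2 core.example.com +short
    192.168.1.2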

    To check, run tracepath from another computer on your network and the result should be something simple like this:

    $ tracepath core.example.com
     1?: [LOCALHOST] pmtu 1500
     1: core.example.com 0.789ms reached
     1: core.example.com 0.816ms reached
     Resume: pmtu 1500 hops 1 back 1

    While you’re looking at the router, you may as well forward some ports, too. By default you need TCP ports 80 and 443 for Nextcloud, and 22000 for Syncthing.

    Nextcloud

    The Nextcloud snap is fantastic. It already works out of the box: it adds a system service for its copy of Apache on port 80, and it comes with a bunch of scripts for setting up common things like SSL certificates. I wanted to use an external hard drive for its data store, so I needed to configure the mount point for that and grant the necessary permissions for the snap to access removable media.

    Let’s set up that mount point first. These are configured with systemd mount units, so we’ll want to create a file like /writable/system-data/etc/systemd/system/media-data1.mount. You need to tell it how to identify the storage device. (I always give drives nice volume labels when I format them, so it’s easy to use those here). Note that the name of the unit file must correspond to the full name of the mount point:

    [Unit]
    Description=Mount unit for data1
    
    [Mount]
    What=/dev/disk/by-label/data1
    Where=/media/data1
    Type=ext4
    
    [Install]
    WantedBy=multi-user.target
    
    

    One super cool thing here is you can start and stop the mount unit just like any other system service:

    $ sudo systemctl daemon-reload
    $ sudo systemctl start media-data1.mount
    $ sudo systemctl enable media-data1.mount

    Now let’s set up Nextcloud. The code repository for the Nextcloud snap has lots of documentation if you need it.

    $ snap install nextcloud
    $ snap connect nextcloud:removable-media :removable-media
    $ sudo snap run nextcloud.manual-install USERNAME PASSWORD
    $ snap stop nextcloud

    Before we do anything else we need to tell Nextcloud to store its data in /media/data1/nextcloud/, and allow access through the public domain from earlier. To do that, edit /writable/system-data/var/snap/nextcloud/current/nextcloud/config/config.php:

    <?php
    $CONFIG = array (
     'apps_paths' =>
     array (
     …
     ),
     …
     'trusted_domains' =>
     array (
     0 => 'localhost',
     1 => 'core.example.com'
     ),
     'datadirectory' => '/media/data1/nextcloud/data',
     …
    );

    Move the existing data directory to the new location, and restart the service:

    $ snap stop nextcloud
    $ sudo mkdir /media/data1/nextcloud
    $ sudo mv /writable/system-data/var/snap/nextcloud/common/nextcloud/data /media/data1/nextcloud/
    $ snap start nextcloud

    Now you can enable HTTPS. There is a lets-encrypt option (for letsencrypt.org), which is very convenient; the -d flag in the first command does a dry run before you request the real certificate:

    $ sudo snap run nextcloud.enable-https lets-encrypt -d
    $ sudo snap run nextcloud.enable-https lets-encrypt

    At this point you should be able to reach Nextcloud from another computer on your network, or remotely, using the same domain.
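
    A quick way to sanity-check the certificate and the trusted_domains change is to probe it from the command line; an HTTP 200 (or a redirect to the login page) means you’re in business:

    # Request just the response headers over HTTPS.
    $ curl -sI https://core.example.com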

    Syncthing

    If you aren’t me, you can probably stop here and use Nextcloud, but I decided Nextcloud wasn’t quite right for all of my files, so I added Syncthing to the mix. It’s like a peer-to-peer Dropbox, with a somewhat more geeky interface. You can link your devices by globally unique IDs, and they’ll find the best way to connect to each other and automatically sync files between your shared folders. It’s very elegant, but I wasn’t sure about using it without some kind of central repository. This way my systems will sync between each other when they can, but there’s one central device that is always there, ready to send or receive the newest versions of everything.

    Syncthing has a snap, but it is a bit different from Nextcloud, so the package needed a few extra steps. Syncthing, like Dropbox, runs one instance for each user, instead of a monolithic service that serves many users. So, it doesn’t install a system service of its own, and we’ll need to figure that out. First, let’s install the package:

    $ snap install syncthing
    $ snap connect syncthing:home :home
    $ snap run syncthing

    Once you’re satisfied that it starts, you can stop syncthing. Running it like this isn’t very useful yet, but we needed to do it once to create a configuration file.

    So, first, we need to give syncthing a place to put its data, replacing “USERNAME” with your system username:

    $ sudo mkdir /media/data1/syncthing
    $ sudo chown USERNAME:USERNAME /media/data1/syncthing

    Unfortunately, you’ll find that the syncthing application doesn’t have access to /media/data1, and its snap doesn’t support the removable-media interface, so it’s limited to your home folder. But that’s okay, we can solve this by creating a bind mount. Let’s create a mount unit in /writable/system-data/etc/systemd/system/home-USERNAME-syncthing.mount:

    [Unit]
    Description=Mount unit for USERNAME-syncthing
    
    [Mount]
    What=/media/data1/syncthing/USERNAME
    Where=/home/USERNAME/syncthing
    Type=none
    Options=bind
    
    [Install]
    WantedBy=multi-user.target

    (If you’re wondering, yes, systemd figures out that it needs to mount media-data1 before it can create this bind mount, so don’t worry about that).

    $ sudo systemctl daemon-reload
    $ sudo systemctl start home-USERNAME-syncthing.mount
    $ sudo systemctl enable home-USERNAME-syncthing.mount

    Now update Syncthing’s configuration and tell it to put all of its shared folders in that directory. Open /home/USERNAME/snap/syncthing/common/syncthing/config.xml in your favourite editor, and make sure you have something like this:

    <configuration version="27">
      <folder id="default" label="Default Folder" path="/home/USERNAME/syncthing/Sync" type="readwrite" rescanIntervalS="60" fsWatcherEnabled="false" fsWatcherDelayS="10" ignorePerms="false" autoNormalize="true">
        …
      </folder>
      <device id="…" name="core.example.com" compression="metadata" introducer="false" skipIntroductionRemovals="false" introducedBy="">
        <address>dynamic</address>
        <paused>false</paused>
        <autoAcceptFolders>false</autoAcceptFolders>
      </device>
      <gui enabled="true" tls="false" debugging="false">
        <address>192.168.1.2:8384</address>
        …
      </gui>
      <options>
        <defaultFolderPath>/home/USERNAME/syncthing</defaultFolderPath>
      </options>
    </configuration>

    With those changes, Syncthing will create new folders inside /home/USERNAME/syncthing, you can move the default “Sync” folder there as well, and its web interface will be accessible over your local network at http://192.168.1.2:8384. (I’m not enabling TLS here, for two reasons: it’s just the local network, and Nextcloud enables HSTS for the core.example.com domain, so things get confusing when you try to access it like that).

    You can try snap run syncthing again, just to be sure.

    Now we need to add a service file so Syncthing runs automatically. We could create a service that has the User field filled in so it always runs as a certain user, but for this type of service it doesn’t hurt to set it up as a template unit. Happily, Syncthing’s documentation provides a unit file we can borrow, so we don’t need to do much thinking here. You’ll need to create a file called /writable/system-data/etc/systemd/system/syncthing@.service:

    [Unit]
    Description=Syncthing - Open Source Continuous File Synchronization for %I
    Documentation=man:syncthing(1)
    After=network.target
    
    [Service]
    User=%i
    ExecStart=/usr/bin/snap run syncthing -no-browser -logflags=0
    Restart=on-failure
    SuccessExitStatus=3 4
    RestartForceExitStatus=3 4
    
    [Install]
    WantedBy=multi-user.target

    Note that our Exec line is a little different from theirs, since we need to run syncthing under the snap program.

    $ sudo systemctl daemon-reload
    $ sudo systemctl start syncthing@USERNAME.service
    $ sudo systemctl enable syncthing@USERNAME.service

    And there you have it, we have Syncthing! The web interface for the Ubuntu Core system is only accessible over your local network, but assuming you forwarded port 22000 on your router earlier, you should be able to sync with it from anywhere.

    If you install the Syncthing desktop client (snap install syncthing in Ubuntu, dnf install syncthing-gtk in Fedora), you’ll be able to connect your other devices to each other. On each device that you connect to this one, make sure you set core.example.com as an Introducer. That way they will discover each other through it, which saves a bit of time.

    Once your devices are all connected, it’s a good idea to go to Syncthing’s web interface at http://192.168.1.2:8384 and edit the settings for each device. You can enable “Auto Accept” so whenever a device shares a new folder with core.example.com, it will be accepted automatically.

    Nextcloud + Syncthing

    There is one last thing I did here. Syncthing and Nextcloud have some overlap, but I found myself using them for pretty different sorts of tasks. I use Nextcloud for media files and archives that I want to store on a single big hard drive, and occasionally stream over the network; and I use Syncthing for files that I want to have locally on every device.

    Still, it would be nice if I could have Nextcloud’s web UI and sharing options with Syncthing’s files. In theory we could bind mount Syncthing’s data directory into Nextcloud’s data directory, but the Nextcloud and Syncthing services run as different users. So, that probably won’t go particularly well.

    Instead, it works quite well to mount Syncthing’s data directory using SSH.

    First, in Nextcloud, go to the Apps section and enable the “External storage support” app.

    Now go to Admin, then “External storages”, and allow users to mount external storage.

    Finally, go to your Personal settings, choose “External storages”, add a folder named Syncthing, and tell it to connect over SFTP. Give it the hostname of the system that has Syncthing (so, core.example.com), the username of the user that is running Syncthing (USERNAME), and the path to Syncthing’s data files (/home/USERNAME/syncthing). It will need an SSH key pair to authenticate.

    Clicking Generate keys creates the key pair; copy and paste the public key (which appears in the text field) into /home/USERNAME/.ssh/authorized_keys on the Ubuntu Core system.
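
    If that file doesn’t exist yet, something along these lines (run as USERNAME on the Ubuntu Core box) will set it up with sane permissions; the key text is whatever Nextcloud generated for you:

    $ mkdir -p /home/USERNAME/.ssh
    $ echo "ssh-rsa AAAA… nextcloud-external-storage" >> /home/USERNAME/.ssh/authorized_keys
    $ chmod 700 /home/USERNAME/.ssh
    $ chmod 600 /home/USERNAME/.ssh/authorized_keys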

    If you try the gear icon to the right, you’ll find an option to enable sharing for the external storage, which is very useful here. Now you can use Nextcloud to view, share, or edit your files from Syncthing.

    Backups

    I spun my wheels for a while with backups, but eventually I settled on Restic. It is fast, efficient, and encrypted. I’m really impressed with it.

    Unfortunately, the snap for Restic doesn’t support strict confinement, which means it won’t work on Ubuntu Core. So I cheated. Let’s set this up under the root user.

    You can find releases of Restic as prebuilt binaries. We’ll also need to install a snap that includes curl. (Or you can download the file on another system and transfer it with scp, but this blog post is too long already).

    $ snap install demo-curl
    $ snap run demo-curl.curl -L "https://github.com/restic/restic/releases/download/v0.8.3/restic_0.8.3_linux_amd64.bz2" | bunzip2 > restic
    $ chmod +x restic
    $ sudo mkdir /root/bin
    $ sudo cp restic /root/bin

    We need to figure out the environment variables we want for Restic. That depends on what kind of storage service you’re using. I created a file with those variables at /root/restic-MYACCOUNT.env. For Backblaze B2, mine looked like this:

    #!/bin/sh
    
    export RESTIC_REPOSITORY="b2:core-example-com--1"
    export B2_ACCOUNT_ID="…"
    export B2_ACCOUNT_KEY="…"
    export RESTIC_PASSWORD="…"

    Next, make a list of the files you’d like to back up in /root/backup-files.txt:

    /media/data1/nextcloud/data/USERNAME/files
    /media/data1/syncthing/USERNAME
    /writable/system-data/

    I added a couple of quick little helper scripts to handle the most common things you’ll be doing with Restic:

    /root/bin/restic-MYACCOUNT.sh

    #!/bin/sh
    
    . /root/restic-MYACCOUNT.env
    /root/bin/restic "$@"

    Use this as a shortcut to run restic with the correct environment variables.

    /root/bin/backups-push.sh

    #!/bin/sh
    
    RESTIC="/root/bin/restic-MYACCOUNT.sh"
    RESTIC_ARGS="--cache-dir /root/.cache/restic"
    
    ${RESTIC} ${RESTIC_ARGS} backup --files-from /root/backup-files.txt --exclude ".stversions" --exclude-if-present ".backup-ignore" --exclude-caches

    This will ignore any directory that contains a file named “.backup-ignore”. (So to stop a directory from being backed up, you can run touch /path/to/the/directory/.backup-ignore). This is a great way to save time if you have some big directories that don’t really need to be backed up, like a directory full of, um, Linux ISOs *shifty eyes*.

    /root/bin/backups-clean.sh

    #!/bin/sh
    
    RESTIC="/root/bin/restic-MYACCOUNT.sh"
    RESTIC_ARGS="--cache-dir /root/.cache/restic"
    
    ${RESTIC} ${RESTIC_ARGS} forget --keep-daily 7 --keep-weekly 8 --keep-monthly 12 --prune
    ${RESTIC} ${RESTIC_ARGS} check

    This will periodically remove old snapshots, prune unused blocks, and then check for errors.

    Make sure all of those scripts are executable:

    $ sudo chmod +x /root/bin/restic-MYACCOUNT.sh
    $ sudo chmod +x /root/bin/backups-push.sh
    $ sudo chmod +x /root/bin/backups-clean.sh

    We still need to add systemd stuff, but let’s try this thing first!

    $ sudo /root/bin/restic-MYACCOUNT.sh init
    $ sudo /root/bin/backups-push.sh
    $ sudo /root/bin/restic-MYACCOUNT.sh snapshots

    Have fun playing with Restic: try restoring some files, and note that you can list all the files in a snapshot and restore specific ones. It’s a really nice little backup tool.
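
    For instance, these are plain restic subcommands run through the wrapper script; the target path and include filter are just examples:

    # List the files captured in the most recent snapshot.
    $ sudo /root/bin/restic-MYACCOUNT.sh ls latest
    # Restore one directory to a scratch location to prove the backups work.
    $ sudo /root/bin/restic-MYACCOUNT.sh restore latest --target /tmp/restore-test --include /media/data1/syncthing/USERNAME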

    It’s pretty easy to get systemd helping here as well. First let’s add our service file. This is a different kind of system service because it isn’t a daemon. Instead, it is a oneshot service. We’ll save it as /writable/system-data/etc/systemd/system/backups-task.service.

    [Unit]
    Description=Regular system backups with Restic
    
    [Service]
    Type=oneshot
    ExecStart=/bin/sh /root/bin/backups-push.sh
    ExecStart=/bin/sh /root/bin/backups-clean.sh

    Now we need to schedule it to run on a regular basis. Let’s create a systemd timer unit for that: /writable/system-data/etc/systemd/system/backups-task.timer.

    [Unit]
    Description=Run backups-task daily
    
    [Timer]
    OnCalendar=09:00 UTC 
    Persistent=true
    
    [Install]
    WantedBy=timers.target

    One gotcha to notice here: with newer versions of systemd, you can use time zones like PDT or America/Vancouver for the OnCalendar entry, and you can test how that will work using systemd-analyze calendar "09:00 America/Vancouver". Alas, that is not the case in Ubuntu Core 16, so you’ll probably have the best luck using UTC and calculating time zone offsets yourself.

    Now that you have your timer and your service, you can test the service by starting it:

    $ sudo systemctl start backups-task.service
    $ sudo systemctl status backups-task.service

    If all goes well, enable the timer:

    $ sudo systemctl start backups-task.timer
    $ sudo systemctl enable backups-task.timer

    To see your timer, you can use systemctl list-timers:

    $ sudo systemctl list-timers
    …
    Sat 2018-04-28 09:00:00 UTC 3h 30min left Fri 2018-04-27 09:00:36 UTC 20h ago backups-task.timer backups-task.service
    …

    Some notes on security

    Some people (understandably) dislike running this kind of web service on port 80. Nextcloud’s Apache instance listens on ports 80 and 443 by default, but you can change that using snap set nextcloud ports.http=80 ports.https=443, substituting whatever ports you prefer. However, you may need to generate a self-signed SSL certificate in that case.

    Nextcloud (like any daemon installed by Snappy) runs as root, but, as a snap, it is confined to a subset of the system. There is some official documentation about security and sandboxing in Ubuntu Core if you are interested. You can always run sudo snap run --shell nextcloud.occ to get an idea of what it has access to.

    If you feel paranoid about how we gave Nextcloud access to all removable media, you can create a bind mount from /writable/system-data/var/snap/nextcloud/common/nextcloud to /media/data1/nextcloud, like we did for Syncthing, and snap disconnect nextcloud:removable-media. Now it only has access to those files on the other end of the bind mount.
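
    A sketch of what that mount unit might look like, following the same pattern as the earlier ones. I haven’t battle-tested this variant, and note that the unit file name must be the systemd-escaped mount point (systemd-escape --path /var/snap/nextcloud/common/nextcloud will print it for you):

    [Unit]
    Description=Bind mount Nextcloud data onto removable media

    [Mount]
    What=/media/data1/nextcloud
    Where=/var/snap/nextcloud/common/nextcloud
    Type=none
    Options=bind

    [Install]
    WantedBy=multi-user.target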

    Conclusion

    So that’s everything!

    This definitely isn’t a tiny amount of setup. It took an afternoon. (And it’ll probably take two or three years to pay for itself). But I’m impressed by how smoothly it all went, and with a few exceptions where I was nudged into loopy workarounds, it feels simple and reproducible. If you’re looking at hosting more of your own files, I would happily recommend something like this.

    on May 06, 2018 04:38 AM

    May 05, 2018

    This time around, Francisco Molinero, Francisco Javier Teruelo, and Marcos Costales, together with guests Sergi Quiles, Paco Estrada (Compilando Podcast), and Alejandro López (Slimbook), discuss the third UbuCon Europe, held this week in Xixón.

    Episode 6 of the second season

    The podcast is available to listen to at:
    on May 05, 2018 01:12 PM

    May 03, 2018

    Climbing Mount Rainier

    Benjamin Mako Hill

    Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters (14,410 ft) tall and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

    Drumheller Fountain and Mt. Rainier on the University of Washington Campus (Photo by Frank Fujimoto)

    Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.” It is common to hear Seattleites ask “is the mountain out?”

    Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

    Given this connection, I’ve always been interested in climbing Mt. Rainier. Doing so typically takes at least a couple of days and is difficult. About half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

    On Tuesday, Mika and I made our first climbing attempt and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 kph (55 mph) winds, I couldn’t get a picture at the top. But I feel like I’ve built a deeper connection with an old friend.


    Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

    on May 03, 2018 10:38 PM

    Scam alert

    Mark Shuttleworth

    Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

    on May 03, 2018 01:00 PM

    There have recently been a couple of highly-publicized (at least in the security community) issues with two tech giants logging passwords in plaintext. First, GitHub found they were logging plaintext passwords on password reset. Then, Twitter found they were logging all plaintext passwords. Let me begin by saying that I have no insider knowledge of either bug, and I have never worked at either Twitter or GitHub, but I enjoy randomly speculating on the internet, so I thought I would speculate on this. (Especially since the /r/netsec thread on the Twitter article is amazingly full of misconceptions.)

    A Password Primer

    A few commenters on /r/netsec seem amazed that Twitter ever sees the plaintext password. They seem to believe that the hashing (or “encryption” for some users) occurs on the client. Nope. In very few places have I ever seen any kind of client-side hashing (password managers being a notable exception).

    In the case of both GitHub and Twitter, you can look at the HTTP requests (using the Chrome inspector, Burp Suite, mitmproxy, or any number of tools) and see your plaintext password being sent to the server. Now, that’s not to say it’s on the wire in plaintext, only in the HTTP requests. Both sites use proper TLS implementations to tunnel the login, so a passive observer on the wire just sees encrypted traffic. However, inside that encrypted traffic, your password sits in plaintext.

    Once the plaintext password arrives at the application server, your salted & hashed password is retrieved from the database, the same salt & hash algorithm is applied to the plaintext passwords, and the two results are compared. If they’re the same, you’re in, otherwise you get the nice “Login failed” screen. In order for this to work, the server must use the same input to both of the hash algorithms, and those inputs are the salt (from the database) and the plaintext password. So yes, the server sees your plaintext password.
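
    In rough pseudocode, that server-side check looks like the sketch below. This is only illustrative: get_stored_credentials is a made-up helper, and a real system would use a dedicated password hashing scheme like bcrypt or argon2 rather than the raw PBKDF2 parameters picked here.

    import hashlib
    import hmac

    def verify_password(username, password):
        # Fetch the per-user salt and stored hash (both created at signup).
        # get_stored_credentials is a hypothetical lookup helper.
        salt, stored_hash = get_stored_credentials(username)
        # Re-derive the hash from the plaintext password the client just sent.
        candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100000)
        # Compare in constant time to avoid leaking information via timing.
        return hmac.compare_digest(candidate, stored_hash)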

    Yes, it’s possible to do client-side hashing, but it’s complicated: it requires sending the salt from the server to the client (or using a deterministic salt), it is possibly slow on mobile devices, and there are lots of reasons companies don’t want to do it. Approximately the only security improvement is avoiding logging plaintext passwords (which is, unfortunately, exactly what happened here).

    Large Scale Software

    So another trope is “this should have been caught in code review.” Yeah, it turns out code review is not perfect, and nobody has a full overview of every line of code in the application. This isn’t the space program or aircraft control systems, where the code is frozen and reviewed. In most tech companies (as far as I can tell), releases are cut all the time with a handful of changes that were reviewed in isolation and occasionally have strange interactions. It does not surprise me at all for something like this to happen.

    How it Might Have Happened

    I’d like to reiterate: this is purely speculation. I don’t know any details at either company, and I suspect Twitter found their error because someone saw the GitHub news and said “we should double check our logs.”

    Some people seem to think the login looked something like this:

    def login(username, password):
        log(username + " has password " + password)
        stored = get_stored_password(username)
        return hash(password) == stored
    

    This seems fairly obvious, and I’d like to think it would be quickly caught by the developer themselves, let alone any kind of code review. However, it’s far more likely that something like this is at play:

    def login(username, password):
        service_request = {
            'service': 'login',
            'environment': get_environment(),
            'username': username,
            'password': password,
        }
        result = make_service_request(service_request)
        return result.ok()
    
    def make_service_request(request_definition):
        if request_definition['environment'] != 'prod':
            log('making service request: ' + repr(request_definition))
        backend = get_backend(request_definition['service'])
        return backend.issue_request(request_definition)
    
    def get_environment():
        return os.getenv('ENVIRONMENT')
    

    They might even have a test like this:

    def test_make_service_request_no_logs_in_prod():
        fake_request = {'environment': 'prod'}
        make_service_request(fake_request)
        assertNotCalled(log)
    

    All of this would look great (well, acceptable; this is a blog post, not a real service) under code review. We log the requests in our test environment for debugging purposes. It’s never obvious that a login request is being logged, and in the prod environment it’s not. But maybe one day our service grows and we start deploying in multiple regions, and so we rename environments. What was prod becomes prod-us and we add prod-eu. All of a sudden, our code that has not been logging passwords starts logging passwords, and it didn’t even take a code push, just a change to an environment variable!
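
    Continuing the made-up example, the test that would have caught this has to treat “production” as a set of environment names rather than the literal string prod:

    def test_no_request_logging_in_any_prod_environment():
        # The original test only covered 'prod'; after the rename it silently
        # stopped covering production entirely.
        for env in ('prod', 'prod-us', 'prod-eu'):
            fake_request = {'environment': env}
            make_service_request(fake_request)
            assertNotCalled(log)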

    In reality, their code is probably much more complex, and the pattern even harder to see. I have spent multiple days with a team of multiple engineers trying to find one singular bug. We could reproduce it via black-box testing (i.e., pentest) but could not find it in the source code. It turned out to be a misconfigured dependency injection caused by strange inheritance rules.

    Yes, it’s bad that GitHub and Twitter had these bugs. I don’t mean to apologize for them. But they handled them responsibly, and the whole community has had a chance to learn a lesson. If GitHub had not disclosed, I suspect Twitter would not have noticed for much longer. Other organizations are probably also checking.

    Every organization will have security issues. It’s how you handle them that counts.

    on May 03, 2018 07:00 AM

    May 02, 2018

    Debugging the debugger

    Chris Coulson

    I use gdb quite often, but until recently I’ve never really needed to understand how it works or debug it before. I thought I’d document a recent issue I decided to take a look at – perhaps someone else will find it interesting or useful.

    We run the rust testsuite when building rustc packages in Ubuntu. When preparing updates to rust 1.25 recently for Ubuntu 18.04 LTS, I hit a bunch of test failures on armhf which all looked very similar. Here’s an example test failure:

    ---- [debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs stdout ----
    
    NOTE: compiletest thinks it is using GDB with native rust support
    executing "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/stage2/bin/rustc" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo" "--target=armv7-unknown-linux-gnueabihf" "-C" "prefer-dynamic" "-o" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf" "-Crpath" "-Zmiri" "-Zunstable-options" "-Lnative=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/native/rust-test-helpers" "-g" "-L" "/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf.gdb.aux"
    ------stdout------------------------------
    
    ------stderr------------------------------
    
    ------------------------------------------
    NOTE: compiletest thinks it is using GDB version 8001000
    executing "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script"
    ------stdout------------------------------
    GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
    Copyright (C) 2018 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "arm-linux-gnueabihf".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word".
    Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61.
    
    Program received signal SIGSEGV, Segmentation fault.
    0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
    
    ------stderr------------------------------
    /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file:
    No symbol 'the_a_ref' in current context
    
    ------------------------------------------
    
    error: line not found in debugger output: $1 = borrowed_c_style_enum::ABC::TheA
    status: exit code: 0
    command: "/usr/bin/gdb" "-quiet" "-batch" "-nx" "-command=/<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script"
    stdout:
    ------------------------------------------
    GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
    Copyright (C) 2018 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "arm-linux-gnueabihf".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word".
    Breakpoint 1 at 0xcc4: file /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/src/test/debuginfo/borrowed-c-style-enum.rs, line 61.
    
    Program received signal SIGSEGV, Segmentation fault.
    0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
    
    ------------------------------------------
    stderr:
    ------------------------------------------
    /<<BUILDDIR>>/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.debugger.script:10: Error in sourced command file:
    No symbol 'the_a_ref' in current context
    
    ------------------------------------------
    
    thread '[debuginfo-gdb] debuginfo/borrowed-c-style-enum.rs' panicked at 'explicit panic', tools/compiletest/src/runtest.rs:2891:9
    note: Run with `RUST_BACKTRACE=1` for a backtrace.
    

    The failing tests are all running some commands in gdb, and the inferior (tracee) is crashing inside the dynamic loader (/lib/ld-linux-armhf.so.3) before running any rust code.

    I managed to recreate this test failure on an armhf box, but when I installed the debug symbols for the dynamic loader (contained in the libc6-dbg package) so that I could attempt to debug these crashes, the failing tests all started to pass.

    A quick search on the internet shows that I’m not the first person to hit this issue – for example, this bug reported in April 2016. According to the comments, the workaround is the same – installing the debug symbols for the dynamic loader (by installing the libc6-dbg package). This obviously isn’t right and I don’t particularly like walking away from something like this without understanding it, so I decided to spend some time trying to figure out what is going on.

    The first thing I did was to load the missing debug symbols manually in gdb after hitting the crash, in order to hopefully get a useful backtrace:

    $ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
    ...
    (gdb) run                                           
    Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
    
    Program received signal SIGSEGV, Segmentation fault.
    0xf77c9f4e in ?? () from /lib/ld-linux-armhf.so.3
    (gdb) info sharedlibrary
    From        To          Syms Read   Shared Object Library                                                
    0xf77c7a40  0xf77dadd0  Yes (*)     /lib/ld-linux-armhf.so.3                                             
    0xf771ce90  0xf778e288  No          /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/../../stage2/lib/rustlib/armv7-unknown-linux-gnueabihf/lib/libstd-42d13165275d0302.so
    0xf76c91f0  0xf76d394c  No          /lib/arm-linux-gnueabihf/libgcc_s.so.1
    0xf75dad80  0xf7687a90  No          /lib/arm-linux-gnueabihf/libc.so.6
    0xf75b1a14  0xf75b2410  No          /lib/arm-linux-gnueabihf/libdl.so.2
    0xf759c810  0xf759edf0  No          /lib/arm-linux-gnueabihf/librt.so.1
    0xf757a210  0xf7585214  No          /lib/arm-linux-gnueabihf/libpthread.so.0
    (*): Shared library is missing debugging information.
    (gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40
    add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at
            .text_addr = 0xf77c7a40
    (y or n) y
    Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done.
    (gdb) bt full
    #0  dl_main (phdr=<optimized out>, phnum=<optimized out>, user_entry=<optimized out>, auxv=<optimized out>) at rtld.c:2275
            cnt = 1
            afct = 0x0
            head = <optimized out>
            ph = <optimized out>
            mode = <optimized out>
            main_map = <optimized out>
            file_size = 4294899100
            file = <optimized out>
            has_interp = <optimized out>
            i = <optimized out>
            prelinked = <optimized out>
            rtld_is_main = <optimized out>
            tcbp = <optimized out>
            __PRETTY_FUNCTION__ = <error reading variable __PRETTY_FUNCTION__ (Cannot access memory at address 0x15810)>
            first_preload = <optimized out>
            r = <optimized out>
            rtld_ehdr = <optimized out>
            rtld_phdr = <optimized out>
            cnt = <optimized out>
            need_security_init = <optimized out>
            count_modids = <optimized out>
            preloads = <optimized out>
            npreloads = <optimized out>
            preload_file = <error reading variable preload_file (Cannot access memory at address 0x157fc)>
            rtld_multiple_ref = <optimized out>
            was_tls_init_tp_called = <optimized out>
    #1  0xf77d76d0 in _dl_sysdep_start (start_argptr=start_argptr@entry=0xfffef6b1, dl_main=0xf77c872d <dl_main>) at ../elf/dl-sysdep.c:253
            phdr = <optimized out>
            phnum = <optimized out>
            user_entry = 4197241
            av = <optimized out>
    #2  0xf77c8260 in _dl_start_final (arg=0xfffef6b1) at rtld.c:414
            start_addr = <optimized out>
            start_addr = <optimized out>
    #3  _dl_start (arg=0xfffef6b1) at rtld.c:521
            entry = <optimized out>
    #4  0xf77c7b90 in ?? () from /lib/ld-linux-armhf.so.3
            library_path = <error reading variable library_path (Cannot access memory at address 0x28920)>
            version_info = <error reading variable version_info (Cannot access memory at address 0x28918)>
            any_debug = <error reading variable any_debug (Cannot access memory at address 0x28914)>
            _dl_rtld_libname = <error reading variable _dl_rtld_libname (Cannot access memory at address 0x298a8)>
            _dl_rtld_libname2 = <error reading variable _dl_rtld_libname2 (Cannot access memory at address 0x298b4)>
            tls_init_tp_called = <error reading variable tls_init_tp_called (Cannot access memory at address 0x29898)>
            audit_list = <error reading variable audit_list (Cannot access memory at address 0x298a4)>
            preloadlist = <error reading variable preloadlist (Cannot access memory at address 0x2891c)>
            _dl_skip_args = <error reading variable _dl_skip_args (Cannot access memory at address 0x2994c)>
            audit_list_string = <error reading variable audit_list_string (Cannot access memory at address 0x29968)>
            __stack_chk_guard = <error reading variable __stack_chk_guard (Cannot access memory at address 0x28968)>
            _rtld_global = <error reading variable _rtld_global (Cannot access memory at address 0x29060)>
            _rtld_global_ro = <error reading variable _rtld_global_ro (Cannot access memory at address 0x28970)>
            _dl_argc = <error reading variable _dl_argc (Cannot access memory at address 0x28910)>
            __GI__dl_argv = <error reading variable __GI__dl_argv (Cannot access memory at address 0x29894)>
            __pointer_chk_guard_local = <error reading variable __pointer_chk_guard_local (Cannot access memory at address 0x28964)>
    

    You can grab the glibc source and see that the dynamic loader ends up here in elf/rtld.c:

    if (__glibc_unlikely (GLRO(dl_naudit) > 0))
      {
        struct link_map *head = GL(dl_ns)[LM_ID_BASE]._ns_loaded;
        /* Do not call the functions for any auditing object.  */
        if (head->l_auditing == 0)
          {
            struct audit_ifaces *afct = GLRO(dl_audit);
            for (unsigned int cnt = 0; cnt < GLRO(dl_naudit); ++cnt)
              {
                if (afct->activity != NULL) // ##CRASHES HERE##
                  afct->activity (&head->l_audit[cnt].cookie, LA_ACT_CONSISTENT);
    
                afct = afct->next;
              }
          }
      }
    

    The reason for the crash is that afct is NULL:

    (gdb) p $_siginfo
    $1 = {si_signo = 11, si_errno = 0, si_code = 1, _sifields = {_pad = {0, 56, 19628232, 19628288, -156661788, 0, 80, 19811416, -156663808, -157316581, 104, 1073741824, 19811416, 96, 19811408, 80, -156661788, 14, 
          19551104, 96, 104, 13358248, 19551584, 19552160, 19552232, 19811488, 32, 128, 64}, _kill = {si_pid = 0, si_uid = 56}, _timer = {si_tid = 0, si_overrun = 56, si_sigval = {sival_int = 19628232, 
            sival_ptr = 0x12b80c8}}, _rt = {si_pid = 0, si_uid = 56, si_sigval = {sival_int = 19628232, sival_ptr = 0x12b80c8}}, _sigchld = {si_pid = 0, si_uid = 56, si_status = 19628232, si_utime = 19628288, 
          si_stime = -156661788}, _sigfault = {si_addr = 0x0}, _sigpoll = {si_band = 0, si_fd = 56}}}
    (gdb) p afct
    $2 = (struct audit_ifaces *) 0x0
    

    A quick look through the dynamic loader code shows that this condition should be impossible to hit.

    As the crash doesn’t happen with debug symbols, I thought I would attempt to debug it without the symbols. First of all, I set a breakpoint at the start of dl_main by specifying it at offset 0xcec in the .text section:

    $ gdb build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
    ...
    (gdb) starti
    Starting program: /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
    
    Program stopped.
    0xf77c7b80 in ?? () from /lib/ld-linux-armhf.so.3
    (gdb) info sharedlibrary
    From        To          Syms Read   Shared Object Library
    0xf77c7a40  0xf77dadd0  Yes (*)     /lib/ld-linux-armhf.so.3
    (*): Shared library is missing debugging information.
    (gdb) break *0xf77c872c
    Breakpoint 1 at 0xf77c872c
    (gdb) cont
    Continuing.
    
    Program received signal SIGSEGV, Segmentation fault.
    0xf77da458 in ?? () from /lib/ld-linux-armhf.so.3
    

    Huh? It’s now crashed at a different place, without hitting our breakpoint at the start of dl_main. Loading the debug symbols again shows us where:

    (gdb) add-symbol-file ~/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so 0xf77c7a40
    add symbol table from file "/home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so" at
            .text_addr = 0xf77c7a40
    (y or n) y
    Reading symbols from /home/ubuntu/libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so...done.
    (gdb) bt
    #0  ?? () at ../sysdeps/arm/armv7/multiarch/memcpy_impl.S:654 from /lib/ld-linux-armhf.so.3
    #1  0xf77c871e in handle_ld_preload (preloadlist=<optimized out>, main_map=0x0) at rtld.c:848
    #2  0x00000000 in ?? ()
    Backtrace stopped: previous frame identical to this frame (corrupt stack?)
    

    This doesn’t make much sense, but the fact that setting a breakpoint has altered the program flow is our first clue.

    On Linux, gdb interacts with the inferior using the ptrace system call. The next thing I wanted to try was running gdb in strace in order to capture the ptrace syscalls, so that I could compare differences afterwards and see if I could find any more clues.

    I created the following simple gdb command file:

    file /home/ubuntu/src/rustc/rustc-1.25.0+dfsg1+llvm/build/armv7-unknown-linux-gnueabihf/test/debuginfo/borrowed-c-style-enum.stage2-armv7-unknown-linux-gnueabihf
    run
    quit
    

    I then ran gdb with this file inside strace, with the symbols for the dynamic loader installed. Here’s the log up until the point at which gdb calls PTRACE_CONT:

    $ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21136, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
    ...
    13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0
    13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21137, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_GETREGS, 21137, NULL, 0xffd6afec) = 0
    13:08:35 ptrace(PTRACE_GETSIGINFO, 21137, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21137, si_uid=1000}) = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACESYSGOOD) = 0
    13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK) = 0
    13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0
    13:08:35 ptrace(PTRACE_CONT, 21138, NULL, SIG_0) = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_GETEVENTMSG, 21138, NULL, [21139]) = 0
    13:08:35 ptrace(PTRACE_KILL, 21139)     = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21139, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_SETOPTIONS, 21138, NULL, PTRACE_O_EXITKILL) = 0
    13:08:35 ptrace(PTRACE_KILL, 21138)     = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21138, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_KILL, 21138)     = 0
    13:08:35 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21138, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} ---
    13:08:35 ptrace(PTRACE_SETOPTIONS, 21137, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0
    13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b3b4, iov_len=72}]) = 0
    13:08:35 ptrace(PTRACE_GETVFPREGS, 21137, NULL, 0xffd6b298) = 0
    13:08:35 ptrace(PTRACE_GETREGSET, 21137, NT_PRSTATUS, [{iov_base=0xffd6b36c, iov_len=72}]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0x411efc, [NULL]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18bf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cbf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639bf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5310, [0xbf00e71c]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5d90, [0xe681bf00]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77c9a44, [0x4c18de01]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb5b8, [0x603cde01]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77cb220, [0x4639de01]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639bf00) = 0
    13:08:35 ptrace(PTRACE_PEEKTEXT, 21137, 0xf77d5bb0, [0x6d7bde01]) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0
    13:08:35 ptrace(PTRACE_CONT, 21137, 0x1, SIG_0) = 0
    

    First of all, notice that there are several PTRACE_POKEDATA calls. These are used by gdb to write to memory locations in the process that we’re debugging, e.g., to set breakpoints. For more information about how breakpoints work in gdb, this blog post has some good information. Basically, gdb writes an invalid instruction to the breakpoint location and this causes a SIGTRAP when executed, which is intercepted by gdb. When you continue over the breakpoint, gdb writes the original instruction back, single-steps over it, re-writes the invalid instruction and then continues execution.

    This is an obvious way in which gdb can interfere with our process and make it crash, so I focused on these calls. I’ve filtered them out below:

    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18de01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9ef8, 0xf00cde01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cde01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639de01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5310, 0xde01e71c) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bde01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5d90, 0xe681de01) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77c9a44, 0x4c18bf00) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb5b8, 0x603cbf00) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77cb220, 0x4639bf00) = 0
    13:08:35 ptrace(PTRACE_POKEDATA, 21137, 0xf77d5bb0, 0x6d7bbf00) = 0
    

    Notice that the first 7 of these write the same 2-byte sequence – 0xde01. These are breakpoints in code that is running in Thumb mode (see arm_linux_thumb_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). 0xde01 in Thumb mode is an undefined instruction.

    (Note that the write to 0xf77d5310 is actually a breakpoint at 0xf77d5312, as 0xde01 appears in the 2 higher order bytes and this is little-endian).

    We aren’t inserting any breakpoints ourselves – these breakpoints are set automatically by gdb to monitor various events in the dynamic loader during startup. This is something I wasn’t aware of before debugging this.

    It may be useful to know how gdb determines the addresses on which to set breakpoints at startup. The dynamic loader exports various events as SystemTap probes, and data about these is stored in the .note.stapsdt ELF section. We can inspect this using readelf:

    $ readelf -n /lib/ld-linux-armhf.so.3
    
    Displaying notes found in: .note.gnu.build-id                                                            
      Owner                 Data size       Description                          
      GNU                  0x00000014       NT_GNU_BUILD_ID (unique build ID bitstring)                      
        Build ID: 3f3b9b4bfea2654f2cedf6db2d120b4e3a39ea7e                      
    
    Displaying notes found in: .note.stapsdt                                                                 
      Owner                 Data size       Description 
      stapsdt              0x00000032       NT_STAPSDT (SystemTap probe descriptors)                         
        Provider: rtld                                           
        Name: init_start                                
        Location: 0x00002a44, Base: 0x00017b9c, Semaphore: 0x00000000                                        
        Arguments: -4@.L1204 4@[r7, #52]                      
      stapsdt              0x0000002e       NT_STAPSDT (SystemTap probe descriptors)                         
        Provider: rtld                                  
        Name: init_complete                                              
        Location: 0x00002ef8, Base: 0x00017b9c, Semaphore: 0x00000000                                        
        Arguments: -4@.L1207 4@r4                                           
      stapsdt              0x0000002e       NT_STAPSDT (SystemTap probe descriptors)                         
        Provider: rtld
        Name: map_failed
        Location: 0x00004220, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[sp, #20] 4@r5
      stapsdt              0x00000035       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: map_start
        Location: 0x000045b8, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[r7, #252] 4@[r7, #72]
      stapsdt              0x0000003c       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: map_complete
        Location: 0x0000e020, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
      stapsdt              0x00000036       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: reloc_start
        Location: 0x0000e09e, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[fp, #20] 4@[r7, #36]
      stapsdt              0x0000003e       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: reloc_complete
        Location: 0x0000e312, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[fp, #20] 4@[r7, #36] 4@r4
      stapsdt              0x00000037       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: unmap_start
        Location: 0x0000ebb0, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[r7, #104] 4@[r7, #80]
      stapsdt              0x0000003a       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: unmap_complete
        Location: 0x0000ed90, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: -4@[r7, #104] 4@[r7, #80]
      stapsdt              0x00000029       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: setjmp
        Location: 0x0001201c, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: 4@r0 -4@r1 4@r14
      stapsdt              0x00000029       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: longjmp
        Location: 0x00012088, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: 4@r0 -4@r1 4@r4
      stapsdt              0x00000031       NT_STAPSDT (SystemTap probe descriptors)
        Provider: rtld
        Name: longjmp_target
        Location: 0x000120ba, Base: 0x00017b9c, Semaphore: 0x00000000
        Arguments: 4@r0 -4@r1 4@r14
    

    GDB uses this information to map events to breakpoint addresses. You can read a bit more about gdb’s linker interface and about userspace SystemTap probes in their respective documentation.

    With a base address of 0xf77c7000, we can look at the PTRACE_POKEDATA calls and see that the addresses map to these probes:

    • 0xf77c9a44 => init_start
    • 0xf77c9ef8 => init_complete
    • 0xf77cb5b8 => map_start
    • 0xf77cb220 => map_failed
    • 0xf77d5312 => reloc_complete
    • 0xf77d5bb0 => unmap_start
    • 0xf77d5d90 => unmap_complete

    This is consistent with the probe_info array in gdb/solib-svr4.c in the gdb source code.
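    As a quick sanity check, the mapping is just the shared object’s load address plus each probe’s Location from .note.stapsdt. Here’s a minimal sketch of that arithmetic (the base address is the one observed in the run above):

    #include <stdio.h>

    int main(void)
    {
      /* ld.so load address observed in the strace run above */
      unsigned long base = 0xf77c7000;
      struct { const char *name; unsigned long location; } probes[] = {
        { "init_start",     0x2a44 },
        { "init_complete",  0x2ef8 },
        { "map_start",      0x45b8 },
        { "map_failed",     0x4220 },
        { "reloc_complete", 0xe312 },
        { "unmap_start",    0xebb0 },
        { "unmap_complete", 0xed90 },
      };
      for (unsigned i = 0; i < sizeof probes / sizeof probes[0]; i++)
        printf ("0x%lx => %s\n", base + probes[i].location, probes[i].name);
      return 0;
    }
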

    I then ran gdb inside strace again, this time without the symbols for the dynamic loader installed. Here’s the log up until the point at which the inferior process crashes:

    $ strace -t -eptrace gdb -quiet -batch -nx -command=~/test.script
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=21098, si_uid=1000, si_status=0, si_utime=0, si_stime=0} ---
    ...
    13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0                                                                                                                                                       
    13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0                                                      
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_GETREGS, 21099, NULL, 0xffb84a9c) = 0
    13:01:50 ptrace(PTRACE_GETSIGINFO, 21099, NULL, {si_signo=SIGTRAP, si_code=SI_USER, si_pid=21099, si_uid=1000}) = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGSTOP, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACESYSGOOD) = 0
    13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK) = 0
    13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORKDONE) = 0
    13:01:50 ptrace(PTRACE_CONT, 21100, NULL, SIG_0) = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGTRAP, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_GETEVENTMSG, 21100, NULL, [21101]) = 0
    13:01:50 ptrace(PTRACE_KILL, 21101)     = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21101, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_SETOPTIONS, 21100, NULL, PTRACE_O_EXITKILL) = 0
    13:01:50 ptrace(PTRACE_KILL, 21100)     = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21100, si_uid=1000, si_status=SIGCHLD, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_KILL, 21100)     = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_KILLED, si_pid=21100, si_uid=1000, si_status=SIGKILL, si_utime=0, si_stime=0} ---
    13:01:50 ptrace(PTRACE_SETOPTIONS, 21099, NULL, PTRACE_O_TRACESYSGOOD|PTRACE_O_TRACEFORK|PTRACE_O_TRACEVFORK|PTRACE_O_TRACECLONE|PTRACE_O_TRACEEXEC|PTRACE_O_TRACEVFORKDONE|PTRACE_O_EXITKILL) = 0
    13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e64, iov_len=72}]) = 0
    13:01:50 ptrace(PTRACE_GETVFPREGS, 21099, NULL, 0xffb84d48) = 0
    13:01:50 ptrace(PTRACE_GETREGSET, 21099, NT_PRSTATUS, [{iov_base=0xffb84e1c, iov_len=72}]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0x411efc, [NULL]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9a44, [0x4c18bf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77c9ef8, [0xf00cbf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb5b8, [0x603cbf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77cb220, [0x4639bf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5310, [0xbf00e71c]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5314, [0x4620e776]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5bb0, [0x6d7bbf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0
    13:01:50 ptrace(PTRACE_PEEKTEXT, 21099, 0xf77d5d90, [0xe681bf00]) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0
    13:01:50 ptrace(PTRACE_CONT, 21099, 0x1, SIG_0) = 0
    13:01:50 --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_TRAPPED, si_pid=21099, si_uid=1000, si_status=SIGSEGV, si_utime=0, si_stime=0} ---
    

    Focusing again on the PTRACE_POKEDATA calls:

    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9ef8, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5310, 0x1f0e71c) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5314, 0x4620e7f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5d90, 0xe7f001f0) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77c9a44, 0x4c18bf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb5b8, 0x603cbf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77cb220, 0x4639bf00) = 0
    13:01:50 ptrace(PTRACE_POKEDATA, 21099, 0xf77d5bb0, 0x6d7bbf00) = 0
    

    We see writes to the same 7 addresses where breakpoints were set during the first run, but now there’s a different byte sequence and an extra write. This time, gdb is writing a 4-byte sequence – 0xe7f001f0 – to 6 addresses. These are breakpoints for code running in ARM mode (see eabi_linux_arm_le_breakpoint in gdb/arm-linux-tdep.c in the gdb source code). The 2 writes to 0xf77d5310 and 0xf77d5314 are a single breakpoint at 0xf77d5312 (there are 2 writes because it is not on a 4-byte boundary).
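    To make the straddling write concrete, here’s a minimal sketch (not gdb code) of how those 4 breakpoint bytes land across the two aligned words, assuming a little-endian host to match the armhf target. The input and output values are taken from the strace log above:

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    int main(void)
    {
      uint32_t lo = 0xbf00e71c;  /* original word at 0xf77d5310 */
      uint32_t hi = 0x4620e776;  /* original word at 0xf77d5314 */
      uint32_t bp = 0xe7f001f0;  /* ARM-mode breakpoint sequence */
      uint8_t buf[8];

      memcpy (buf, &lo, 4);
      memcpy (buf + 4, &hi, 4);
      memcpy (buf + 2, &bp, 4);  /* 0xf77d5312 is offset 2 into the first word */

      memcpy (&lo, buf, 4);
      memcpy (&hi, buf + 4, 4);
      /* prints 0x01f0e71c 0x4620e7f0, matching the two POKEDATA values */
      printf ("0x%08x 0x%08x\n", (unsigned) lo, (unsigned) hi);
      return 0;
    }
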

    Checking the ARMv7 reference manual shows that 0xe7f001f0 is an undefined instruction in ARM mode. However, this byte sequence is decoded as the following valid instructions in Thumb mode:

    lsl r0, r6, #7
    b #-16
    

    So, it takes the contents of r6, does a logical shift left by 7, writes the result to r0, and then does an unconditional branch backwards by 16 bytes. This is quite likely to send our program (in this case, the dynamic loader) off the rails and crash it with a less-than-useful stacktrace, which is exactly the behaviour we’re seeing.
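    The reason a single 32-bit word decodes as two instructions is that Thumb code is fetched as 16-bit little-endian halfwords, so the low half of the little-endian word is executed first. A small sketch of the split (the decodings in the comments are the ones given above):

    #include <stdio.h>

    int main(void)
    {
      unsigned seq = 0xe7f001f0;      /* gdb's ARM-mode breakpoint sequence */
      unsigned first = seq & 0xffff;  /* 0x01f0: lsl r0, r6, #7 */
      unsigned second = seq >> 16;    /* 0xe7f0: unconditional branch backwards */
      printf ("halfwords: 0x%04x 0x%04x\n", first, second);
      return 0;
    }
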

    Why is this happening?

    The next step was to figure out why gdb is inserting the ARM breakpoint instruction sequence instead of the Thumb one. To do this, I needed to understand where the breakpoints are written; grepping the source code suggests the PTRACE_POKEDATA calls happen in inf_ptrace_peek_poke in gdb/inf-ptrace.c (you won’t actually find PTRACE_POKEDATA there – it’s PT_WRITE_D, which is defined in /usr/include/sys/ptrace.h).
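    For reference, here’s a minimal sketch (not gdb’s actual code) of the word-granular read-modify-write pattern that ptrace imposes, which is what the paired PTRACE_PEEKTEXT/PTRACE_POKEDATA calls in the strace logs show. pid, addr, bp_bytes and bp_len are hypothetical inputs:

    #include <sys/ptrace.h>
    #include <sys/types.h>
    #include <string.h>

    /* Assumes the breakpoint fits within one word; one that straddles a
       word boundary (like 0xf77d5312 above) needs two rounds of this.
       Error handling via errno is omitted for brevity.  */
    static long
    poke_breakpoint (pid_t pid, unsigned long addr,
                     const unsigned char *bp_bytes, size_t bp_len)
    {
      unsigned long aligned = addr & ~(sizeof (long) - 1);
      long word = ptrace (PTRACE_PEEKTEXT, pid, (void *) aligned, NULL);

      /* Splice the breakpoint bytes into the word we read back.  */
      memcpy ((char *) &word + (addr - aligned), bp_bytes, bp_len);

      return ptrace (PTRACE_POKEDATA, pid, (void *) aligned, (void *) word);
    }
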

    Running gdb inside gdb with the dynamic loader debug symbols installed and setting a breakpoint on inf_ptrace_peek_poke shows me the call stack. Note that I set a breakpoint by line number, as inf_ptrace_peek_poke is inlined and it was the only way I could get the conditional breakpoint to work:

    $ gdb --args gdb --command=~/test.script
    ...                                                                                                                                                                
    (gdb) break ./gdb/inf-ptrace.c:578 if writebuf != 0x0                                                                                                                                                              
    Breakpoint 1 at 0x51218: file ./gdb/inf-ptrace.c, line 578.                                                                                                                                                        
    (gdb) run                                                                                                                                                                                                          
    Starting program: /usr/bin/gdb --command=\~/test.script                                                                                                                                                            
    Cannot parse expression `.L1207 4@r4'.                                                                                                                                                                             
    warning: Probes-based dynamic linker interface failed.                                                   
    Reverting to original interface.                                                                                                                                                                                   
    
    [Thread debugging using libthread_db enabled]                                                                                                                                                                      
    Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".                                                                                                                                      
    GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git                                                         
    Copyright (C) 2018 Free Software Foundation, Inc.                                                        
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>                                                                                                                                      
    This is free software: you are free to change and redistribute it.                                                                                                                                                 
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying"                                                                                                                                         
    and "show warranty" for details.                                                                                                                                                                                   
    This GDB was configured as "arm-linux-gnueabihf".                                                        
    Type "show configuration" for configuration details.                                                     
    For bug reporting instructions, please see:                                                              
    <http://www.gnu.org/software/gdb/bugs/>.                                                                 
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word".
    
    Breakpoint 1, inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint>     "\001\336", offset=4152138308, len=2,
        xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578
    578     ./gdb/inf-ptrace.c: No such file or directory.
    (gdb) p/x offset
    $1 = 0xf77c9a44
    (gdb) bt
    #0  inf_ptrace_xfer_partial (ops=<optimized out>, object=<optimized out>, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2,
        xfered_len=0xfffeee18) at ./gdb/inf-ptrace.c:578
    #1  0x00457512 in linux_xfer_partial (ops=0x8306e0, object=<optimized out>, annex=0x0, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2, xfered_len=0xfffeee18)
        at ./gdb/linux-nat.c:4280
    #2  0x004576da in linux_nat_xfer_partial (ops=0x8306e0, object=TARGET_OBJECT_MEMORY, annex=<optimized out>, readbuf=0x0, writebuf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=2,
        xfered_len=0xfffeee18) at ./gdb/linux-nat.c:3908
    #3  0x005f64f4 in raw_memory_xfer_partial (ops=ops@entry=0x8306e0, readbuf=readbuf@entry=0x0, writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", memaddr=4152138308, len=len@entry=2,
        xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1064
    #4  0x005f6a98 in target_xfer_partial (ops=ops@entry=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, readbuf=readbuf@entry=0x0,
        writebuf=writebuf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308, len=<optimized out>, xfered_len=xfered_len@entry=0xfffeee18) at ./gdb/target.c:1298
    #5  0x005f7030 in target_write_partial (xfered_len=0xfffeee18, len=2, offset=<optimized out>, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, object=TARGET_OBJECT_RAW_MEMORY, ops=0x8306e0)
        at ./gdb/target.c:1554
    #6  target_write_with_progress (ops=0x8306e0, object=object@entry=TARGET_OBJECT_RAW_MEMORY, annex=annex@entry=0x0, buf=buf@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", offset=4152138308,
        len=len@entry=2, progress=progress@entry=0x0, baton=baton@entry=0x0) at ./gdb/target.c:1821
    #7  0x005f70d2 in target_write (len=2, offset=2, buf=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", annex=0x0, object=TARGET_OBJECT_RAW_MEMORY, ops=<optimized out>) at ./gdb/target.c:1847
    #8  target_write_raw_memory (memaddr=memaddr@entry=4152138308, myaddr=myaddr@entry=0x6b29fc <arm_linux_thumb_le_breakpoint> "\001\336", len=len@entry=2) at ./gdb/target.c:1473
    #9  0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66
    #10 0x004de3aa in bkpt_insert_location (bl=0x923698) at ./gdb/breakpoint.c:12525
    #11 0x004e8426 in insert_bp_location (bl=bl@entry=0x923698, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec,
        hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553
    #12 0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977
    #13 update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177
    #14 0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215
    #15 0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555
    #16 create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x928fb0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562
    #17 0x004497bc in svr4_create_probe_breakpoints (objfile=0x933888, probes=0xfffef148, gdbarch=0x928fb0) at ./gdb/solib-svr4.c:2089
    #18 svr4_create_solib_event_breakpoints (gdbarch=0x928fb0, address=<optimized out>) at ./gdb/solib-svr4.c:2173
    #19 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465
    #20 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057
    #21 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469
    #22 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665
    #23 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886
    #24 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x880f10 "run", from_tty=0) at ./gdb/top.c:630
    #25 0x00548760 in command_handler (command=0x880f10 "run") at ./gdb/event-top.c:583
    #26 0x00606a66 in read_command_file (stream=stream@entry=0x874ae0) at ./gdb/top.c:424
    #27 0x004684e2 in script_from_file (stream=stream@entry=0x874ae0, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592
    #28 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x874ae0) at ./gdb/cli/cli-cmds.c:568
    #29 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604
    #30 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=1) at ./gdb/main.c:379
    #31 0x00588ea0 in captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125
    #32 captured_main (data=<optimized out>) at ./gdb/main.c:1147
    #33 gdb_main (args=<optimized out>) at ./gdb/main.c:1173
    #34 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32
    

    Frame 9 (default_memory_insert_breakpoint) looks like the interesting one here. Taking a look at what it does:

    int
    default_memory_insert_breakpoint (struct gdbarch *gdbarch,
                      struct bp_target_info *bp_tgt)
    {
      CORE_ADDR addr = bp_tgt->placed_address;
      const unsigned char *bp;
      gdb_byte *readbuf;
      int bplen;
      int val;
    
      /* Determine appropriate breakpoint contents and size for this address.  */
      bp = gdbarch_sw_breakpoint_from_kind (gdbarch, bp_tgt->kind, &bplen);
    
      /* Save the memory contents in the shadow_contents buffer and then
         write the breakpoint instruction.  */
      readbuf = (gdb_byte *) alloca (bplen);
      val = target_read_memory (addr, readbuf, bplen);
      if (val == 0)
        {
           ...
          bp_tgt->shadow_len = bplen;
          memcpy (bp_tgt->shadow_contents, readbuf, bplen);
    
          val = target_write_raw_memory (addr, bp, bplen);
        }
    
      return val;
    }
    

    The call to gdbarch_sw_breakpoint_from_kind appears to return the bytes to be written for our breakpoint. gdbarch_sw_breakpoint_from_kind delegates to arm_sw_breakpoint_from_kind in gdb/arm-tdep.c. (The gdbarch_ functions provide a way for architecture-independent code in gdb to call functions specific to the architecture associated with the target.) Taking a look at what this does:

    static const gdb_byte *
    arm_sw_breakpoint_from_kind (struct gdbarch *gdbarch, int kind, int *size)
    {
      struct gdbarch_tdep *tdep = gdbarch_tdep (gdbarch);
    
      switch (kind)
        {
        case ARM_BP_KIND_ARM:
          *size = tdep->arm_breakpoint_size;
          return tdep->arm_breakpoint;
        case ARM_BP_KIND_THUMB:
          *size = tdep->thumb_breakpoint_size;
          return tdep->thumb_breakpoint;
        case ARM_BP_KIND_THUMB2:
          *size = tdep->thumb2_breakpoint_size;
          return tdep->thumb2_breakpoint;
        default:
          gdb_assert_not_reached ("unexpected arm breakpoint kind");
        }
    }
    

    So, arm_sw_breakpoint_from_kind returns an ARM, Thumb or Thumb2 breakpoint instruction sequence depending on the value of kind. If we switch to frame 9, we should be able to inspect the value of kind:

    (gdb) f 9
    #9  0x00590c8e in default_memory_insert_breakpoint (gdbarch=<optimized out>, bp_tgt=0x9236f0) at ./gdb/mem-break.c:66
    66      ./gdb/mem-break.c: No such file or directory.
    (gdb) p bp_tgt->kind
    $2 = 2
    

    2 is ARM_BP_KIND_THUMB, so this appears to check out. Moving further up the stack, we find that kind is determined in frame 10 (bkpt_insert_location in gdb/breakpoint.c). Let’s have a look at what that does:

    static int
    bkpt_insert_location (struct bp_location *bl)
    {
      CORE_ADDR addr = bl->target_info.reqstd_address;
    
      bl->target_info.kind = breakpoint_kind (bl, &addr);
      bl->target_info.placed_address = addr;
    
      if (bl->loc_type == bp_loc_hardware_breakpoint)
        return target_insert_hw_breakpoint (bl->gdbarch, &bl->target_info);
      else
        return target_insert_breakpoint (bl->gdbarch, &bl->target_info);
    }
    

    This calls breakpoint_kind, which delegates to arm_breakpoint_kind_from_pc in gdb/arm-tdep.c via gdbarch_breakpoint_kind_from_pc. arm_breakpoint_kind_from_pc maps the breakpoint address to an instruction set and returns one of three values – ARM_BP_KIND_ARM, ARM_BP_KIND_THUMB or ARM_BP_KIND_THUMB2. Looking at its implementation, the most interesting part is a call to arm_pc_is_thumb. Let’s have a look at how this works:

    int
    arm_pc_is_thumb (struct gdbarch *gdbarch, CORE_ADDR memaddr)
    {
      struct bound_minimal_symbol sym;
      char type;
      ...
    
      /* If bit 0 of the address is set, assume this is a Thumb address.  */
      if (IS_THUMB_ADDR (memaddr))
        return 1;
    

    So, first of all it checks whether bit 0 of the breakpoint address is set. Looking at the SystemTap probes in .note.stapsdt from our earlier readelf output, we can see that this is not the case for any probe.
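    The IS_THUMB_ADDR check is a simple bit test. Wrapped in a tiny program for illustration (the macros are as defined in gdb/arm-tdep.h; the rest is scaffolding):

    #include <stdio.h>

    /* from gdb/arm-tdep.h */
    #define IS_THUMB_ADDR(addr)     ((addr) & 1)
    #define MAKE_THUMB_ADDR(addr)   ((addr) | 1)
    #define UNMAKE_THUMB_ADDR(addr) ((addr) & ~1)

    int main(void)
    {
      unsigned long probe = 0xf77c9a44;  /* init_start, from the logs above */
      printf ("IS_THUMB_ADDR = %lu\n", IS_THUMB_ADDR (probe));  /* prints 0 */
      return 0;
    }

    Following on: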

      /* If the user wants to override the symbol table, let him.  */
      if (strcmp (arm_force_mode_string, "arm") == 0)
        return 0;
      if (strcmp (arm_force_mode_string, "thumb") == 0)
        return 1;
    
      /* ARM v6-M and v7-M are always in Thumb mode.  */
      if (gdbarch_tdep (gdbarch)->is_m)
        return 1;
    

    We’re not forcing the mode and this isn’t ARM v6-M or v7-M, so, continuing:

      /* If there are mapping symbols, consult them.  */
      type = arm_find_mapping_symbol (memaddr, NULL);
      if (type)
        return type == 't';
    

    arm_find_mapping_symbol tries to find a mapping symbol associated with the breakpoint address. Mapping symbols are a special type of symbol used to identify transitions between ARM and Thumb instruction sets (they are described in the ELF for the ARM Architecture specification). Breaking here in gdb shows that there isn’t a mapping symbol associated with the init_start probe:

    (gdb) break ./gdb/arm-tdep.c:434
    Breakpoint 2 at 0x43ec3c: file ./gdb/arm-tdep.c, line 434.
    (gdb) run
    ...
    
    Breakpoint 2, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:434
    434     ./gdb/arm-tdep.c: No such file or directory.
    (gdb) p/x memaddr
    $2 = 0xf77c9a44
    (gdb) p type
    $3 = 0 '\000'
    

    So, continuing to the next step:

      /* Thumb functions have a "special" bit set in minimal symbols.  */
      sym = lookup_minimal_symbol_by_pc (memaddr);
      if (sym.minsym)
        return (MSYMBOL_IS_SPECIAL (sym.minsym));
    

    lookup_minimal_symbol_by_pc tries to map the breakpoint address to a function symbol. MSYMBOL_IS_SPECIAL(sym.minsym) expands to sym.minsym->target_flag_1 and is 1 if bit 0 of the symbol’s target address is set, indicating that the function is called in Thumb mode (see arm_elf_make_msymbol_special in gdb/arm-tdep.c for where this is set). Breaking here in gdb shows that this succeeds:

    (gdb) break ./gdb/arm-tdep.c:439
    Breakpoint 3 at 0x43ec54: file ./gdb/arm-tdep.c, line 439.
    (gdb) cont
    Continuing.
    
    Breakpoint 3, arm_pc_is_thumb (gdbarch=gdbarch@entry=0x928fb0, memaddr=memaddr@entry=4152138308) at ./gdb/arm-tdep.c:439
    439     in ./gdb/arm-tdep.c
    (gdb) p sym.minsym
    $4 = (minimal_symbol *) 0x964278
    (gdb) p *sym.minsym
    $5 = {mginfo = {name = 0x952ff8 "dl_main", value = {ivalue = 5932, block = 0x172c, bytes = 0x172c <error: Cannot access memory at address 0x172c>, address = 5932, common_block = 0x172c, chain = 0x172c}, 
        language_specific = {obstack = 0x0, demangled_name = 0x0}, language = language_auto, ada_mangled = 0, section = 10}, size = 10516, filename = 0x949a48 "rtld.c", type = mst_file_text, created_by_gdb = 0, 
      target_flag_1 = 1, target_flag_2 = 0, has_size = 1, hash_next = 0x0, demangled_hash_next = 0x0}
    (gdb) p sym.minsym->target_flag_1
    $6 = 1
    

    It indicates that the init_start probe is in dl_main, and that it is called in Thumb mode.

    We can use readelf to inspect the symbol table and verify that this is correct:

    $ readelf -s libc6-syms/usr/lib/debug/lib/arm-linux-gnueabihf/ld-2.27.so | grep dl_main
        42: 0000172d 10516 FUNC LOCAL DEFAULT 11 dl_main
    

    Note that bit 0 of the target address is set: the symbol’s value is 0x172d, while the minimal symbol above records the address as 0x172c (5932) with target_flag_1 = 1.
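    To make that bookkeeping concrete, here’s a minimal sketch (not gdb’s actual code) of what arm_elf_make_msymbol_special effectively does with that symbol value:

    #include <stdio.h>
    #include <stdint.h>

    struct minsym { uint32_t address; int target_flag_1; };

    /* ARM ELF encodes "this function is Thumb" in bit 0 of the symbol
       value; the address is recorded with the bit stripped, and the bit
       itself becomes the "special" flag.  */
    static struct minsym
    make_msymbol (uint32_t sym_value)
    {
      struct minsym m = { sym_value & ~1u, (int) (sym_value & 1u) };
      return m;
    }

    int main(void)
    {
      struct minsym dl_main = make_msymbol (0x172d);  /* value from readelf */
      /* prints address=0x172c thumb=1, matching the gdb session above */
      printf ("address=0x%x thumb=%d\n", (unsigned) dl_main.address,
              dl_main.target_flag_1);
      return 0;
    }
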

    If lookup_minimal_symbol_by_pc fails, then we’re basically out of luck and arm_pc_is_thumb will return 0 (indicating that the breakpoint address is in an area that is executing ARM instructions). But the lookup depends on the .symtab ELF section being present, so there is an obvious problem here: .symtab is stripped from the binary during the build (and shipped in a separate debug object).

    I then ran gdb in gdb without the dynamic loader symbols installed and set a breakpoint on default_memory_insert_breakpoint:

    $ gdb --args gdb --batch --command=~/test.script
    (gdb) set debug-file-directory /home/ubuntu/libc6-syms/usr/lib/debug/:/usr/lib/debug/                                                                                                                             
    (gdb) break default_memory_insert_breakpoint(gdbarch*, bp_target_info*)                                  
    Breakpoint 1 at 0x190c34: file ./gdb/mem-break.c, line 39.                                                                                                                                                         
    (gdb) run                                                                                                                                                                                                          
    Starting program: /usr/bin/gdb --batch --command=\~/test.script                                                                                                                                                    
    Cannot parse expression `.L1207 4@r4'.                                                                                                                                                                             
    warning: Probes-based dynamic linker interface failed.                                                   
    Reverting to original interface.                                                                         
    
    [Thread debugging using libthread_db enabled]                                                                                                                                                                      
    Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".                                                                                                                                      
    
    Breakpoint 1, default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39
    39      ./gdb/mem-break.c: No such file or directory.
    (gdb) bt
    #0  default_memory_insert_breakpoint (gdbarch=0x9288c0, bp_tgt=0x922fe8) at ./gdb/mem-break.c:39         
    #1  0x004de3aa in bkpt_insert_location (bl=0x922f90) at ./gdb/breakpoint.c:12525
    #2  0x004e8426 in insert_bp_location (bl=bl@entry=0x922f90, tmp_error_stream=tmp_error_stream@entry=0xfffef07c, disabled_breaks=disabled_breaks@entry=0xfffeefec,
        hw_breakpoint_error=hw_breakpoint_error@entry=0xfffeeff0, hw_bp_error_explained_already=hw_bp_error_explained_already@entry=0xfffeeff4) at ./gdb/breakpoint.c:2553
    #3  0x004e9556 in insert_breakpoint_locations () at ./gdb/breakpoint.c:2977
    #4  update_global_location_list (insert_mode=insert_mode@entry=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12177
    #5  0x004ea0a0 in update_global_location_list_nothrow (insert_mode=UGLL_MAY_INSERT) at ./gdb/breakpoint.c:12215
    #6  0x004ea484 in create_solib_event_breakpoint_1 (insert_mode=UGLL_MAY_INSERT, address=address@entry=4152138308, gdbarch=gdbarch@entry=0x0) at ./gdb/breakpoint.c:7555
    #7  create_solib_event_breakpoint (gdbarch=gdbarch@entry=0x9288c0, address=address@entry=4152138308) at ./gdb/breakpoint.c:7562
    #8  0x004497bc in svr4_create_probe_breakpoints (objfile=0x933238, probes=0xfffef148, gdbarch=0x9288c0) at ./gdb/solib-svr4.c:2089
    #9  svr4_create_solib_event_breakpoints (gdbarch=0x9288c0, address=<optimized out>) at ./gdb/solib-svr4.c:2173
    #10 0x00449c5c in enable_break (from_tty=<optimized out>, info=<optimized out>) at ./gdb/solib-svr4.c:2465
    #11 svr4_solib_create_inferior_hook (from_tty=<optimized out>) at ./gdb/solib-svr4.c:3057
    #12 0x0056bba6 in post_create_inferior (target=0x801084 <current_target>, from_tty=from_tty@entry=0) at ./gdb/infcmd.c:469
    #13 0x0056c736 in run_command_1 (args=<optimized out>, from_tty=0, run_how=RUN_NORMAL) at ./gdb/infcmd.c:665
    #14 0x00465334 in cmd_func (cmd=<optimized out>, args=<optimized out>, from_tty=<optimized out>) at ./gdb/cli/cli-decode.c:1886
    #15 0x006062a6 in execute_command (p=<optimized out>, p@entry=0x874758 "run", from_tty=0) at ./gdb/top.c:630
    #16 0x00548760 in command_handler (command=0x874758 "run") at ./gdb/event-top.c:583
    #17 0x00606a66 in read_command_file (stream=stream@entry=0x875da8) at ./gdb/top.c:424
    #18 0x004684e2 in script_from_file (stream=stream@entry=0x875da8, file=file@entry=0xfffef7d0 "~/test.script") at ./gdb/cli/cli-script.c:1592
    #19 0x004639bc in source_script_from_stream (file_to_open=0xfffef7d0 "~/test.script", file=0xfffef7d0 "~/test.script", stream=0x875da8) at ./gdb/cli/cli-cmds.c:568
    #20 source_script_with_search (file=0xfffef7d0 "~/test.script", from_tty=<optimized out>, search_path=<optimized out>) at ./gdb/cli/cli-cmds.c:604
    #21 0x0058821a in catch_command_errors (command=0x463a89 <source_script(char const*, int)>, arg=0xfffef7d0 "~/test.script", from_tty=0) at ./gdb/main.c:379
    #22 0x00588ea0 in captured_main_1 (context=<optimized out>) at ./gdb/main.c:1125
    #23 captured_main (data=<optimized out>) at ./gdb/main.c:1147
    #24 gdb_main (args=<optimized out>) at ./gdb/main.c:1173
    #25 0x004343ac in main (argc=<optimized out>, argv=<optimized out>) at ./gdb/gdb.c:32
    (gdb) p/x bp_tgt->placed_address
    $1 = 0xf77c9a44
    (gdb) p bp_tgt->kind
    $2 = 4
    

    Sure enough, default_memory_insert_breakpoint is called this time with kind == 4 (ARM_BP_KIND_ARM), which seems to be incorrect. Setting a breakpoint in arm_pc_is_thumb again, we can verify that the reason for this is that the call to lookup_minimal_symbol_by_pc fails:

    (gdb) break ./gdb/arm-tdep.c:439                                                                                                                                                                                   
    Breakpoint 1 at 0x3ec54: file ./gdb/arm-tdep.c, line 439.                                                                                                                                                          
    (gdb) run
    Starting program: /usr/bin/gdb --command=\~/test.script                                                  
    Cannot parse expression `.L1207 4@r4'.                                                                   
    warning: Probes-based dynamic linker interface failed.                                                   
    Reverting to original interface.                    
    
    [Thread debugging using libthread_db enabled]       
    Using host libthread_db library "/lib/arm-linux-gnueabihf/libthread_db.so.1".
    GNU gdb (Ubuntu 8.1-0ubuntu3) 8.1.0.20180409-git
    Copyright (C) 2018 Free Software Foundation, Inc.
    License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
    This is free software: you are free to change and redistribute it.
    There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
    and "show warranty" for details.
    This GDB was configured as "arm-linux-gnueabihf".
    Type "show configuration" for configuration details.
    For bug reporting instructions, please see:
    <http://www.gnu.org/software/gdb/bugs/>.
    Find the GDB manual and other documentation resources online at:
    <http://www.gnu.org/software/gdb/documentation/>.
    For help, type "help".
    Type "apropos word" to search for commands related to "word".
    
    Breakpoint 1, arm_pc_is_thumb (gdbarch=0x928fb0, memaddr=4152138308) at ./gdb/arm-tdep.c:439
    439     ./gdb/arm-tdep.c: No such file or directory.
    (gdb) p/x memaddr
    $1 = 0xf77c9a44
    (gdb) p sym.minsym
    $2 = (minimal_symbol *) 0x0
    

    This results in arm_pc_is_thumb returning 0 and arm_breakpoint_kind_from_pc returning ARM_BP_KIND_ARM, which results in arm_sw_breakpoint_from_kind returning the wrong breakpoint instruction sequence.

    TL;DR

    GDB causes a crash in the dynamic loader (ld.so) on armv7 if ld.so has been stripped of its symbol table, because it is unable to correctly determine the appropriate instruction set when inserting probe event breakpoints.

    If you’ve got to the end of this (congratulations), then you’re probably going to be disappointed to hear that I’m not sure what the proper fix is – this isn’t really my area of expertise. For the rustc package, I just added a Build-Depends: libc6-dbg [armhf] as a workaround for now, and that might even be the correct fix. But, it’s certainly nicer to understand why it didn’t work in the first place.

    on May 02, 2018 09:02 PM

    One of the complaints about the new streaming entertainment world is that it removes the collective experience of everyone watching the same programme on telly the night before and then discussing it. In the international world I tend to live in, that was never much of an option anyway; instead, when I meet people around the world, the best series from the world of media are now a common topic of conversation. So allow me to recommend a couple which seem to have missed many people’s consciousness.

    On the original streaming media site, BBC iPlayer radio, there’s a whole new 6th series of Hitchhiker’s Guide to the Galaxy: 40 years in the making and still full of whimsical, understated comedy about life. Best of all, they’re repeating the 1st and 2nd series, and have just started the 3rd.

    Back in telly land I was reluctant to pay money for the privilege of spending my life watching telly, but a student discount made Amazon Prime a tempting offer for my girlfriend.  I discovered Black Sails, which is the best telly I’ve ever seen.  A prequel to the Scottish classic Treasure Island featuring Captain Flint (who, you’ll remember, only appears in the original book as a parrot) and John Silver, it impressively mixes in real-life pirates from the 18th-century Caribbean.  The production qualities are superb; filming on water is always expensive (see Waterworld or Titanic, or even Lost), and here they had to recreate several near-full-size sailing boats.  The plotting is ideal, with allegiances changing every episode or two in a mostly plausible way.  And it successfully ends before running out of energy.  I’m a fan.

    Meanwhile, on Netflix, I wasn’t especially interested in the new Star Trek, but it turns out to include space tardigrades and therefore became much more exciting.

     

    on May 02, 2018 10:36 AM

    May 01, 2018

    Once again it’s been a while since we posted a general update, so here’s a changelog-style summary of what we’ve been up to.  As usual, this changelog preserves a reasonable amount of technical detail, but I’ve omitted changes that were purely internal refactoring with no externally-visible effects.

    Answers

    • Hide questions on inactive projects from the results of non-pillar-specific searches

    Blueprints

    • Optimise the main query on Person:+upcomingwork (#1692120)
    • Apply the spec privacy check on Person:+upcomingwork only to relevant specs (#1696519)
    • Move base clauses for specification searches into a CTE to avoid slow sequential scans

    Bugs

    • Switch to HTTPS for CVE references
    • Fix various failures to sync from Red Hat’s Bugzilla instance (#1678486)

    Build farm

    • Send the necessary set of archive signing keys to builders (#1626739)
    • Hide the virt/nonvirt queue portlets on BuilderSet:+index if they’d be empty
    • Add a feature flag which can be used to prevent dispatching any build under a given minimum score
    • Write files fetched from builders to a temporary name, and only rename them into place on success
    • Emit the build URL at the start of build logs

    Code

    • Fix crash when scanning a Git-based MP when we need to link a new RevisionAuthor to an existing Person (#1693543)
    • Add source ref name to breadcrumbs for Git-based MPs; this gets the ref name into the page title, which makes it easier to find Git-based MPs in browser history
    • Allow registry experts to delete recipes
    • Explicitly mark the local apt archive for recipe builds as trusted (#1701826)
    • Set +code as the default view on the code layer for (Person)DistributionSourcePackage
    • Improve handling of branches with various kinds of partial data
    • Add and export BranchMergeProposal.scheduleDiffUpdates (#483945)
    • Move “Updating repository…” notice above the list of branches so that it’s harder to miss (#1745161)
    • Upgrade to Pygments 2.2.0, including better formatting of *.md files (#1740903)
    • Sort cancelled-before-starting recipe builds to the end of the build history (#746140)
    • Clean up the {Branch,GitRef}:+register-merge UI slightly
    • Optimise merge detection when the branch has no landing candidates

    Infrastructure

    • Use correct method separator in Allow headers (#1717682)
    • Optimise lp_sitecustomize so that bin/py starts up more quickly
    • Add a utility to make it easier to run Launchpad code inside lxc exec
    • Convert lp-source-dependencies to git
    • Remove the post-webservice-GET commit
    • Convert build system to virtualenv and pip, unblocking many upgrades of dependencies
    • Use eslint to lint JavaScript files
    • Tidy up various minor problems in the top-level Makefile (#483782)
    • Offering ECDSA or Ed25519 SSH keys to Launchpad SSH servers no longer causes a hang, although it still isn’t possible to use them for authentication (#830679)
    • Reject SSH public keys that Twisted can’t load (#230144)
    • Backport GPGME file descriptor handling improvements to fix timeouts importing GPG keys (#1753019)
    • Improve OOPSes for jobs
    • Switch the site-wide search to Bing Custom Search, since Google Site Search has been discontinued
    • Don’t send email to direct recipients without active accounts

    Registry

    • Fix the privacy banner on PersonProduct pages
    • Show GPG fingerprints rather than collidable short key IDs (#1576142)
    • Fix PersonSet.getPrecachedPersonsFromIDs to handle teams with mailing lists
    • Optimise front page, mainly by gathering more statistics periodically rather than on the fly
    • Construct public keyserver links using HTTPS without an explicit port (#1739110)
    • Fall back to emailing the team owner if the team has no admins (#1270141)

    Snappy

    • Log some useful information from authorising macaroons while uploading snaps to the store, to make it easier to diagnose problems
    • Extract more useful error messages when snap store operations fail (#1650461, #1687068)
    • Send mail rather than OOPSing if refreshing snap store upload macaroons fails (#1668368)
    • Automatically retry snap store upload attempts that return 502 or 503
    • Initialise git submodules in snap builds (#1694413)
    • Make SnapStoreUploadJob retries go via celery and be much more responsive (#1689282)
    • Run snap builds in LXD containers, allowing them to install snaps as build-dependencies
    • Allow setting Snap.git_path directly on the webservice
    • Batch snap listing views (#1722562)
    • Fix AJAX update of snap builds table to handle all build statuses
    • Set SNAPCRAFT_BUILD_INFO=1 to tell snapcraft to generate a manifest
    • Only emit snap:build:0.1 webhooks from SnapBuild.updateStatus if the status has changed
    • Expose extended error messages (with external link) for snap build jobs (#1729580)
    • Begin work on allowing snap builds to install snapcraft as a snap; this can currently be set up via the API, and work is in progress to add UI and to migrate to this as the default (#1737994)
    • Add an admin option to disable external network access for snap builds
    • Export ISnapSet.findByOwner on the webservice
    • Prefer Snap.store_name over Snap.name for the “name” argument dispatched to snap builds
    • Pass build URL to snapcraft using SNAPCRAFT_IMAGE_INFO
    • Add an option to build source tarballs for snaps (#1763639)

    Soyuz (package management)

    • Stop SourcePackagePublishingHistory.getPublishedBinaries materialising rows outside the current batch; this fixes webservice timeouts for sources with large numbers of binaries (#1695113)
    • Implement proxying of PackageUpload binary files via the webapp, since DistroSeries:+queue now assumes that that works (#1697680)
    • Truncate signing key common-names to 64 characters (#1608615)
    • Allow setting a relative build score on live filesystems (#1452543)
    • Add signing support for vmlinux for use on ppc64el Opal (and compatible) firmware
    • Run live filesystem builds in LXD containers, allowing them to install snaps as build-dependencies
    • Accept a “debug” entry in live filesystem build metadata, which enables detailed live-build debugging
    • Accept and ignore options (e.g. [trusted=yes]) in sources.list lines passed via external_dependencies
    • Send proper email notifications about most failures to parse the .changes file (#499438)
    • Ensure that PPA .htpasswd salts are drawn from the correct alphabet (#1722209)
    • Improve DpkgArchitectureCache‘s timeline handling, and speed it up a bit in some cases (#1062638)
    • Support passing a snap channel into a live filesystem build through the environment
    • Add support for passing apt proxies to live-build
    • Allow anonymous launchpad.View on IDistributionSourcePackage
    • Handle queries starting with “ppa:” when searching the PPA vocabulary
    • Make PackageTranslationsUploadJob download librarian files to disk rather than memory
    • Send email notifications when an upload is signed with an expired key
    • Add Release, Release.gpg, and InRelease to by-hash directories
    • After publishing a custom file, mark its target suite as dirty so that it will be published (#1509026)

    Translations

    • Fix text_to_html to not parse HTML as a C format string
    • Fall back to the package name from AC_INIT when expanding $(PACKAGE) in translation configuration files if no other definition can be found

    Miscellaneous

    • Show a search icon for pickers where possible rather than “Choose…”
    on May 01, 2018 06:19 PM

    Congratulations to Ubuntu and Fedora on their latest releases.

    This Fedora 28 release is special because it is believed to be the first release in Fedora’s long history to ship exactly when it was originally scheduled.

    The Ubuntu 18.04 LTS release is the biggest release for the Ubuntu Desktop in 5 years as it returns to a lightly customized GNOME desktop. For reference, the biggest changes from vanilla GNOME are the custom Ambiance theme and the inclusion of the popular AppIndicator and Dock extensions (the Dock extension being a simplified version of the famous Dash to Dock). Maybe someday I could do a post about the smaller changes.

    I think one of the more interesting occurrences for fans of Linux desktops is that these releases of two of the biggest Linux distributions occurred within days of each other. I expect this alignment to continue (although maybe not quite as dramatically as this time), since the Fedora and Ubuntu beta releases will happen at similar times and I expect Fedora won’t slip far from its intended release dates again.

    on May 01, 2018 03:32 PM

    April 29, 2018

    I find myself talking about these pretty frequently, and it seems many people have never actually heard about them, so a blog post seems appropriate.

    Window managers traditionally present (for “advanced” users) “virtual” desktops and/or “multiple” desktops. Different window managers will have slightly different implementations and terminology, but typically I think of virtual desktops as being an MxN matrix of screen-sized desktops, and multiple desktops as being some number of disjoint MxN matrices. (In some cases there are only multiple 1×1 desktops) If you’re a MacOS user, I believe you’re limited to a linear array (say, 5 desktops), but even tvtwm back in the early 90s did matrices. In the late 90s Enlightenment offered a cool way of combining virtual and multiple desktops: As usual, you could go left/right/up/down to switch between virtual desktops, but in addition you had a bar across one edge of the screen which you could use to drag the current desktop so as to reveal the underlying desktop. Then you could do it again to see the next underlying one, etc. So you could peek and move windows between the multiple desktops.

    Now, if you are using a tiling window manager like dwm, wmii, or awesome, you may think you have the same kinds of virtual desktops. But in fact what you have is a clever ‘tagged view’ implementation. This lets you pretend that you have virtual desktops, but tagged views are far more powerful.

    In a tagged view, you define ‘tags’ (like desktop names), and assign one or more tags to each window. Your current screen is a view of one or more tags. In this way you can dynamically switch the set of windows displayed.

    For instance, you could assign tag ‘1’ or ‘mail’ to your mail window; ‘2’ or ‘web’ to your browser; ‘3’ or ‘work’ as well as ‘1’ to one terminal, and ‘4’ or ‘notes’ to another terminal. Now if you view tag ‘1’, you will see the mail and first terminal; if you view 1+2, you will see those plus your browser. If you view 2+3, you will see the browser and first terminal but not the mail window.
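    In dwm, for example, this is all expressed in a few lines of config.h: tags are an array of labels, and each window’s tag set is a bitmask. A minimal sketch (the tag names and window classes here are illustrative, and the Rule type is abbreviated from dwm.c):

    typedef struct {
      const char *class, *instance, *title;
      unsigned int tags;
      int isfloating;
      int monitor;
    } Rule;

    static const char *tags[] = { "mail", "web", "work", "notes" };

    static const Rule rules[] = {
      /* class          instance  title  tags mask            isfloating  monitor */
      { "Thunderbird",  NULL,     NULL,  1 << 0,              0,          -1 },
      { "Firefox",      NULL,     NULL,  1 << 1,              0,          -1 },
      { "XTerm",        NULL,     NULL,  (1 << 0) | (1 << 2), 0,          -1 },
    };

    Viewing is then just another bitmask: selecting tags 1+2 ORs their masks together, and every window whose tag set intersects the view is shown.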

    As you can see, if you don’t know about this, you can continue to use tagged views as though they were simply multiple desktops. But you can’t get this flexibility with regular multiple or virtual desktops.

    (This may be a case where a video would be worth more than a bunch of text.)

    So in conclusion – when I complain about the primitive window manager on MacOS, I’m not just name-calling. A four-finger gesture to expose the 1xN virtual desktops just isn’t nearly as useful as being able to precisely control which windows I see together on the desktop.

    on April 29, 2018 04:54 PM

    April 27, 2018

    issue 132

    Full Circle Magazine

    This month:
    * Command & Conquer
    * How-To : Python, Freeplane, and Ubuntu Touch
    * Graphics : Inkscape
    * Everyday Linux
    * Researching With Linux
    * My Opinion
    * My Story
    * Book Review: Cracking Codes With Python
    * Ubuntu Games: Dwarf Fortress
    plus: News, Q&A, and much more.


    on April 27, 2018 07:33 PM
    Thanks to all the hard work from our contributors, Lubuntu 18.04 LTS has been released! With the codename Bionic Beaver, Lubuntu 18.04 LTS is the 14th release of Lubuntu, with support until April of 2021. What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal […]
    on April 27, 2018 06:14 AM
    We are happy to announce the release of our latest version, Ubuntu Studio 18.04 Bionic Beaver! Unlike the other Ubuntu flavors, this release of Ubuntu Studio is not a Long-Term Support (LTS) release. As a regular release, it will be supported for 9 months. Although it is not a Long-Term Support release, it is still […]
    on April 27, 2018 05:11 AM