December 12, 2018

The Linux kernel I/O schedulers attempt to balance the need for the best possible I/O performance with the need to share I/O "fairly" among the I/O consumers.  There are several I/O schedulers in Linux, each trying to solve the I/O scheduling problem using different mechanisms/heuristics, and each with its own set of strengths and weaknesses.

For traditional spinning media it makes sense to try to order I/O operations so that they are close together, to reduce read/write head movement and hence decrease latency.  However, this reordering means that some I/O requests may get delayed; the usual solution is to dispatch these delayed requests after a specific maximum wait time.   Faster non-volatile memory devices can generally handle random I/O requests very easily and hence do not require reordering.

Balancing fairness is also an interesting issue.  A greedy I/O consumer should not block other I/O consumers, and various heuristics are used to determine the fair sharing of I/O.  Generally, the more complex and "fairer" the solution, the more compute is required, so selecting a very fair I/O scheduler on a system with a fast I/O device and a slow CPU may not necessarily perform as well as a simpler I/O scheduler.

Finally, the types of I/O patterns on the I/O devices influence the I/O scheduler choice, for example, mixed random read/writes vs mainly sequential reads and occasional random writes.

Because of this mix of requirements, there is no such thing as a perfect all-round I/O scheduler.  The defaults are chosen to be a good choice for the general user, but they may not match everyone's needs.   To clarify the options, the Ubuntu Kernel Team has provided a Wiki page describing the choices and how to select and tune the various I/O schedulers.  Caveat emptor applies: these are just guidelines and should be used as a starting point for finding the best I/O scheduler for your particular needs.
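
For reference, the scheduler in use for a given disk can be inspected and changed at runtime through sysfs. A minimal sketch, assuming a disk named sda; the set of available schedulers varies with the kernel version and with whether the device uses the multiqueue block layer:

# Show the available schedulers; the active one appears in brackets
cat /sys/block/sda/queue/scheduler

# Switch the active scheduler until the next reboot
echo deadline | sudo tee /sys/block/sda/queue/scheduler
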
on December 12, 2018 09:55 AM

December 11, 2018

Public speaking is an art form. There are some amazing speakers, such as Lawrence Lessig, Dawn Wacek, Rory Sutherland, and many more. There are also some boring, rambling disasters that clog up meetups, conferences, and company events.

I don’t claim to be an expert in public speaking, but I have had the opportunity to do a lot of it, including keynotes, presentation sessions, workshops, tutorials, and more. Over the years I have picked up some best practices and I thought I would share some of them here. I would love to hear your recommendations too, so pop them in the comments.

1. Produce Clean Slides

Great talks are a mixture of simple, effective slides and a dynamic, engaging speaker. If one part of this combination overloads the audience with information, the other part gets ignored.

The primary focus should be you and your words. Your #1 goal is to weave together an interesting story that captivates your audience. 

Your slides should simply provide a visual tool that helps get your words across more effectively. Your slides are not the lead actor, they are the supporting cast.

Avoid extensive amounts of text and paragraphs. Focus on diagrams, pictures, and simple lists.

Good:

Bad:

Notice how I took my company logo off, just in case someone swipes it and thinks that I actually like to make slides like this. 🙂

Look at the slides of great speakers to get your creativity flowing.

2. Deliver Pragmatic Information

Keynotes are designed for the big ideas that set the stage for a conference. Regular talks are designed to get across key concepts that can help the audience expand their capabilities.

With both, give your audience information they can pragmatically use. How many times have you left a talk and thought, “Well, that was neat, but, er…how the hell do I start putting those concepts into action?”

You don’t have to have all the answers, but you need to package up your ideas in a way that is easy to consume in the real world, not just on a stage.

Diagrams, lists, and step-by-step instructions work well. Make these higher level for the keynotes and more in-depth for the regular talks. Avoid abstract, generic ideas: they are unsatisfying and boring.

3. Build and Relieve Tension

Great movies and TV shows build a sense of tension (e.g. a character in a hostage situation) and the payoff is when that tension is relieved (e.g. the character gets rescued.)

Take a similar approach in your talks. Become vulnerable. Share times when you struggled, got things wrong, or made mistakes. Paint a picture of the low point and what was running through your mind.

Then, relieve the tension by sharing how you overcame it, bringing your audience along for the ride. This makes your presentation dynamic and interesting, and makes it clear that you are not perfect either, which helps build a closer connection with the audience. Speaking of which…

4. Loosen Up and Be Yourself

Far too many speakers deliver their presentations like they have a rod up their backside.

Formal presentations are boring. Presentations where the speaker feels comfortable in their own skin and is able to identify with the audience are much more interesting.

For example, I was delivering a presentation to a financial services firm a few months ago. I wove into it stories about my family, my love of music, travel experiences, and other elements that made it more personal. After the session a number of audience members came over and shared how refreshing it was to see a more approachable presentation in a world that is typically so formal.

Your goal is to build a connection with your audience. To do this well they need to feel you are on the same level. Speak like them, share stories that relate to them, and they will give you their attention, which is all you can ask for.

5. Involve Your Audience (but not too much)

There is a natural barrier between you and your audience. We are wired to know that the social context of a presentation means the speaker does the talking and the audience does the listening. Anyone who violates this norm (for example, by heckling) is perceived as an asshole.

You need to break down this barrier, but never cede control to your audience. If you lose control and make interrupting the social norm, your presentation will be riddled with audience noise.

Give them very specific ways to participate, such as:

  • Ask how they are doing at the beginning of a talk.
  • Throw out questions and invite them to put their hands up (or clap loudly.)
  • Invite someone to volunteer for something (such as a role play scenario.)
  • Take and answer questions.

6. Keep Your Ego in Check

We have all seen it. A speaker is welcomed to the stage and they constantly remind you about how great they are, the awards they have won, and how (allegedly) inspirational they are. In some cases this is blunt-force ego, in some cases it is a humblebrag. In both cases it sucks.

Be proud of your work and be great at it, but let the audience sing your praises, not you. Ego can have a damaging impact on your presentation and how you are perceived. This can drive a wedge between you and your audience.

7. Don’t Rush, but Stay on Time

We live in a multicultural world in which we travel a lot. You are likely to have an audience from all over the world, speaking many different languages, and from a variety of backgrounds. Speaking at a million words a minute will make understanding you very difficult for some people.

Speak at a comfortable pace, and don’t rush it. Now, some of you will be natural fast-talkers, and will need to practice this. Remember these?

Well, we now all have them on our phones. Switch it on, practice, and ensure you always finish at least a few minutes before your allocated time. This will give you a buffer.

Running over your allocated time is a sure-fire way to annoy (a) the other speakers who may have to cut their time short, and (b) the event organizer who has to deal with overruns in the schedule. “But it only went over by a few minutes!” Sure, but when everyone does this, entire events get way behind schedule. Don’t be that person.

8. Practice and get Honest Feedback

We only get better when we practice and can see our blind spots. Both are essential for getting good at public speaking.

Start simple. Speak at your local meetups, community events, and other gatherings. Practice, get comfortable, and then submit talk proposals to conferences and other events. Keep practicing, and keep refining.

Critique is essential here. Ask close friends to sit in on your talks and give you blunt feedback afterwards. What went well? What didn’t go well? Be explicit in inviting criticism and don’t overreact when you get it. You want critical feedback…about your slides, your content, your pacing, your hand gestures…the lot. I have had some very blunt feedback over the years and it has always improved my presentations.

9. Never Depend on Conference WiFi

It rarely works well, simple as that.

Oh, and your mobile hotspot may not work either, as many conference centers seem to be built as borderline Faraday cages. Next…

10. Remember, it is just a Presentation

Some people get a little wacky when it comes to perfecting presentations and public speaking. I know some people who have spent weeks preparing and refining their talks, often getting into a tailspin about imperfections that need to be improved.

The most important thing to worry about is the content. Is it interesting? Is it valuable? Does it enrich your audience? People are not going to remember the minute details of how you said something, what your slides looked like, or whether you blinked too much. They will remember the content and ideas: focus on that.

Oh, and a bonus 11th: turn off animations. They are great in the hands of an artisan, but for most of us they look tacky and awful.

I am purely scratching the surface here and I would love to hear your suggestions of public speaking tips and recommendations. Share them in the comments! Oh and be sure to join as a member, which is entirely free.

The post 10 Ways To Up Your Public Speaking Game appeared first on Jono Bacon.

on December 11, 2018 04:00 PM

Welcome to the Ubuntu Weekly Newsletter, Issue 556 for the weeks of November 25 – December 8, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on December 11, 2018 12:45 AM

December 10, 2018

I am excited to share that I will be heading to Tel Aviv later this month to speak at a few events. I wanted to share a few details here, and I hope to see you there!

DevOps Days Tel Aviv

DevOps Days Tel Aviv, Tues 18 December 2018 + Wed 19 December 2018, at Tel Aviv Convention Center, 101 Rokach Blvd, Tel Aviv, Israel.

I am delivering the opening keynote on Tuesday 18th December 2018 at 9am.

Get Tickets

Meetup: Building Technical Communities That Scale

Thu 20th Dec 2018 at 9am at RISE, Ahad Ha’Am St 54, 54 Ahad Ha’Am Street, Tel Aviv-Yafo, Tel Aviv District, Israel.

I will be delivering a talk and participating in a panel (which includes Fred Simon, Chief Architect of JFrog, Shimon Tolts, CTO of Datree, and Demi Ben Ari, VP R&D of Panorays.)

Get Tickets (Space is limited, so grab tickets ASAP)

I popped a video about this online earlier this week. Check it out:

I hope to see many of you there!

The post Speaking Engagements in Tel Aviv in December appeared first on Jono Bacon.

on December 10, 2018 07:00 AM

December 09, 2018

I’ve heard a surprising “fact” repeated in the CHI and CSCW communities: that receiving a best paper award at a conference is uncorrelated with future citations. Although it’s surprising and counterintuitive, it’s a nice thing to think about when you don’t get an award and a nice thing to say to others when you do. I’ve thought it and said it myself.

It also seems to be untrue. When I tried to check the “fact” recently, I found a body of evidence that suggests that computing papers that receive best paper awards are, in fact, cited more often than papers that do not.

The source of the original “fact” seems to be a CHI 2009 study by Christoph Bartneck and Jun Hu titled “Scientometric Analysis of the CHI Proceedings.” Among many other things, the paper presents a null result for a test of a difference in the distribution of citations across best paper awardees, nominees, and a random sample of non-nominees.

Although the award analysis is only a small part of Bartneck and Hu’s paper, at least two papers have subsequently brought more attention, more data, and more sophisticated analyses to the question.  In 2015, the question was asked by Jacques Wainer, Michael Eckmann, and Anderson Rocha in their paper “Peer-Selected ‘Best Papers’—Are They Really That ‘Good’?”

Wainer et al. built two datasets: one of papers from 12 computer science conferences with citation data from Scopus, and another of papers from 17 different conferences with citation data from Google Scholar. Because of parametric concerns, Wainer et al. used a non-parametric rank-based technique to compare awardees to non-awardees.  They summarize their results as follows:

The probability that a best paper will receive more citations than a non best paper is 0.72 (95% CI = 0.66, 0.77) for the Scopus data, and 0.78 (95% CI = 0.74, 0.81) for the Scholar data. There are no significant changes in the probabilities for different years. Also, 51% of the best papers are among the top 10% most cited papers in each conference/year, and 64% of them are among the top 20% most cited.
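
For readers unfamiliar with the statistic being quoted: a probability like this is the common-language effect size implied by a rank-based (Mann-Whitney style) comparison. As a sketch of the arithmetic, not a claim about Wainer et al.'s exact estimator: if $U$ counts the awardee/non-awardee pairs in which the awardee has more citations, and $n_1$ and $n_2$ are the group sizes, then, ignoring ties,

$$\Pr(X_{\text{award}} > X_{\text{no award}}) \approx \frac{U}{n_1 n_2},$$

so 0.72 means that in roughly 72% of such pairs the award-winning paper is the more cited one.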

The question was also recently explored in a different way by Danielle H. Lee in her paper on “Predictive power of conference‐related factors on citation rates of conference papers” published in June 2018.

Lee looked at 43,000 papers from 81 conferences and built a regression model to predict citations. Taking into account a number of controls not considered in previous analyses, Lee finds that the marginal effect of receiving a best paper award on citations is positive, well-estimated, and large.

Why did Bartneck and Hu come to such a different conclusion than the later work?

Distribution of citations (received by 2009) of CHI papers published between 2004-2007 that were nominated for a best paper award (n=64), received one (n=12), or were part of a random sample of papers that did not (n=76).

My first thought was that perhaps CHI is different from the rest of computing. However, looking at the data from Bartneck and Hu’s 2009 study—conveniently included as a figure in their original paper—you can see that they did find a higher mean among the award recipients compared to both nominees and non-nominees. The entire distribution of citations among award winners appears to be pushed upwards. Although Bartneck and Hu found an effect, they did not find a statistically significant effect.

Given the more recent work by Wainer et al. and Lee, I’d be willing to venture that the original null finding was a function of the fact that citation count is a very noisy measure—especially over a 2-5 year post-publication period—and that the Bartneck and Hu dataset was small, with only 12 awardees out of 152 papers total. This might have caused problems because the statistical test the authors used was an omnibus test for differences in a three-group sample that was imbalanced heavily toward the two groups (nominees and non-nominees) in which there appears to be little difference. My bet is that the paper’s conclusion on awards is simply an example of how a null effect is not evidence of a non-effect—especially in an underpowered dataset.

Of course, none of this means that award-winning papers are better. Despite Wainer et al.’s claim that they are showing that award-winning papers are “good,” none of the analyses presented can disentangle the signalling value of an award from differences in underlying paper quality. The packed rooms one routinely finds at best paper sessions at conferences suggest that at least some of the additional citations received by award winners might come from the extra exposure the awards themselves provide. In the future, perhaps people can say something along these lines instead of repeating the “fact” of the non-relationship.


on December 09, 2018 08:20 PM

December 07, 2018

Lately at Crossbar.io, we have been using PySide2 for an internal project. Last week it reached a milestone and I am now in the process of code cleanup and refactoring, as we had to rush quite a few things for that deadline. We also build a snap package for the project; our previous approach was to ship the whole PySide2 runtime (170MB+) with the snap. It worked, but the build was slow, because each new snap build involved downloading PySide2 from PyPI and installing some deb dependencies.

So I decided to play with the content interface and cooked up a new snap that is now published to the Snap Store. This definitely resulted in an overall size reduction of the snap, but at the same time it opens up a lot of different opportunities for app development on the Linux desktop.

I created a 'Hello World' snap that is just 8Kb in size, since it doesn't include any dependencies with it; they are provided by the pyside2 snap. I am currently working on a very simple "sound recorder" app using PySide and will publish it to the Snap Store.
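
For anyone curious how an app snap consumes a runtime shared this way, here is a minimal sketch of the consumer side of the content interface in snapcraft.yaml. The plug name, target path, and provider name are illustrative assumptions, not the actual configuration of my snaps:

plugs:
  pyside2-runtime:
    interface: content            # snapd's content interface
    target: $SNAP/pyside2         # where the shared files appear at run time
    default-provider: pyside2     # assumed name of the snap providing the runtime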

With the pyside2 snap installed, we can probably export a few environment variables to make the runtime available outside of the snap environment, for someone who is developing an app on their computer.
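
Something along these lines might work for that; the paths below are hypothetical and depend on how the runtime snap lays out its files and which Python version it targets:

# Hypothetical: point Python and the dynamic loader at the runtime inside the snap
export PYTHONPATH=/snap/pyside2/current/lib/python3/site-packages:$PYTHONPATH
export LD_LIBRARY_PATH=/snap/pyside2/current/usr/lib:$LD_LIBRARY_PATH
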
on December 07, 2018 05:11 PM

December 06, 2018

www.kde.org

Jonathan Riddell

It’s not uncommon to come across some dusty corner of KDE which hasn’t been touched in ages and has only half-implemented features. One of the joys of KDE is being able to plunge in and fix any such problem areas. But it’s quite a surprise when a high-profile area of KDE ends up unmaintained. www.kde.org is one such area and it was getting embarrassing. In February 2016 we had a sprint where a new theme was rolled out on the main pages, making the website look fresh and act responsively on mobiles, but since then, for various failures of management, nothing has happened. So while the neon build servers were down for shuffling to a new machine, I looked into why Plasma release announcements were updated but not Frameworks or Applications announcements. I’d automated Plasma announcements a while ago but it turns out the other announcements are still done manually, so I updated those and poked the people involved. Then of course I got stuck looking at all the other pages which hadn’t been ported to the new theme. On review there were not actually too many of them; if you ignore the announcements, the website is not very large.

Many of the pages could be just forwarded to more recent equivalents such as getting the history page (last update in 2003) to point to timeline.kde.org or the presentation slides page (last update for KDE 4 release) to point to a more up to date wiki page.

Others are worth reviving, such as the KDE screenshots page, press contacts, and support page. The contents could still do with some pondering on what is useful, but while they exist we shouldn’t pretend they don’t, so I updated those and added back links to them.

While many of these pages are hard to find or not linked at all from www.kde.org, they are still the top hits in Google when you search for “KDE presentation”, “kde history” or “kde support”, so it is worth not looking like we are a dead project.

There were also obvious bugs that needed fixing, for example the cookie-opt-out banner didn’t let you opt out, the font didn’t get loaded, and the favicon was inconsistent.

All of these are easy enough fixes, but the technical barrier is too high to get them done easily (you need special permission to have access to www.kde.org, reasonably enough) and the social barrier is far too high (you will get complaints when changing something high profile like this; it's far easier to just let it rot). I’m not sure how to solve this, but KDE should work out a way to allow project maintenance tasks like this to be more open.

Anyway, yay: www.kde.org now has the new theme everywhere (except old announcements) and the pages have up-to-date content.

There is a TODO item to track website improvements if you’re interested in helping, although it missed the main one, which is the stalled port to WordPress -- again a place where it just needs someone to plunge in and do the work. It’s satisfying because it’s a high-profile improvement, but alas it highlights some failings in a mature community project like ours.

on December 06, 2018 04:44 PM

S11E39 – The Thirty-Nine Steps

Ubuntu Podcast from the UK LoCo

This week we’ve been flashing devices and getting a new display. We discuss Huawei developing its own mobile OS, Steam Link coming to the Raspberry Pi, Epic Games launching their own digital store, and we round up the community news.

It’s Season 11 Episode 39 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on December 06, 2018 03:00 PM

S01E14 – Dos oito, aos oitenta

Podcast Ubuntu Portugal

With our thoughts already on 2019, without forgetting the festive season, in this episode - which is back to coming out on Thursdays!!! - we talk about presents, home automation, and revivalism. You know the drill: listen, subscribe, and share!

Sponsors

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios - sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licenses

Cover image: Nick Hobgood, licensed under CC BY-SA.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License.

This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on December 06, 2018 01:13 PM

December 05, 2018

Banana Peels

Benjamin Mako Hill

Photo comic of seeing a banana peel in the road while on a bike.

Although it’s been decades since I last played, I still get flashbacks to Super Mario Kart and pangs of irrational fear every time I see a banana peel in the road.

on December 05, 2018 04:25 AM

December 04, 2018

Deploying Swift

Colin Watson

Sometimes I want to deploy Swift, the OpenStack object storage system.

Well, no, that’s not true. I basically never actually want to deploy Swift as such. What I generally want to do is to debug some bit of production service deployment machinery that relies on Swift for getting build artifacts into the right place, or maybe the parts of the Launchpad librarian (our blob storage service) that use Swift. I could find an existing private or public cloud that offers the right API and test with that, but sometimes I need to test with particular versions, and in any case I have a terribly slow internet connection and shuffling large build artifacts back and forward over the relevant bit of wet string makes it painfully slow to test things.

For a while I’ve had an Ubuntu 12.04 VM lying around with an Icehouse-based Swift deployment that I put together by hand. It works, but I didn’t keep good notes and have no real idea how to reproduce it, not that I really want to keep limping along with manually-constructed VMs for this kind of thing anyway; and I don’t want to be dependent on obsolete releases forever. For the sorts of things I’m doing I need to make sure that authentication works broadly the same way as it does in a real production deployment, so I want to have Keystone too. At the same time, I definitely don’t want to do anything close to a full OpenStack deployment of my own: it’s much too big a sledgehammer for this particular nut, and I don’t really have the hardware for it.

Here’s my solution to this, which is compact enough that I can run it on my laptop, and while it isn’t completely automatic it’s close enough that I can spin it up for a test and discard it when I’m finished (so I haven’t worried very much about producing something that runs efficiently). It relies on Juju and LXD. I’ve only tested it on Ubuntu 18.04, using Queens; for anything else you’re on your own. In general, I probably can’t help you if you run into trouble with the directions here: this is provided “as is”, without warranty of any kind, and all that kind of thing.

First, install Juju and LXD if necessary, following the instructions provided by those projects, and also install the python-openstackclient package as you’ll need it later. You’ll want to set Juju up to use LXD, and you should probably make sure that the shells you’re working in don’t have http_proxy set as it’s quite likely to confuse things unless you’ve arranged for your proxy to be able to cope with your local LXD containers. Then add a model:

juju add-model swift

At this point there’s a bit of complexity that you normally don’t have to worry about with Juju. The swift-storage charm wants to mount something to use for storage, which with the LXD provider in practice ends up being some kind of loopback mount. Unfortunately, being able to perform loopback mounts exposes too much kernel attack surface, so LXD doesn’t allow unprivileged containers to do it. (Ideally the swift-storage charm would just let you use directory storage instead.) To make the containers we’re about to create privileged enough for this to work, run:

lxc profile set juju-swift security.privileged true
lxc profile device add juju-swift loop-control unix-char \
    major=10 minor=237 path=/dev/loop-control
for i in $(seq 0 255); do
    lxc profile device add juju-swift loop$i unix-block \
        major=7 minor=$i path=/dev/loop$i
done

Now we can start deploying things! Save this to a file, e.g. swift.bundle:

series: bionic
description: "Swift in a box"
applications:
  mysql:
    charm: "cs:mysql-62"
    channel: candidate
    num_units: 1
    options:
      dataset-size: 512M
  keystone:
    charm: "cs:keystone"
    num_units: 1
  swift-storage:
    charm: "cs:swift-storage"
    num_units: 1
    options:
      block-device: "/etc/swift/storage.img|5G"
  swift-proxy:
    charm: "cs:swift-proxy"
    num_units: 1
    options:
      zone-assignment: auto
      replicas: 1
relations:
  - ["keystone:shared-db", "mysql:shared-db"]
  - ["swift-proxy:swift-storage", "swift-storage:swift-storage"]
  - ["swift-proxy:identity-service", "keystone:identity-service"]

And run:

juju deploy swift.bundle

This will take a while. You can run juju status to see how it’s going in general terms, or juju debug-log for detailed logs from the individual containers as they’re putting themselves together. When it’s all done, it should look something like this:

Model  Controller  Cloud/Region     Version  SLA
swift  lxd         localhost        2.3.1    unsupported

App            Version  Status  Scale  Charm          Store       Rev  OS      Notes
keystone       13.0.1   active      1  keystone       jujucharms  290  ubuntu
mysql          5.7.24   active      1  mysql          jujucharms   62  ubuntu
swift-proxy    2.17.0   active      1  swift-proxy    jujucharms   75  ubuntu
swift-storage  2.17.0   active      1  swift-storage  jujucharms  250  ubuntu

Unit              Workload  Agent  Machine  Public address  Ports     Message
keystone/0*       active    idle   0        10.36.63.133    5000/tcp  Unit is ready
mysql/0*          active    idle   1        10.36.63.44     3306/tcp  Ready
swift-proxy/0*    active    idle   2        10.36.63.75     8080/tcp  Unit is ready
swift-storage/0*  active    idle   3        10.36.63.115              Unit is ready

Machine  State    DNS           Inst id        Series  AZ  Message
0        started  10.36.63.133  juju-d3e703-0  bionic      Running
1        started  10.36.63.44   juju-d3e703-1  bionic      Running
2        started  10.36.63.75   juju-d3e703-2  bionic      Running
3        started  10.36.63.115  juju-d3e703-3  bionic      Running

At this point you have what should be a working installation, but with only administrative privileges set up. Normally you want to create at least one normal user. To do this, start by creating a configuration file granting administrator privileges (this one comes verbatim from the openstack-base bundle):

_OS_PARAMS=$(env | awk 'BEGIN {FS="="} /^OS_/ {print $1;}' | paste -sd ' ')
for param in $_OS_PARAMS; do
    if [ "$param" = "OS_AUTH_PROTOCOL" ]; then continue; fi
    if [ "$param" = "OS_CACERT" ]; then continue; fi
    unset $param
done
unset _OS_PARAMS

_keystone_unit=$(juju status keystone --format yaml | \
    awk '/units:$/ {getline; gsub(/:$/, ""); print $1}')
_keystone_ip=$(juju run --unit ${_keystone_unit} 'unit-get private-address')
_password=$(juju run --unit ${_keystone_unit} 'leader-get admin_passwd')

export OS_AUTH_URL=${OS_AUTH_PROTOCOL:-http}://${_keystone_ip}:5000/v3
export OS_USERNAME=admin
export OS_PASSWORD=${_password}
export OS_USER_DOMAIN_NAME=admin_domain
export OS_PROJECT_DOMAIN_NAME=admin_domain
export OS_PROJECT_NAME=admin
export OS_REGION_NAME=RegionOne
export OS_IDENTITY_API_VERSION=3
# Swift needs this:
export OS_AUTH_VERSION=3
# Gnocchi needs this
export OS_AUTH_TYPE=password

Source this into a shell: for instance, if you saved this to ~/.swiftrc.juju-admin, then run:

. ~/.swiftrc.juju-admin

You should now be able to run openstack endpoint list and see a table for the various services exposed by your deployment. Then you can create a dummy project and a user with enough privileges to use Swift:

USERNAME=your-username
PASSWORD=your-password
openstack domain create SwiftDomain
openstack project create --domain SwiftDomain --description Swift \
    SwiftProject
openstack user create --domain SwiftDomain --project-domain SwiftDomain \
    --project SwiftProject --password "$PASSWORD" "$USERNAME"
openstack role add --project SwiftProject --user-domain SwiftDomain \
    --user "$USERNAME" Member

(This is intended for testing rather than for doing anything particularly sensitive. If you cared about keeping the password secret then you’d use the --password-prompt option to openstack user create instead of supplying the password on the command line.)

Now create a configuration file granting privileges for the user you just created. I felt like automating this to at least some degree:

touch ~/.swiftrc.juju
chmod 600 ~/.swiftrc.juju
sed '/^_password=/d;
     s/\( OS_PROJECT_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_PROJECT_NAME=\).*/\1SwiftProject/;
     s/\( OS_USER_DOMAIN_NAME=\).*/\1SwiftDomain/;
     s/\( OS_USERNAME=\).*/\1'"$USERNAME"'/;
     s/\( OS_PASSWORD=\).*/\1'"$PASSWORD"'/' \
     <~/.swiftrc.juju-admin >~/.swiftrc.juju

Source this into a shell. For example:

. ~/.swiftrc.juju

You should now find that swift list works. Success! Now you can swift upload files, or just start testing whatever it was that you were actually trying to test in the first place.
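
For example, a quick smoke test of the new credentials might look like this; the container and file names are just placeholders:

swift post testcontainer                  # create a container
swift upload testcontainer artifact.bin   # upload an object
swift list testcontainer                  # confirm it arrived
swift download testcontainer artifact.bin -o /tmp/artifact.bin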

This is not a setup I expect to leave running for a long time, so to tear it down again:

juju destroy-model swift

This will probably get stuck trying to remove the swift-storage unit, since nothing deals with detaching the loop device. If that happens, find the relevant device in losetup -a from another window and use losetup -d to detach it; juju destroy-model should then be able to proceed.

Credit to the Juju and LXD teams and to the maintainers of the various charms used here, as well as of course to the OpenStack folks: their work made it very much easier to put this together.

on December 04, 2018 01:37 AM

December 03, 2018

My home automation plans have been progressing and I'd like to share some observations I've made about planning a project like this, especially for those with larger houses.

With so many products and technologies, it can be hard to know where to start. Some things have become straightforward, for example, Domoticz can soon be installed from a package on some distributions. Yet this simply leaves people contemplating what to do next.

The quickstart

For a small home, like an apartment, you can simply buy something like the Zigate, a single motion and temperature sensor, a couple of smart bulbs and expand from there.

For a large home, you can also get your feet wet with exactly the same approach in a single room. Once you are familiar with the products, use a more structured approach to plan a complete solution for every other space.

The Debian wiki has started gathering some notes on things that work easily on GNU/Linux systems like Debian as well as Fedora and others.

Prioritize

What is your first goal? For example, are you excited about having smart lights or are you more concerned with improving your heating system efficiency with zoned logic?

Trying to do everything at once may be overwhelming. Make each of these things into a separate sub-project or milestone.

Technology choices

There are many technology choices:

  • Zigbee, Z-Wave or another protocol? I'm starting out with a preference for Zigbee but may try some Z-Wave devices along the way.
  • E27 or B22 (Bayonet) light bulbs? People in the UK and former colonies may have B22 light sockets and lamps. For new deployments, you may want to standardize on E27. Amongst other things, E27 is used by all the Ikea lamp stands and if you want to be able to move your expensive new smart bulbs between different holders in your house at will, you may want to standardize on E27 for all of them and avoid buying any Bayonet / B22 products in future.
  • Wired or wireless? Whenever you take up floorboards, it is a good idea to add some new wiring. For example, CAT6 can carry both power and data for a diverse range of devices.
  • Battery or mains power? In an apartment with two rooms and less than five devices, batteries may be fine but in a house, you may end up with more than a hundred sensors, radiator valves, buttons, and switches and you may find yourself changing a battery in one of them every week. If you have lodgers or tenants and you are not there to change the batteries then this may cause further complications. Some of the sensors have a socket for an optional power supply, battery eliminators may also be an option.

Making an inventory

Creating a spreadsheet table is extremely useful.

This helps estimate the correct quantity of sensors, bulbs, radiator valves and switches and it also helps to budget. Simply print it out, leave it under the Christmas tree and hope Santa will do the rest for you.

Looking at my own house, these are the things I counted in a first pass:

Don't forget to include all those unusual spaces like walk-in pantries, a large cupboard under the stairs, cellar, en-suite or enclosed porch. Each deserves a row in the table.

Sensors help make good decisions

Whatever the aim of the project, sensors are likely to help obtain useful data about the space and this can help to choose and use other products more effectively.

Therefore, it is often a good idea to choose and deploy sensors through the home before choosing other products like radiator valves and smart bulbs.

The smartest place to put those smart sensors

When placing motion sensors, it is important to avoid putting them too close to doorways where they might detect motion in adjacent rooms or hallways. It is also a good idea to avoid putting the sensor too close to any light bulb: if the bulb attracts an insect, it will trigger the motion sensor repeatedly. Temperature sensors shouldn't be too close to heaters or potential draughts around doorways and windows.

There are a range of all-in-one sensors available, some have up to six features in one device smaller than an apple. In some rooms this is a convenient solution but in other rooms, it may be desirable to have separate motion and temperature sensors in different locations.

Consider the dining and sitting rooms in my own house, illustrated in the floorplan below. The sitting room is also a potential 6th bedroom or guest room with sofa bed, the downstairs shower room conveniently located across the hall. The dining room is joined to the sitting room by a sliding double door. When the sliding door is open, a 360 degree motion sensor in the ceiling of the sitting room may detect motion in the dining room and vice-versa. It appears that 180 degree motion sensors located at the points "1" and "2" in the floorplan may be a better solution.

These rooms have wall mounted radiators and fireplaces. To avoid any of these potential heat sources the temperature sensors should probably be in the middle of the room.

This photo shows the proposed location for the 180 degree motion sensor "2" on the wall above the double door:

Summary

To summarize, buy a Zigate and a small number of products to start experimenting with. Make an inventory of all the products potentially needed for your home. Try to mark sensor locations on a floorplan, thinking about the type of sensor (or multiple sensors) you need for each space.

on December 03, 2018 08:44 AM

A guest post authored by Jennine Townsend, expert sysadmin and AWS aficionado

There were so many sessions at re:Invent! Now that it’s over, I want to watch some sessions on video, but which ones?

Of course I’ll pick out those that are specific to my interests, but I also want to know the sessions that had good buzz, so I made a list that’s kind of mashed together from sessions that I heard good things about on Twitter, with those that had lots of repeats and overflow sessions, figuring those must have been popular.

But I confess I left out some whole categories! There aren’t sessions for Alexa or DeepRacer (not that I’m not interested, they’re just not part of my re:Invent followup), and I don’t administer any Windows systems so I leave out most of those sessions.

Some sessions have YouTube links; some don’t (yet) and may never have YouTube videos, since lots of (types of) sessions aren’t recorded. (But even then, if I search the topic and speakers, I bet I can often find an earlier talk.)

There’s not much of a ranking: keynotes at the top, sessions I heard good things about in the middle, then sessions that had lots of repeats. It’s only mildly specific to my interests, so I thought other people might find it helpful. It’s also not really finished, but I wanted to get started watching sessions this weekend!

Keynotes

Peter DeSantis Monday Night Live

Terry Wise Global Partner Keynote

Andy Jassy keynote

Werner Vogels keynote

DEV322 What’s New with the AWS CLI (Kyle Knapp, James Saryerwinnie)

SRV409 A Serverless Journey: AWS Lambda Under the Hood

CON362 Container Power Hour with Jess, Clare, and Abby

SRV325 Using DevOps, Microservices, and Serverless to Accelerate Innovation (David Richardson, Ken Exner, Deepak Singh)

SRV375 Lambda Layers and Runtime API (Danilo Poccia) - Chalk Talk

SRV338 Configuration Management and Service Discovery (mentions CloudMap) (Alex Casalboni, Ben Kehoe) - Chalk Talk

CON367 Introducing App Mesh (Kiran Meduri, Shubha Rao, James Straub)

SRV355 Best Practices for CI/CD with AWS Lambda and Amazon API Gateway (Chris Munns) (focuses on SAM, CodeStar, I believe) - Chalk Talk

DEV327 Advanced Infrastructure as Code Programming on AWS

SRV322 From Monolith to Modern Apps: Best Practices

CON301 Mastering Kubernetes on AWS

ARC202 Running Lean Architectures: How to Optimize for Cost Efficiency

DEV319 Continuous Integration Best Practices

AIM404 Build, Train, and Deploy ML Models Quickly and Easily with Amazon SageMaker

STG209 Amazon S3 Storage Management (Scott Hewitt) - Chalk Talk

ENT205 Executing a Large-Scale Migration to AWS (Joe Chung, Jonathan Allen, Mike Wittig)

DEV317 Advanced Continuous Delivery Best Practices

CON308 Building Microservices with Containers

ANT323 Build Your Own Log Analytics Solutions on AWS

ANT201 Big Data Analytics Architectural Patterns and Best Practices

DEV403 Automate Common Maintenance & Deployment Tasks Using AWS Systems Manager - Builders Session

DAT356 Which Database Should I Use? - Builders Session

DEV309 CI/CD for Serverless and Containerized Applications

ARC209 Architecture Patterns for Multi-Region Active-Active Applications

AIM401 Deep Learning Applications Using TensorFlow

SRV305 Inside AWS: Technology Choices for Modern Applications

SEC401 Mastering Identity at Every Layer of the Cake

SEC371 Incident Response in AWS - Builders Session

SEC322 Using AWS Lambda as a Security Team

NET404 Elastic Load Balancing: Deep Dive and Best Practices

DEV321 What’s New with AWS CloudFormation

DAT205 Databases on AWS: The Right Tool for the Right Job

Original article and comments: https://alestic.com/2018/12/aws-reinvent-jennine/

on December 03, 2018 12:00 AM

December 01, 2018

Migrating web servers

Julian Andres Klode

As of today, I migrated various services from shared hosting on uberspace.de to a VPS hosted by hetzner. This includes my weechat client, this blog, and the following other websites:

  • jak-linux.org
  • dep.debian.net redirector
  • mirror.fail

Rationale

Uberspace runs CentOS 6. This was causing more and more issues for me, as I was trying to run up-to-date weechat binaries. In the final stages, I ran weechat and tmux inside a debian proot. It certainly beat compiling half a system with linuxbrew.

The web performance was suboptimal. Webpages were served with Pound and Apache, TLS connection overhead was just huge, there was only HTTP/1.1, and no keep-alive.

Security-wise things were interesting: everything ran as my user, obviously, whether that’s scripts, weechat, or mail delivery helpers. Ugh. There was also only a single certificate, meaning that all domains shared it, even if they were completely distinct like jak-linux.org and dep.debian.net.

Enter Hetzner VPS

I launched a VPS at hetzner and configured it with Ubuntu 18.04, the latest Ubuntu LTS. It is a CX21, so it has 2 vcores, 4 GB RAM, 40 GB SSD storage, and 20 TB of traffic. For 5.83€/mo, you can’t complain.

I went on to build a repository of ansible roles (see repo on github.com), that configured the system with a few key characteristics:

  • http is served by nginx
  • certificates are per logical domain - each domain has a canonical name and a set of aliases; and the certificate is generated for them all
  • HTTPS is configured according to Mozilla’s modern profile, meaning TLSv1.2-only, and a very restricted list of ciphers. I can revisit that if it’s causing problems, but I’ve not seen huge issues.
  • Log files are anonymized to 24 bits for IPv4 addresses, and 32 bits for IPv6 addresses, which should allow me to identify an ISP, but not an individual user.

I don’t think the roles are particularly reusable for others, but it’s nice to have a central repository containing all the configuration for the server.

Go server to serve comments

When I started self-hosting the blog and added commenting via mastodon, it was via a third-party PHP script. This has been replaced by a Go program (GitHub repo). The new Go program scales a lot better than a PHP script, and provides better security properties due to AppArmor and systemd-based sandboxing; it even uses systemd’s DynamicUser.

Special care has been taken to have time outs for talking to upstream servers, so the program cannot hang with open connections and will respond eventually.

The Go binary is connected to nginx via a UNIX domain socket that serves FastCGI. The service is activated via systemd socket activation, allowing it to be owned by www-data, while the binary runs as a dynamic user. Nginx’s native fastcgi caching mechanism is enabled so the Go process is only contacted every 10 minutes at the most (for a given post). Nice!
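
As a sketch of what this socket-activation wiring can look like (the unit names, socket path, and binary location are my own illustrations, not the actual configuration):

# comments.socket - nginx talks to this UNIX socket, owned by www-data
[Socket]
ListenStream=/run/comments/fcgi.sock
SocketUser=www-data
SocketMode=0660

[Install]
WantedBy=sockets.target

# comments.service - the Go binary itself runs as a transient dynamic user
[Service]
ExecStart=/usr/local/bin/comments-fcgi
DynamicUser=yes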

Performance

Performance is a lot better than the old shared server. Pages load in up to half the time of the old one. Scalability also seems better: I tried various benchmarks, and achieved consistently higher concurrency ratings. A simple curl via https now takes 100ms instead of 200ms.
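
A measurement like that is easy to reproduce with curl's built-in timing variables, for instance:

# Total time for a full HTTPS request, connection setup included
curl -o /dev/null -s -w '%{time_total}\n' https://jak-linux.org/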

Performance is still suboptimal from the west coast of the US or other places far away from Germany, but got a lot better than before: Measuring from Oregon using webpagetest, it took 1.5s for a page to fully render vs ~3.4s before. A CDN would surely be faster, but would lose the end-to-end encryption.

Upcoming mail server

The next step is to enable email. Setting up postfix with dovecot is quite easy, it turns out. Install them, tweak a few settings, set up SPF, DKIM, DMARC, and a PTR record, and off you go.

I mostly expect to read my email by tagging it on the server using notmuch somehow, and then syncing it to my laptop using muchsync. The IMAP access should allow some notifications or reading on the phone.

Spam filtering will be handled with rspamd. It seems to be the hot new thing on the market, is integrated with postfix as a milter, and handles a lot of stuff, such as:

  • greylisting
  • IP scoring
  • DKIM verification and signing
  • ARC verification
  • SPF verification
  • DNS lists
  • Rate limiting

It also has fancy stuff like neural networks. Woohoo!

As another bonus point: It’s trivial to confine with AppArmor, which I really love. Postfix and Dovecot are a mess to confine with their hundreds of different binaries.

I found it via uberspace, which plans on using it for their next uberspace7 generation. It is also used by some large installations like rambler.ru and locaweb.com.br.

I plan to migrate mail from uberspace in the upcoming weeks, and will post more details about it.

on December 01, 2018 10:40 PM
Forkstat is a simple utility I wrote a while ago that can trace process activity using the rather useful Linux NETLINK_CONNECTOR API.   Recently I have added two extra features that may be of interest:

1.  Improved output using some UTF-8 glyphs.  These are used to show process parent/child relationships and various process events, such as termination, core dumping and renaming.   Use the new -g (glyph) option to enable this mode. For example:


In the above example, the program "wobble" was started and forks off a child process.  The parent then renames itself to "wibble" (indicated by a turning arrow). The child then segfaults and generates a core dump (indicated by a skull and crossbones), triggering apport to investigate the crash.  After this, we observe NetworkManager creating a thread that runs for a very short period of time.   This kind of activity is normally impossible to spot while running conventional tools such as ps or top.

2. By default, forkstat will show the process name using the contents of /proc/$PID/cmdline.  The new -c option allows one to instead use the 16 character task "comm" field, and this can be helpful for spotting process name changes on PROC_EVENT_COMM events.
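
As a quick recap of how the two new options are invoked (forkstat reads the netlink connector, so it typically needs to run as root):

sudo forkstat -g    # decorate events with UTF-8 glyphs
sudo forkstat -c    # show the 16 character "comm" field instead of cmdline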

These are small changes, but I think they make forkstat more useful.  The updated forkstat will be available in Ubuntu 19.04 "Disco Dingo".
on December 01, 2018 12:47 PM

November 30, 2018

Snapcraft 3.0

Sergio Schvezov

The release notes for snapcraft 3.0 have been long overdue. For convenience I will reproduce them here too.

Presenting snapcraft 3.0

The arrival of snapcraft 3.0 brings fresh air into how snap development takes place! We took the lessons from the main pain points you had when creating snaps over the past several years, and we introduced them into a brand new release - snapcraft 3.0!

Build Environments

As the cornerstone for behavioral change, we are introducing the concept of build environments.
on November 30, 2018 07:47 PM

Google Code-in in KDE

Valorie Zimmerman

So far, so good! We have quite a variety of students and I'm happy to see new ones still joining. And, surprisingly to me, we're still getting beginner tasks being done, which keeps us busy.

New this round were some tutorials written by Pranam Lashkari. I hope we can expand on this next year, because I think a lot of students who are willing to do these tutorials one by one are learning a lot. His thought is that they can be added to our documentation after the contest is over. I think we can re-use some stuff that we've already written, for next year. What do you think?

In addition, I'm seeing loads of work -- yes, small jobs, but keep in mind these kids are 13 to 18 years old* - for the teams who were willing to write up tasks and tutor. It is a lot of work so I really appreciate all those mentors who have stepped forward.

I'm very glad we are participating this year. It isn't as wild and crazy as it was in the beginning, because there are now lots more orgs involved, so the kids have lots of options. Happy that the kids we have are choosing us!

-----------------

* Rules state: "13 to 17 years old and enrolled in a pre-university educational program" before enrolling. https://developers.google.com/open-source/gci/faq

on November 30, 2018 02:58 AM

November 29, 2018

S01E13 – Festa da boa

Podcast Ubuntu Portugal

In the aftermath of the Festa do Software Livre da Moita 2018, with the Ubuntu Party in Paris at the door, and with our eyes already set on Ubucon Europe 2019, which will take place in our country, more precisely in Sintra, there is no shortage of celebration in this episode! This is also an important milestone in our history, as it is the first episode to be published just 2 days after being recorded… You know the drill: listen, subscribe, and share!

Sponsors

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios - sound recording, production, editing, mixing and mastering). Contact: thunderclawstudiosPT–arroba–gmail.com.

Attribution and licenses

Cover image: Richard, licensed under CC BY-ND.

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the terms of the CC0 1.0 Universal License.

This episode is licensed under the terms of the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, the full text of which can be read here. We are open to licensing for other types of use; contact us for validation and authorization.

on November 29, 2018 11:35 PM

2018 is the 70th anniversary of the Universal Declaration of Human Rights.

Over the last few days, while attending the UN Forum on Business and Human Rights, I've had various discussions with people about the relationship between software freedom, business and human rights.

In the information age, control of the software, source code and data translates into power and may contribute to inequality. Free software principles are not simply about the cost of the software, they lead to transparency and give people infinitely more choices.

Many people in the free software community have taken a particular interest in privacy, which is Article 12 in the declaration. The modern Internet challenges this right, while projects like TAILS and Tor Browser help to protect it. The UN's 70th anniversary slogan Stand up 4 human rights is a call to help those around us understand these problems and make effective use of the solutions.

We live in a time when human rights face serious challenges. Consider censorship: Saudi Arabia is accused of complicity in the disappearance of columnist Jamal Khashoggi and the White House is accused of using fake allegations to try and banish CNN journalist Jim Acosta. Arjen Kamphuis, co-author of Information Security for Journalists, vanished in mysterious circumstances. The last time I saw Arjen was at OSCAL'18 in Tirana.

For many of us, events like these may leave us feeling powerless. Nothing could be further from the truth. Standing up for human rights starts with looking at our own failures, both as individuals and organizations. For example, have we ever taken offense at something, judged somebody or rushed to make accusations without taking time to check facts and consider all sides of the story? Have we seen somebody we know treated unfairly and remained silent? Sometimes it may be desirable to speak out publicly, sometimes a difficult situation can be resolved by speaking to the person directly or having a meeting with them.

Being at the United Nations provided an acute reminder of these principles. In parallel to the event, the UN were hosting a conference on the mine ban treaty and the conference on Afghanistan, the Afghan president arriving as I walked up the corridor. These events reflect a legacy of hostilities and sincere efforts to come back from the brink.

A wide range of discussions and meetings

There were many opportunities to have discussions with people from all the groups present. Several sessions raised issues that made me reflect on the relationship between corporations and the free software community and the risks for volunteers. At the end of the forum I had a brief discussion with Dante Pesce, Chair of the UN's Business and Human Rights working group.

Best free software resources for human rights?

Many people at the forum asked me how to get started with free software and I promised to keep adding to my blog. What would you regard as the best online resources, including videos and guides, for people with an interest in human rights to get started with free software, solving problems with privacy and equality? Please share them on the Libre Planet mailing list.

Let's not forget animal rights too

Are dogs entitled to danger pay when protecting heads of state?

on November 29, 2018 10:04 PM

S11E38 – Thirty-Eight Nooses

Ubuntu Podcast from the UK LoCo

This week we’ve been donating to Wikipedia and discuss Mark’s Snappy Adventure. We bring you some command line love and go over all your feedback.

It’s Season 11 Episode 38 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Martin Wimpress are connected and speaking to your brain.

In this week’s show:

youtube-dl -f best -ciw -o %(title)s.%(id)s.%(ext)s -v https://www.youtube.com/ubuntupodcast

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on November 29, 2018 03:00 PM


Ubucon Europe 2019 – in Sintra!

Essen > Paris > Xixón > Sintra!

It is our great pleasure to announce the location (and dates) for the next Ubucon Europe.
It will be from the 10th to 13th October 2019, at the Centro Cultural Olga Cadaval (Sintra, Portugal).

After several suggestions, requests and a reasonable amount of enthusiastic pressure, added to our great wish to welcome the world’s Ubuntu Community in our country, we decided to set the conditions for this great gathering in Lusitanian lands.
Therefore, the Ubuntu PT Local Community invites all interested parties to attend the next Ubucon Europe!

What’s a Ubucon?

Ubucons are conventions organized by the Ubuntu Community.
They comprise lectures, workshops, conferences, keynotes, demonstrations and social events.

The venue:

The chosen location is ideal for this purpose, with an auditorium fitting more than 270 people, 3 rooms for workshops and an exhibition space. In the immediate vicinity there are plenty of places to try out the local cuisine and where social events will be organized.

Accommodation:

We offer a wide range of accommodation partners, from CouchSurfing to the refinement of 5 star hotels, with special conditions for all participants.

Transportation:

Sintra is served by regular trains and buses easily accessible from Lisbon’s international airport; it’s also possible to use shuttle companies and even make the trip on the historic tram from Praia das Maçãs.

Speakers:

Before the end of the year, we’ll launch a Call for Proposals – for presentations, lectures, workshops – more information will be available soon. Stay tuned.

Remote workers & digital nomads:

For those who still need to work, arrangements are already being made with nearby coworking spaces – so as not to break the productivity of those who want to come a few days earlier.

Chill:

Given the weather conditions of the last 5 years, statistically it will perhaps be possible to take a dip (or surf some waves) at the beautiful Praia das Maçãs!

Last notes:

Both websites commonly used to promote Ubucon are offline, but will be operational during the next few days.

That’s all for now – we are very busy arranging everything to welcome you!
Photos: Sérgio Pedro Tiago

on November 29, 2018 12:22 AM

November 26, 2018

Welcome to the Ubuntu Weekly Newsletter, Issue 555 for the week of November 18 – 24, 2018. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on November 26, 2018 09:17 PM

November 25, 2018

After 3 weeks of dedicated development time, the first Xfce Screensaver beta release is now available! With better event handling, a significantly upgraded preferences dialog, and a tidier codebase, the new version is nearly ready for prime time. 🎉

What’s New?

Features

  • All available configuration options are now available in the Preferences dialog, boosting the easily accessible options from 4 to 13!
  • Idle time is now based on the X11 Screensaver plugin instead of the GNOME Session Manager.
  • Xfce Screensaver now respects the xdg-screensaver state, inhibiting the screensaver when using apps like Caffeine or watching a fullscreen video.
  • Screensaver and lock screen functionality can now easily be toggled separately.

General

Preferences

  • Dropped unused configuration options [1, 2, 3]
  • Renamed all Xfconf properties for improved clarity and easier maintenance [1, 2]

Bug Fixes

  • Replaced Help link with a link to the Xfce Docs (Xfce #14877)
  • Added /usr/lib and /usr/libexec as trusted engine paths, enabling local installs on Linux with access to existing screensavers (Xfce #14883)
  • Fixed screen blanking and locking on FreeBSD and OpenBSD (Xfce #14846)
  • Fixed lock screen crash on laptop lid-close events (GTK #1466)
  • Fixed daemon crash when scrolling through available themes
  • Improved window size resizing for smaller displays
  • Renamed included screensavers to avoid conflicts with MATE Screensaver
  • Reduced flicker rate when multiple keyboard layouts are available (still not fully fixed, but greatly improved)

Build Improvements

  • Silenced warning: ar: ‘u’ modifier ignored since ‘D’ is the default
  • Fixed warning: Target given more than once in the same rule

Code Quality

  • Applied cpplint fixes and added cpplint configuration file
  • Cleaned up unused variables, trailing spaces, and deprecated code
  • Glade templates were cleaned up and organized for easier maintenance

Translation Updates

Albanian, Basque, Chinese (China), Chinese (Taiwan), Danish, French, Galician, Hebrew, Icelandic, Italian, Korean, Malay, Polish, Russian, Slovak, Turkish

Screenshots

Downloads

This is the first beta release of Xfce Screensaver. While still not recommended for production machines, this is a great time to test and report bugs so we can put together an awesome stable release soon.

Source tarball (md5, sha1, sha256)

Xubuntu users (18.04, 18.10, and 19.04) can grab the package from the Xubuntu QA Experimental PPA.

sudo add-apt-repository ppa:xubuntu-dev/experimental
sudo apt-get update
sudo apt-get install xfce4-screensaver

Remember to also remove or quit light-locker, start xfce4-screensaver (or log out and back in), and add support for xfce4-screensaver to the xflock4 script.
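One way to do the xflock4 part is to point the Xfce session’s lock command at the new screensaver; a minimal sketch (verify the property name and command on your setup):

xfconf-query -c xfce4-session -p /general/LockCommand \
  -t string -s "xfce4-screensaver-command --lock" --create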

Thanks and enjoy!

on November 25, 2018 03:37 PM

November 24, 2018

Debian recently switched from Alioth to Salsa, offering only Git hosting from now on, and that simplifies the work of existing contributors and also helps newcomers, who are most likely already familiar with Git if they know at least one version control system. (Thanks to everyone involved in the transition!)

On Ubuntu’s side, most Ubuntu-specific packages and a big part of Ubuntu’s infrastructure used to be maintained in Bazaar repositories in the past. Since then Git has become the most widely used version control system, but the Bazaar repositories did not fully disappear.

There are still hundreds of packages maintained in Bazaar in Ubuntu (packaging repositories in Bazaar by team) and Debian (lintian report) and maintaining them in Git instead could be easier in the long term.

Launchpad already supports Git and there are guidelines for converting Bazaar repositories to Git (1,2),  but if you would like to make the switch I suggest taking a look at bzr-git-mass-convert based on bzr fast-export (verifying the result with git-remote-bzr). It is a simple tool for merging multiple Bazaar branches to a single git repository set up for pushing it back to Launchpad.
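Under the hood, a single-branch conversion boils down to a short pipeline; a rough sketch, assuming the bzr-fastimport plugin is installed (bzr-git-mass-convert automates this across multiple branches):

git init pkg-git
cd pkg-git
bzr fast-export --plain ../pkg-bzr | git fast-import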

We (at the Foundations Team) use it for migrating our repositories and there is also a wiki page for tracking the migration schedule of popular repositories.

on November 24, 2018 10:48 PM

November 23, 2018

Hello!

We bring great news for the Community! We have just ordered hoodies and t-shirts featuring Ubuntu and the Ubuntu Portugal Community, and we are selling them to raise funds for the group's activities. But it doesn't end there! We also have badges for sale.

Here is a sample of the material we have:

Perhaps a Christmas present? Or simply a highly stylish accessory?

T-shirts cost €10, hoodies cost €25 and badges cost €1.

To order, fill in the form you will find at this link: https://tos.typeform.com/to/FU6int

We count on your contribution!

Cybernaut greetings

on November 23, 2018 12:35 PM

For my hacking, I love to use the KDevelop IDE. Once in a while, I find myself working on a project that has different indentation styles depending on the filetype — in this case, C++ files, Makefiles, etc. use tabs, while JavaScript and HTML files use 2 spaces. I haven’t found this to be straightforward to set up from KDevelop’s configuration dialog (though I just learnt that it does seem to be possible). I did find myself having to fix indentation before committing (annoying!) and even having to fix up the indentation of code committed by myself (embarrassing!). As that’s both stupid and repetitive work, it’s something I wanted to avoid. Here’s how it’s done using EditorConfig files:

  1. put a file called .editorconfig in the project’s root directory
  2. specify a default style and then a specialization for certain filetypes
  3. restart KDevelop

Here’s what my .editorconfig file looks like:


# EditorConfig is awesome: https://EditorConfig.org

# for the top-most EditorConfig file, set...
# root = true

# In general, tabs shown 2 spaces wide
[*]
indent_style = tab
indent_size = 2

# Matches multiple files with brace expansion notation
[*.{js,html}]
indent_style = space
indent_size = 2

This does the job nicely and has the following advantages:

  • It doesn’t affect my other projects, so I don’t have to run around in the configuration to switch when task-switching. (EditorConfig files cascade, so they will be looked up the filesystem tree for fallback styles.)
  • It works across different editors supporting the editorconfig standards, so not just KWrite, Kate, KDevelop, but also for entirely different products.
  • It allows me to spend less time on formalities and more time on actual coding (or diving).
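If you want to double-check which settings actually apply to a given file, an EditorConfig core CLI can print them; a small example, assuming you have one installed (the exact package name varies by distribution):

editorconfig /full/path/to/project/some/file.js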

(Thanks to Reddit.)

on November 23, 2018 08:30 AM

November 20, 2018

Today is Ubuntu Appreciation Day, in which we share our thanks to people in our community for making Ubuntu great.

This year, I want to say thank you to Rudy (~cm-t)! Why? Because IMHO he is an incredible activist: helpful, funny, always with a smile. He puts passion into everything related to Ubuntu. A perfect example for everyone!



Thanks Rudy |o/
on November 20, 2018 05:53 PM

TDOR 2018

Rhonda D'Vine

Today is Transgender Day Of Remembrance. Today is a black day for trans people around the globe. We mourn the trans folks that aren't amongst us anymore due to hate crime violence against them. Reach out to the trans folks that are part of your life, that you know, and ask them if they are in need of emotional support on this day. There are more trans folks getting killed for being trans than there are days in a year, foremost among them Black trans women of color. If you feel strong enough you can read about it in this article.

Also, we are facing huge threats to our mere existence all over the world these days. If you follow any social media, check the hashtag #WontBeErased. The US government follows a path of Erasing Gender left and right, which also affects intersex people likewise and manifests the gender binary and gender separation even further, also hurting cis people. Now in Ontario, Canada, gender identity is getting erased, too. And Brazil, where next year's DebConf will be held, and which already has the highest number of trans murders in the world, has elected Bolsonaro, a right-wing extremist who is outspokenly gay-antagonistic and misogynist. And then there is Tanzania, which started a hunt for LGBTIQ people. And those reports are only the tip of the iceberg. I definitely missed some other countries' shit, like Ukraine (where next year's European Lesbian* Conference is taking place) or Austria's government being right-wing and cutting the social system left and right, so we are in need of Wieder Donnerstag (a weekly Thursday demonstration) again.

I'm currently drafting the announcement mail to send out about the creation of the Debian Diversity Team, which we finally formed. It is more important than ever to make it clear and visible that discrimination has no place within Debian, and that we in fact are a diverse community. I can understand the wish that it should focus on the visibility and welcoming aspects of the team, and especially not make it look like it's a reaction to those world events. Which it isn't; this has been in the works for two years now. And I totally agree with that. I just have a hard time not adding a solidarity message alongside mentioning that we are aware of the crap that's going on in the world and that we see your pain, and share it. So yes, the team has finally formed, but the announcement mail through debian-devel-announce about it is still pending. And we are in contact with the local team for next year's DebConf and following the news about Brazil to figure out how to make it as safe as possible for attendees, so that fear shouldn't be the guiding factor for you to not attend.

Stay strong, sending you hugs if wanted.


on November 20, 2018 10:11 AM

When creating clang-tidy checks, it is common to extract parts of AST Matcher expressions to local variables. I expanded on this in a previous blog.

auto nonAwesomeFunction = functionDecl(
  unless(matchesName("^::awesome_"))
  );

Finder->addMatcher(
  nonAwesomeFunction.bind("addAwesomePrefix")
  , this);

Finder->addMatcher(
  callExpr(callee(nonAwesomeFunction)).bind("addAwesomePrefix")
  , this);

Use of such variables establishes an emergent extension API for re-use within a check, or across multiple checks you create which share matcher requirements.

When attempting to match items inside a ForStmt for example, we might encounter the difference in the AST depending on whether braces are used or not.

#include <vector>

void foo()
{
    std::vector<int> vec;
    int c = 0;
    for (int i = 0; i < 100; ++i)
        vec.push_back(i);

    for (int i = 0; i < 100; ++i) {
        vec.push_back(i);
    }
}

In this case, we wish to match the push_back method inside a ForStmt body. The body item might be a CompoundStmt or the CallExpr we wish to match. We can match both cases with the anyOf matcher.

auto pushbackcall = callExpr(callee(functionDecl(hasName("push_back"))));

Finder->addMatcher(
    forStmt(
        hasBody(anyOf(
            pushbackcall.bind("port_call"), 
            compoundStmt(has(pushbackcall.bind("port_call")))
            ))
        )
    , this);

Having to list the pushbackcall twice in the matcher is suboptimal. We can do better by defining a new API function which we can use in AST Matcher expressions:

auto hasIgnoringBraces = [](auto const& Matcher)
{
    return anyOf(
        Matcher, 
        compoundStmt(has(Matcher))
        );
};

With this in hand, we can simplify the original expression:

auto pushbackcall = callExpr(callee(functionDecl(hasName("push_back"))));

Finder->addMatcher(
    forStmt(
        hasBody(hasIgnoringBraces(
            pushbackcall.bind("port_call")
            ))
        ) 
    , this);

This pattern of defining AST Matcher API using a lambda function finds use in other contexts. For example, sometimes we want to find and bind to an AST node if it is present, ignoring its absence if it is not present.

For example, consider wishing to match struct declarations and match a copy constructor if present:

struct A
{
};

struct B
{
    B(B const&);
};

We can match the AST with the anyOf() and anything() matchers.

Finder->addMatcher(
    cxxRecordDecl(anyOf(
        hasMethod(cxxConstructorDecl(isCopyConstructor()).bind("port_method")), 
        anything()
        )).bind("port_record")
    , this);

This can be generalized into an optional() matcher:

auto optional = [](auto const& Matcher)
{
    return anyOf(
        Matcher,
        anything()
        );
};

The anything() matcher matches, well, anything. It can also match nothing, because a matcher written inside another matcher matches itself.

That is, matchers such as

functionDecl(decl())
functionDecl(namedDecl())
functionDecl(functionDecl())

match ‘trivially’.

If a functionDecl() in fact binds to a method, then the derived type can be used in the matcher:

functionDecl(cxxMethodDecl())

The optional matcher can be used as expected:

Finder->addMatcher(
    cxxRecordDecl(
        optional(
            hasMethod(cxxConstructorDecl(isCopyConstructor()).bind("port_method"))
            )
        ).bind("port_record")
    , this);

Yet another problem writers of clang-tidy checks will find is that AST nodes CallExpr and CXXConstructExpr do not share a common base representing the ability to take expressions as arguments. This means that separate matchers are required for calls and constructions.

Again, we can solve this problem generically by creating a composition function:

auto callOrConstruct = [](auto const& Matcher)
{
    return expr(anyOf(
        callExpr(Matcher),
        cxxConstructExpr(Matcher)
        ));
};

which reads as ‘an Expression which is any of a call expression or a construct expression’.

It can be used in place of either in matcher expressions:

Finder->addMatcher(
    callOrConstruct(
        hasArgument(0, integerLiteral().bind("port_literal"))
        )
    , this);

Creating composition functions like this is a very convenient way to simplify and create maintainable matchers in your clang-tidy checks. A recently published RFC on the topic of making clang-tidy checks easier to write proposes some other conveniences which can be implemented in this manner.

on November 20, 2018 09:16 AM

Hitting a Break Point

Stephen Michael Kellat

Well, I had a weekend off sick. The time has come to put things in motion. Health concerns pushed up my timetable for what was discussed prior.

I am seeking support to be able to undertake freelance work. The first project would be to finally close out the Outernet/Othernet research work to get it submitted. Beyond that there would be technical writing as well as making creative works. Some of that would involve creating “digital library” collections but also helping others create print works instead.

Who could I help/serve? Unfortunately we have plenty of small, underfunded groups in my town. The American Red Cross no longer maintains a local office and the Salvation Army has no staff presence locally. Our county-owned airport verges on financial collapse and multiple units of government have difficulty staying solvent. There are plenty of needs to cover as long as someone has independent financial backing.

Besides, I owe some edits of Xubuntu documentation too.

It isn’t as if “going on disability”, as it is called in American parlance, is immediate, let alone simple. One of two sets of paperwork has to eventually go into a cave in Pennsylvania for centralized processing. I wish I were kidding, but that cave is located near Slippery Rock. Both processes are backlogged only 12-18 months at last report. For making a change in the short term, that doesn’t even exist as an option on the table.

That’s why I’m asking for support. I’ve grown tired of spending multiple days at work depressed. Showing physical symptoms of depression in the workplace isn’t good either especially when it results in me missing work. When you can’t help people who are in the throes of despair frequently by their own fault, how much more futile can it get?

I set the goal on Liberapay lower than what I get now. While it would be a pay cut, I’d still be able to pay the bills. It is time to move to doing something constructive for society instead of merely fueling the machinery of government. For as often as I get asked how I sleep at night, I want to move past the answer being “terribly”.

The relevant Liberapay page is here. Folks like Pepper & Carrot use it. If the goal can be initially met by December 7th, I would be ready for the potential budget snafu at work like the three we already had at the start of the year.

I just look forward to some day being able to talk about doing good things instead of having to be cryptic due to security restrictions.

on November 20, 2018 02:55 AM

November 18, 2018

Full Circle Weekly News #115

Full Circle Magazine


Open Source Software: 20-Plus Years of Innovation
Source: https://www.linuxinsider.com/story/Open-Source-Software-20-Plus-Years-of-Innovation-85646.html

IBM Buys Linux & Open Source Software Distributor Red Hat For $34 Billion
Source: https://fossbytes.com/ibm-buys-red-hat-open-source-linux/

We (may) now know the real reason for that IBM takeover. A distraction for Red Hat to axe KDE
Source: https://www.theregister.co.uk/2018/11/02/rhel_deprecates_kde/

Ubuntu Founder Mark Shuttleworth Has No Plans Of Selling Canonical
Source: https://fossbytes.com/ubuntu-founder-mark-shuttleworth-has-no-plans-of-selling-canonical/

Mark Shuttleworth reveals Ubuntu 18.04 will get a 10-year support lifespan
Source: https://www.zdnet.com/article/mark-shuttleworth-reveals-ubuntu-18-04-will-get-a-10-year-support-lifespan/

Debian GNU/Linux 9.6 “Stretch” Released with Hundreds of Updates
Source: https://news.softpedia.com/news/debian-gnu-linux-9-6-stretch-released-with-hundreds-of-updates-download-now-523739.shtml

Fresh Linux Mint 19.1 Arrives This Christmas
Source: https://www.forbes.com/sites/jasonevangelho/2018/11/01/fresh-linux-mint-19-1-arrives-this-christmas/#6c64618d293d

Linux-friendly company System76 shares more open source Thelio computer details
Source: https://betanews.com/2018/10/26/system76-open-source-thelio-linux/

Linus Torvalds Says Linux 5.0 Comes in 2019, Kicks Off Development of Linux 4.20
Source: https://news.softpedia.com/news/linus-torvalds-is-back-kicks-off-the-development-of-linux-kernel-4-20-523622.shtml

Canonical Adds Spectre V4, SpectreRSB Fixes to New Ubuntu 18.04 LTS Azure Kernel
Source: https://news.softpedia.com/news/canonical-adds-spectre-v4-spectrersb-fixes-to-new-ubuntu-18-04-lts-azure-kernel-523533.shtml

Trivial Bug in X.Org Gives Root Permission on Linux and BSD Systems
Source: https://www.bleepingcomputer.com/news/security/trivial-bug-in-xorg-gives-root-permission-on-linux-and-bsd-systems/

Security Researcher Drops VirtualBox Guest-to-Host Escape Zero-Day on GitHub
Source: https://news.softpedia.com/news/security-researcher-drops-virtualbox-guest-to-host-escape-zero-day-on-github-523660.shtml

on November 18, 2018 05:01 PM

November 15, 2018

I've been spending a bit of time recently working on GNOME Settings. One part of this has been bringing some of the older panel code up to modern standards, one aspect of which is making use of GtkBuilder templates.

I wondered if any of these changes would show in the stats, so I wrote a program to analyse each branch in the git repository and break down the code between C and GtkBuilder. The results were graphed in Google Sheets:
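Something along the following lines would do the per-branch counting; a hypothetical sketch, not the actual program (like the real one it ignores blank lines, and it emits CSV ready for a spreadsheet):

for branch in $(git for-each-ref --format='%(refname:short)' refs/heads); do
    git checkout -q "$branch"
    # count non-blank lines of C sources and GtkBuilder .ui files
    c=$(find . \( -name '*.c' -o -name '*.h' \) -exec cat {} + | grep -cv '^[[:space:]]*$')
    ui=$(find . -name '*.ui' -exec cat {} + | grep -cv '^[[:space:]]*$')
    echo "$branch,$c,$ui"
done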



This is just the user accounts panel, which shows some of the reduction in C code and increase in GtkBuilder data:



Here's the breakdown of which panels make up the codebase:



I don't think this draws any major conclusions, but it is still interesting to see. Of note:
  • Some of the changes made in 3.28 did reduce the total amount of code! But it was quickly gobbled up by the new Thunderbolt panel.
  • Network and Printers are the dominant panels - look at all that code!
  • I ignored empty lines in the files in case differing coding styles would make some panels look bigger or smaller. It didn't seem to make a significant difference.
  • You can see a reduction in C code looking at individual panels that have been updated, but overall it gets lost in the total amount of code.
I'll have another look in a few cycles when more changes have landed (I'm working on a new sound panel at the moment).
on November 15, 2018 08:05 PM

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In October, about 209 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 1 hour (out of 10 hours allocated + 4 extra hours, thus keeping 13 extra hours for November).
  • Antoine Beaupré did 24 hours (out of 24 hours allocated).
  • Ben Hutchings did 19 hours (out of 15 hours allocated + 4 extra hours).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 12 hours (out of 30 hours allocated + 29.25 extra hours, thus keeping 47.25 extra hours for November).
  • Holger Levsen did 1 hour (out of 8 hours allocated + 19.5 extra hours, but he gave back the remaining hours due to his new role, see below).
  • Hugo Lefeuvre did 10 hours (out of 10 hours allocated).
  • Markus Koschany did 30 hours (out of 30 hours allocated).
  • Mike Gabriel did 4 hours (out of 8 hours allocated, thus keeping 4 extra hours for November).
  • Ola Lundqvist did 4 hours (out of 8 hours allocated + 8 extra hours, but gave back 4 hours, thus keeping 8 extra hours for November).
  • Roberto C. Sanchez did 15.5 hours (out of 18 hours allocated, thus keeping 2.5 extra hours for November).
  • Santiago Ruano Rincón did 10 hours (out of 28 extra hours, thus keeping 18 extra hours for November).
  • Thorsten Alteholz did 30 hours (out of 30 hours allocated).

Evolution of the situation

In November we are welcoming Brian May and Lucas Kanashiro back as contributors after they took a break from this work.

Holger Levsen is stepping down as LTS contributor but is taking over the role of LTS coordinator that was solely under the responsibility of Raphaël Hertzog up to now. Raphaël continues to handle the administrative side, but Holger will coordinate the LTS contributors ensuring that the work is done and that it is well done.

The number of sponsored hours increased to 212 hours per month, as we gained a new sponsor (who shall not be named since they don’t want to be publicly listed).

The security tracker currently lists 27 packages with a known CVE and the dla-needed.txt file has 27 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on November 15, 2018 02:36 PM

November 11, 2018

Getting started – clang-tidy AST Matchers

Over the last few weeks I published some blogs on the Visual C++ blog about Clang AST Matchers. The series can be found here:

I am not aware of any similar series existing which covers creation of clang-tidy checks, and use of clang-query to inspect the Clang AST and assist in the construction of AST Matcher expressions. I hope the series is useful to anyone attempting to write clang-tidy checks. Several people have reported to me that they have previously tried and failed to create clang-tidy extensions, due to various issues, including lack of information tying it all together.

Other issues with clang-tidy include the fact that it relies on the “mental model” a compiler has of C++ source code, which might differ from the “mental model” of regular C++ developers. The compiler needs to have a very exact representation of the code, and needs to have a consistent design for the class hierarchy representing each standard-required feature. This leads to many classes and class hierarchies, and a difficulty in discovering what is relevant to a particular problem to be solved.

I noted several problems in those blog posts, namely:

  • clang-query does not show AST dumps and diagnostics at the same time
  • Code completion does not work with clang-query on Windows
  • AST Matchers which are appropriate to use in contexts are difficult to discover
  • There is no tooling available to assist in discovery of source locations of AST nodes

Last week at code::dive in Wroclaw, I demonstrated tooling solutions to all of these problems. I look forward to video of that talk (and videos from the rest of the conference!) becoming available.

Meanwhile, I’ll publish some blog posts here showing the same new features in clang-query and clang-tidy.

clang-query in Compiler Explorer

Recent work by the Compiler Explorer maintainers adds the possibility to use source code tooling with the website. Compiler Explorer now contains new menu entries to enable a clang-tidy pane.

clang-tidy in Compiler Explorer

I demonstrated use of Compiler Explorer to run the clang-query tool at the code::dive conference, building upon the recent work by the Compiler Explorer developers. This feature will be upstreamed in time, but can be used with my own AWS instance for now. This is suitable for exploration of the effect that changing source code has on match results, and orthogonally, the effect that changing the AST Matcher has on the match results. It is also accessible via cqe.steveire.com.

It is important to remember that Compiler Explorer is running clang-query in script mode, so it can process multiple let and match calls for example. The new command set print-matcher true helps distinguish the output from the matcher which causes the output. The help command is also available with listing of the new features.

The issue of clang-query not printing both diagnostic information and AST information at the same time means that users of the tool need to alternate between writing

set output diag

and

set output dump

to access the different content. Recently, I committed a change to make it possible to enable both dump output and diag output from clang-query at the same time. New commands follow the same structure as the set output command:

enable output dump
disable output dump

The set output <feature> command remains as an “exclusive” setting to enable only one output feature and disable all others.
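So a short session that wants both kinds of output for each match might look like this (a sketch assuming a clang-query build that includes the new commands):

set output diag
enable output dump
match functionDecl(hasName("foo")).bind("f")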

Dumping possible AST Matchers

This command design also enables the possibility of extending the features which clang-query can output. Up to now, developers of clang-tidy extensions had to inspect the AST corresponding to their source code using clang-query and then use that understanding of the AST to create an AST Matcher expression.

That mapping to and from the AST “mental model” is not necessary. New features I am in the process of upstreaming to clang-query enable the output of AST Matchers which may be used with existing bound AST nodes. The command

enable output matcher

causes clang-query to print out all matcher expressions which can be combined with the bound node. This cuts out the requirement to dump the AST in such cases.

Inspecting the AST is still useful as a technique to discover possible AST Matchers and how they correspond to source code. For example if the functionDecl() matcher is already known and understood, it can be dumped to see that function calls are represented by the CallExpr in the Clang AST. Using the callExpr() AST Matcher and dumping possible matchers to use with it leads to the discovery that callee(functionDecl()) can be used to determine particulars of the function being called. Such discoveries are not possible by only reading AST output of clang-query.

Dumping possible Source Locations

The other important discovery space in creation of clang-tidy extensions is that of Source Locations and Source Ranges. Developers creating extensions must currently rely on the documentation of the Clang AST to discover available source locations which might be relevant. Usually though, developers have the opposite problem. They have source code, and they want to know how to access a source location from the AST node which corresponds semantically to that line and column in the source.

It is important to make use of a semantically relevant source location in order to make reliable tools which refactor at scale and without human intervention. For example, a cursory inspection of the locations available from a FunctionDecl AST node might lead to the belief that the return type is available at the getBeginLoc() of the node.

However, this is immediately challenged by the C++11 trailing return type feature, where the actual return type is located at the end. For a semantically correct location, you must currently use

getTypeSourceInfo()->getTypeLoc().getAs<FunctionTypeLoc>().getReturnLoc().getBeginLoc()

It should be possible to use getReturnTypeSourceRange(), but a bug in clang prevents that as it does not appreciate the trailing return types feature.

Once again, my new output feature of clang-query presents a solution to this discovery problem. The command

enable output srcloc

causes clang-query to output the source locations by accessor and caret corresponding to the source code for each of the bound nodes. By inspecting that output, developers of clang-tidy extensions can discover the correct expression (usually via the clang::TypeLoc hierarchy) corresponding to the source code location they are interested in refactoring.

Next Steps

I have made many more modifications to clang-query which I am in the process of upstreaming. My Compiler Explorer instance is listed as the ‘clang-query-future’ tool, while the clang-query-trunk tool runs the current trunk version of clang-query. Both can be enabled for side-by-side comparison of the future clang-query with the existing one.

on November 11, 2018 10:46 PM

November 07, 2018

We are pleased to announce that the 3rd bugfix release of Plasma 5.14, 5.14.3, is now available in our backports PPA for Cosmic 18.10.

The full changelog for 5.14.3 can be found here.

Already released in the PPA is an update to KDE Frameworks 5.51.

To upgrade:

Add the following repository to your software sources list:

ppa:kubuntu-ppa/backports

or if it is already added, the updates should become available via your preferred update method.

The PPA can be added manually in the Konsole terminal with the command:

sudo add-apt-repository ppa:kubuntu-ppa/backports

and packages then updated with

sudo apt update
sudo apt full-upgrade


IMPORTANT

Please note that more bugfix releases are scheduled by KDE for Plasma 5.14, so while we feel these backports will be beneficial to enthusiastic adopters, users wanting to use a Plasma release with more stabilisation/bugfixes ‘baked in’ may find it advisable to stay with Plasma 5.13.5 as included in the original 18.10 Cosmic release.

Should any issues occur, please provide feedback on our mailing list [1], IRC [2], and/or file a bug against our PPA packages [3].

1. Kubuntu-devel mailing list: https://lists.ubuntu.com/mailman/listinfo/kubuntu-devel
2. Kubuntu IRC channels: #kubuntu & #kubuntu-devel on irc.freenode.net
3. Kubuntu ppa bugs: https://bugs.launchpad.net/kubuntu-ppa

on November 07, 2018 12:44 PM
I have been working with AWS in the last few days and encountered some issues when using RDS. Generally, when you're working in a development environment, you set up your database as publicly accessible, and this isn't an issue. But when you're working in production it is, so we place the Amazon RDS database into a private subnet. What do we need to do to connect to the database using PgAdmin or another tool?

We're going to use one of the most common methods for doing this. You will need to launch an Amazon EC2 instance in the public subnet and then use it as a jump box.

So after you have your EC2 instance running, you will need to set up an SSH tunnel from your machine, forwarding a local port to the RDS endpoint through the instance, as sketched below.
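Something like the following sets up the tunnel; the key file, EC2 host and RDS endpoint are placeholders for your own values:

# forward local port 5433 through the EC2 jump box to the RDS endpoint
ssh -i my-key.pem -N -L 5433:mydb.xxxxxxxxxx.us-east-1.rds.amazonaws.com:5432 ec2-user@my-ec2-public-host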

After this, you will need to configure your PgAdmin.
The host name will be localhost and the port the same one you defined in the above command.
The maintenance database will be your DB name, together with the username you use for connecting.

Hope this helps you connect to your databases.

on November 07, 2018 02:54 AM

November 04, 2018

Writing Up Plan B

Stephen Michael Kellat

With the prominence of things like Liberapay and Patreon as well as Snowdrift.coop, I have had to look at the tax implications of them all.  There is no single tax regime on this planet.  Developers and other freelancers who might make use of one of these services within the F/LOSS frame of reference are frequently not within the USA frame of reference.  That makes a difference.


I also have to state at the outset that this does not constitute legal advice.  I am not a lawyer.  I am most certainly not your lawyer.  If anything these recitals are my setting out my review of all this as being “Plan B” due to continuing high tensions surrounding being a federal civil servant in the United States.  With an election coming up Tuesday where one side treats it as a normal routine event while the other is regarding it as Ragnarok and is acting like humanity is about to face an imminent extinction event, changing things up in life may be worthwhile.


An instructive item to consider is Internal Revenue Service Publication 334 Tax Guide for Small Business (For Individuals Who Use Schedule C or C-EZ).  The current version can be found online at https://www.irs.gov/forms-pubs/about-publication-334.  Just because you receive money from people over the Internet does not necessarily mean it is free from taxation.  Generally the income a developer, freelance documentation writer, or a freelancer in general might receive from a Liberapay or Snowdrift.coop appears to fall under “gross receipts”.  


A recent opinion of the United States Tax Court (Felton v. Commissioner, T.C. Memo 2018-168) discusses the issue of “gift” for tax purposes rather nicely in comparison to what Liberapay mentions in its FAQ.  You can find the FAQ at https://liberapay.com/about/faq.  The opinion can be found at https://www.ustaxcourt.gov/ustcinop/OpinionViewer.aspx?ID=11789.  After reading the discussion in Felton, I remain assured that in the United States context anything received via Liberapay would have to be treated as gross receipts in the United States.  The rules are different in the European Union where Liberapay is based and that’s perfectly fine.  In the end I have to answer to the tax authorities in the United States.


The good part about reporting matters on Schedule C is that it preserves participation in Social Security and allows a variety of business expenses and deductions to be taken.  Regular wage-based employees pay into Social Security via the FICA tax.  Self-employed persons pay into Social Security via SECA tax.


Now, there are various works I would definitely ask for support if I left government.  Such includes:


  • Freelance documentation writing

  • Emergency Management/Homeland Security work under the aegis of my church

  • Podcast production

  • Printing & Publishing


For podcast production, general news reviews would be possible.  Going into actual entertainment programming would be nice.  There are ideas I’m still working out.


Printing & Publishing would involve getting small works into print on a more rapid cycle in light of an increasingly censored Internet. As the case of Gab.ai shows, a site can have one of its users do something horrible, not actually do anything wrong itself, and still have all its hosting partners withdraw service so as to knock it offline. Outside the context of the USA, total shutdowns of access to the Internet still occur from time to time in other countries.


Emergency Management comes under the helping works of the church.


As to documentation writing, I used to write documentation for Xubuntu.  I want to do that again.


As to the proliferation of codes of conduct that are appearing everywhere, I can only offer the following statement:


“I am generally required to obey the United States Constitution and laws of the United States of America, the Constitution of the State of Ohio and Ohio’s laws, and the orders of any commanding officers appointed to me as a member of the unorganized militia (Ohio Revised Code 5923.01(D), Title 10 United States Code Section 246(b)(2)).  Codes of Conduct adopted by projects and organizations that conflict with those legal responsibilities must either be disregarded or accommodations must otherwise be sought.”


So, that’s “Plan B”.  The dollar amounts remain flexible at the moment as I’m still waiting for matters to pan out at work.  If things turn sour at my job, I at least have plans to hit the ground running seeking contracts and otherwise staying afloat.


on November 04, 2018 11:21 PM

gentoo eix-update failure

Santiago Zarate

Summary

If you are having the following error on your Gentoo system:

 Can't open the database file '/var/cache/eix/portage.eix' for writing (mode = 'wb') 

Don’t waste your time: the /var/cache/eix directory is simply not present and/or not writeable by the eix/portage user.

mkdir -p /var/cache/eix
chmod +w /var/cache/eix*

The basic story is that eix will drop privileges to the portage user when run as root.
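So if the directory exists but is not writeable, handing it over to the portage user also does the trick (a sketch; adjust the owner to whatever user your eix drops to):

chown -R portage:portage /var/cache/eix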

on November 04, 2018 12:00 AM

November 02, 2018

Red Hat and KDE

Jonathan Riddell

By a strange coincidence the news broke this morning that RHEL is deprecating KDE. The real surprise here is that RHEL supported KDE at all.  Back in the 90s they were entirely against KDE and put lots of effort into our friendly rival, Gnome.  It made some sense since at the time Qt was under a not-quite-free licence and there’s no reason why a company would want to support another company’s lock-in as well as shipping incompatible licences.  By the time Qt became fully free they were firmly behind Gnome.  Meanwhile Rex and a team of hard-working volunteers packaged it anyway and gained many users.  When Red Hat was turned into the all-open Fedora and the closed RHEL, Fedora was able to embrace KDE as it should, but at some point the Fedora Next initiative again put KDE software in second place. Meanwhile RHEL did use Plasma 4 and hired a number of developers to help us in our time of need, which was fabulous, but all except one left some time ago and nobody expected it to continue for long.

So the deprecation is not really new, or news, and being picked up by the press is poor timing for Red Hat; it’s unclear if they want some distraction from the IBM news or if it’s just The Register playing around.  The community has always been much better at supporting our software for their users; maybe now the community-run EPEL archive can include modern Plasma 5 instead of being stuck on the much poorer previous release.

Plasma 5 is now lightweight and feature-full.  We get new users and people rediscovering us every day who report it as the most usable and pleasant way to run their day.  From my recent trip to Barcelona I can see how a range of different users, from universities to schools to government, consider Plasma 5 the best way to support a large user base.  We now ship on high-end devices such as the KDE Slimbook down to low-spec value devices such as the Pinebook.  Our software leads the field in many areas, such as video editor Kdenlive, painting app Krita, or educational suite GCompris.  Our range of projects is wider than ever before, with textbook project WikiToLearn allowing new ways to learn, and we ship our own software through KDE Windows, Flatpak builds and KDE neon with Debs, Snaps and Docker images.

It is a pity that RHEL users won’t be there to enjoy it by default. But, then again, they never really were. KDE is collaborative, open, privacy aware and with a vast scope of interesting projects after 22 years we continue to push the boundaries of what is possible and fun.

on November 02, 2018 04:36 PM
I have been working with Docker in the last few days and ran into a syntax highlighting issue with gedit: Dockerfiles are shown as pure plain text. A small search turned up an easy way of fixing this. I found Jasper J.F. van den Bosch's repository on GitHub, which holds the solution for this simple problem.
We need to download the docker.lang file, available here: https://github.com/ilogue/docker.lang/blob/master/docker.lang

After that, go to the folder where you saved the file and run the following command:
sudo mv docker.lang /usr/share/gtksourceview-3.0/language-specs/
If this doesn't work, you can try the user-local directory instead (no sudo needed; create it first if it doesn't exist):

mkdir -p ~/.local/share/gtksourceview-3.0/language-specs/
mv docker.lang ~/.local/share/gtksourceview-3.0/language-specs/
And that's all!

Screenshot of gedit with no docker lang


Screenshot of gedit with docker lang


on November 02, 2018 04:18 PM