May 29, 2016

Top Menu for Lubuntu

Lubuntu Blog

Thanks to the blog WebUpd8, there’s a new “trick” to add an app menu to the LXDE panel, just like the Unity interface has. Check out this nice tutorial on our Tips’n’Tricks page.
on May 29, 2016 12:39 PM
And the first day of Ubucon Paris!

When I arrived, there were already lots of people in all the areas.

Install Party area


I attended Quesh's talk, an introduction to the community.

Quesh's talk


After that, Didier told us about Snappy packages. It looks great.

Didier's talk


Then I went to eat and saw Nicolas there. Nicolas is such a great guy; we spoke for a few hours.

Nicolas and me


And then I spoke a bit about uNav's first anniversary :) And there was a big, big, big surprise from the Ubuntu Party members :)) They came with uNav and Ubuntu presents and sang happy birthday :') for the 10 years of Ubucon Paris and 1 year of uNav :)) (You guys are the best!)

:')))


And after that, it was dinner time. So many members in the same restaurant!

Dinner

Presents from Ubuntu Paris


It was a great first day. And tomorrow will be the last day of Ubucon Paris.

on May 29, 2016 11:11 AM

May 28, 2016


I have just released procenv version 0.46. Although this is a very minor release for the existing platforms (essentially 1 bug fix), this release now introduces support for a new platform...

Darwin

Yup - OS X now joins the ranks of supported platforms.

Although adding support for Darwin was made significantly easier as a result of the recent internal restructure of the procenv code, it did present a challenge: I don't own any Apple hardware. I could have borrowed a Macbook, but instead I decided to see this as a challenge:

  • Could I port procenv to Darwin without actually having a local Apple system?

Well, you've just read the answer, but how did I do this?

Stage 1: Docker


Whilst surfing around I came across this interesting docker image:


It provides a Darwin toolchain that I could run under Linux. It didn't take very long to follow my own instructions on porting procenv to a new platform. But although I ended up with a binary, I couldn't actually run it, partly because Darwin uses a different binary file format to Linux: rather than ELF, it uses the Mach-O format.
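
For example, running the file command on the resulting binary makes the difference obvious (the output below is illustrative, from memory rather than a captured session):

$ file procenv
procenv: Mach-O 64-bit executable x86_64

A Linux kernel only knows how to load ELF executables, so a Mach-O binary like this simply cannot be run there.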



Stage 2: Travis

The final piece of the puzzle for me was solved by Travis. I'd read the very good documentation on their site, but had initially assumed that you could only build Objective-C based projects on OSX with Travis. But a quick test proved my assumption to be incorrect: it didn't take much more than adding "osx" to the os list and "clang" to the compiler list in procenv's .travis.yml to have procenv building and running (it runs itself as part of its build) on OSX under Travis!

Essentially, the following YAML snippet from procenv's .travis.yml did most of the work:

language: c
compiler:
  - gcc
  - clang
os:
  - linux
  - osx



All that remained was to install the build-time dependencies by adding this additional snippet to the same file:

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew update; fi
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then brew install expat check perl; fi


(Note that it seems Travis is rather picky about before_install - all code must be on a single line, hence the rather awkward-to-read "if; then ....; fi" tests).
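
For what it's worth, one possible workaround (a sketch I haven't tested with procenv) is a YAML block scalar, which keeps a multi-line script as a single list entry that Travis then runs as one command:

before_install:
  - |
    # single list entry thanks to the "|" block scalar
    if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
      brew update
      brew install expat check perl
    fi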


Summary


Although I've never personally run procenv under OS X, I have a good degree of confidence that it does actually work.

That said, it would be useful if someone could independently verify this claim on a real system! Feel free to raise bugs, send code (or even Apple hardware :-) my way!



on May 28, 2016 07:44 PM
New Ubuntu release, then new awesome Ubucon Paris party!

The Ubuntu Hour, the pre-event of the Ubucon, was in the evening, so I rented a city bike and visited Paris in a new way.

I have visited Paris in the past, but this city surprises you every time :))



In the afternoon, the Ubuntu Hour started. Ines and Gonzalo finally came from Spain too.

Ubuntu Hour Paris


It was a great small event, with good beer and food, and unique and amazing company :D

Firefox for ever :))

And this weekend... the Ubucon!! :))

All pictures were taken with a Meizu MX4 Ubuntu Edition, without edits.
on May 28, 2016 11:52 AM

May 27, 2016

If you are a Yokadi user, or if you have used other todo list systems, you might have encountered the situation where you want to quickly add a set of tasks to a project. Using Yokadi you would repeatedly write t_add <project> <task title>. History and auto-completion on command and project names make entering tasks faster, but it is still slower than the good old TODO file where you just write down one task per line.

t_medit is a command to get the best of both worlds. It takes the name of a project as an argument and starts the default editor with a text file containing a line for each task of the project.

Suppose you have a "birthday" project like this:

yokadi> t_list birthday
                             birthday
ID|Title               |U  |S|Age     |Due date
-----------------------------------------------------------------
1 |Buy food (grocery)  |0  |N|2m      |
2 |Buy drinks (grocery)|0  |N|2m      |
3 |Invite Bob (phone)  |0  |N|2m      |
4 |Invite Wendy (phone)|0  |N|2m      |
5 |Bake a yummy cake   |0  |N|2m      |
6 |Decorate living-room|0  |N|2m      |

Running t_medit birthday will start your editor with this content:

1 N @grocery Buy food
2 N @grocery Buy drinks
3 N @phone Invite Bob
4 N @phone Invite Wendy
5 N Bake a yummy cake
6 N Decorate living-room

By editing this file you can do a lot of things:

  • Change task titles, including adding or removing keywords
  • Change task status by changing the character in the second column to S (started) or D (done)
  • Remove tasks by removing their lines
  • Reorder tasks by reordering lines; this changes the task urgencies so that they are listed in the defined order
  • Add new tasks by entering them prefixed with -

Let's say you modify the text like this:

2 N @grocery Buy drinks
1 N @grocery Buy food
3 D @phone Invite Bob
4 N @phone Invite Wendy & David
- @phone Invite Charly
5 N Bake a yummy cake
- S Decorate table
- Decorate walls

Then Yokadi will:

  • Give the "Buy drinks" task a more important urgency because it moved to the first line
  • Mark the "Invite Bob" task as done because its status changed from N to D
  • Change the title of task 4 to "@phone Invite Wendy & David"
  • Add a new task titled: "@phone Invite Charly"
  • Remove task 6 "Decorate living-room"
  • Add a started task titled: "Decorate table" (note the S after -)
  • Add a new task titled: "Decorate walls"

You can even quickly create a project: for example, if you want to plan your holidays you can type t_medit holidays. This creates the "holidays" project and opens an empty editor. Just type new tasks, one per line, prefixed with -. When you save and quit, Yokadi creates the tasks you entered.
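
For example (the task titles here are made up), saving the empty "holidays" editor filled in like this would create three new tasks:

- Book the flights
- Reserve a hotel
- Pack the bags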

One last bonus: if you use Vim, Yokadi ships with a syntax highlight file for t_medit:

t_medit syntax highlight

This should be in the upcoming 1.1.0 version, which I plan to release soon. If you want to play with it earlier, you can grab the code from the git repository. Hope you like it!

on May 27, 2016 11:02 PM

Come and join us for a most excellent Gathering of Halflings at Kubuntu Party 4, Friday 17th June 19:00 UTC.

The party theme is all about digging out those half-finished projects we’ve all got lying around our Geekdoms, and fetching them along for a Show ‘n’ Tell. As ever, there will be party fun and games, and an opportunity to kick back from all the contributing that we do, so join us and enjoy good company and laughter.

Our last party, Kubuntu Party 3, proved to be another success, with further improvement and refinement upon the previous Kubuntu Party.

New to the Kubuntu Party scene? Fear not, my intrepid guests: new friendships are merely a few clicks away. Check out our previous story trail.

Kubuntu_Party_3.2

The lessons learned from party 2 were implemented in party 3. Our main focus is on our guests and their topics of conversation. We didn’t try to incorporate too many things, but simply let things flow and develop, unconference style. We kept to our plan of closing the party at 22:00 UTC, with a 30-minute over-run to allow people to finish up. This worked really well and the feedback from the guests was really positive. For the next party we will tighten this over-run further to 15 minutes.

We had fun discussing many aspects of computing, including of course lots about Kubuntu. As the party progressed we got into a keyboard geek war, with various gaming keyboards, Bluetooth devices, and some amazing backlighting. However, there simply was nothing to compete with the Bluetooth laser-projected keyboard and mouse that Jim produced; it was awesome!

We also had great fun playing with an IRC-controlled Sphero robot, a project that Rick Timmis has been working on. The party folks got the chance to issue various motion and lighting commands to the Sphero spherical robot. Party goers were able to watch the robot respond via Rick’s webcam in Big Blue Button.

Rick said

“It was also awesome seeing that brightly coloured little ball, dashing back and forth at the behest of the party revelers.”

It all got rather surreal when Marius broke out his VR headset, a sophisticated version of the Google Cardboard. The headset enabled Marius to place one of his many mobile devices (and I mean bags full) in the headset aperture, and vanish into an immersive 3D world.

What are you waiting for? Book the party in your diary now.

Friday 17th June 19:00 UTC.

Details of our conference server will be posted to #kubuntu-podcast on irc.freenode.net at 18:30 UTC. Or you can follow us on Google+ Kubuntu Podcast and check in on the events page.

 

on May 27, 2016 08:34 PM

The latest Ubuntu LTS is out, so it’s time for an updated memory usage comparison.

1604MemoryCompare

“Boots” means it will boot to a fully loaded desktop where you can move the mouse, while “Browser and Smooth” means it can load my website in a reasonable amount of time.

Takeaways

Lubuntu is super efficient

Lubuntu is amazing in how little memory it can boot in. I believe it is still the only flavor with ZRam enabled by default, which certainly helps a bit.

I actually measured the memory usage for ZRam to the nearest MB, for fun.
The 32-bit version boots in 224 MB, and is smooth with Firefox at only 240 MB! The 64-bit boots at only 27 MB more (251 MB), but it needs 384 MB to be smooth.
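
If you're curious whether ZRam is active on your own install, one quick generic check (nothing Lubuntu-specific) is to list the active swap devices:

cat /proc/swaps    # zram-backed swap shows up as /dev/zram0, /dev/zram1, ...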

If you are memory limited, change flavors first, 32-bit won’t help that much

Looking just at “Browser and Smooth”, because that’s a clearer use case: there is no significant memory difference between the 32- and 64-bit variants of Xubuntu, Ubuntu GNOME, and Ubuntu (Unity).

Lubuntu, Kubuntu, and Ubuntu MATE do have significant deltas, so let’s explore those:
Kubuntu – If you are worried about memory requirements, do not use it.
Ubuntu MATE – It’s at most a 128 MB loss, likely less. (We only measured to 128 MB accuracy.)
Lubuntu – 64-bit is smooth at 384 MB; 32-bit saves almost 144 MB! If you are severely memory limited, 32-bit Lubuntu becomes your only choice.

Hard Memory Limit
The 32-bit hard memory requirement is 224 MB. (Below that, it panics.)
The 64-bit hard memory requirement is 251 MB. Both of these were tested with Lubuntu.

Check out the 14.04 post. I used Virt-Manager/KVM instead of VirtualBox for the 16.04 test.

Extras: Testing Notes, Spreadsheet

on May 27, 2016 07:31 PM

May 26, 2016

S09E13 – Hollywood Nights - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Thirteen of Season Nine of the Ubuntu Podcast! Alan Pope, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again, but one of us is not!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 26, 2016 02:00 PM
“Firefish in the sky” by eLKayPics / Lutz Koch, CC BY-NC-ND 2.0 (https://www.flickr.com/photos/elkaypics/9059309621/)

Yes, like Jos, Lukas and Björn I (mostly known for my work on and maintenance of the LDAP backend) am leaving ownCloud Inc. Actually, the 20th of May was already my last day.

Flashback. In late 2011, Frank asked me whether I would like to join a new company that would back the ownCloud project. Back then, I was already contributing to the open source project. Being a passionate Linux user and an active member of the Kubuntu community, I got an amazing opportunity to work on free software on a full time basis. Eventually, my first day at ownCloud Inc. was in February 2012.

Belonging to Generation Y, what I value most in work is the opportunity to support the common good and make something sustainable. With ownCloud it is possible to give an important benefit back to the world. With ownCloud being

we empower everyone to be an owner and stay in control of their data and concentrate on their main goals.

These 4 years with ownCloud Inc. were a great ride for me. Working in a distributed and international setup is not always an easy job, but it is nevertheless an exciting and extraordinarily fascinating one. I came to be a far better software developer, met fabulous people (many of whom became good friends), and together we made the ownCloud project and community grow vastly. Also, it's simply great to work side by side with people like Danimo, Jos and Frank, whom I met and befriended long before there was any bit of ownCloud.

Ka works and the world moves on
– Wolves of the Calla, Stephen King

I decided to quit because not everything in the ownCloud Inc. company world evolved as I imagined. That is not necessarily bad; it just became clear that for me it is time for a change. No regrets: I have had a splendid time at ownCloud Inc. and am proud to have it as a part of my life and identity.

Leaving the job does not mean leaving the community. I will still be around and help out, for example mentoring the LDAP Provider enhancement. My owncloud.com email address is no longer valid, please see the imprint for my personal one in case you want to contact me there.

Thank you, dear colleagues, and all the best!

on May 26, 2016 11:57 AM

May 25, 2016

Plasma Wayland ISO Checkup

Jonathan Riddell

My Plasma Wayland ISOs are building nicely, fresh each day. I asked Betty the fuzzy guinea pig to give one a try today; there are still obvious bugs, like no text on the task bar and the blue window bars being back, but she’s generally impressed at how this is likely to be a good replacement for X in the near future.

Download 1.0GB ISO

Betty the Fuzzpig Tries Plasma Wayland

 

 

on May 25, 2016 05:26 PM

There was a time when mobile applications were geared towards entertainment to help pass the time. Mobile apps have come a long way over the past few years with apps that can change your life for the better. Apps can help improve your health, some can help you manage your money, and there are some that can help you manage your business more efficiently. Mobile apps are much more than just silly games, so take a look at some things apps can help you with in your life.

Business

If you run your own business, you know things can be overwhelming sometimes. There are many great mobile apps designed to make running your business easier. Some business-related apps can assist with:

  • Offering easy-to-use design options
  • Collecting activity reports
  • Scheduling email campaigns
  • Storing and sharing presentations
  • Scanning and uploading business card details
  • Assigning tasks to staff
  • Time tracking for employees
  • Invoicing

If you need a well-designed mobile app for your business, choose a company such as Y Media Labs that will deliver a quality product.

Sleep Trackers

If you’ve ever wondered how well you really sleep at night, there’s an app for that. With a sleep tracker app, you’ll be able to get a better idea of your sleeping habits, how well you sleep, your alertness levels, and the impact of sleep loss. Some sleep trackers have features such as motion tracking, a smart alarm, and a sound recorder that detects snoring and sleep talking. A lot of sleep tracker apps connect to your activity tracker and will also count your steps and show you how active you are during the day.

Money Management

Some people would like to be more organized with their money, but sometimes that’s easier said than done. Money management apps assist with keeping a balanced budget and staying on top of your finances. If you’re looking for the most organization, choose one that will sync all of your accounts, including your 401k, bank account, mutual funds, and IRA. With these apps, you can see account totals, track transactions and expenses, and budget your money.

Fitness

Most people want to become more active to live a healthy life, and there are numerous apps to assist with getting back on track. Fitness apps are great for people who need a little extra encouragement to walk extra steps or find easy-to-do workouts they can do at home. Certain apps allow you to log everything you eat during the day and calculate the number of calories you’ve consumed. Other apps have very short workouts you can do each day that will easily fit into your schedule. Pair your favorite fitness apps with your activity tracker for a more detailed report of your overall fitness.

Navigation

Navigation apps are great because you won’t need a separate GPS system in your car anymore. GPS apps aren’t just good for driving: they also provide great assistance on hikes. GPS apps will give you directions, show you maps all over the world, and allow you to save places you frequent so you can access them quickly.

Businesses can go a long way by staying on top of modern technology and popular apps. You can also organize your life with popular apps and determine the best ways to get healthier. Apps are more than just games these days, so browse your app store and find the ones that best fit your lifestyle.

The post Mobile Apps: Not Just for Games Anymore appeared first on deshack.

on May 25, 2016 08:17 AM
I've been a bit quiet online lately. A few weeks back, my father had a stroke, from which he seemed to at least partly recover. However, last week we found that he could not recover, and was in fact dying.

He died 12 May 2016. I wrote about that a bit here: http://genweblog.blogspot.com/2016/05/rest-in-peace-ted-cowan-1926-2016.html . I was holding his hand as he passed, as was my sister. We're both happy that he is free of his pain, but are both grieving that both our parents are now dead.

Grieving is strange. Sometimes life seems normal, but sometimes not. So I will help out when I have the energy and interest, and at other times, withdraw and recharge. Talking about this is fine in open channels or privately, if you want. This is not a sensitive subject; we'll all die in the end after all.
on May 25, 2016 12:19 AM

May 24, 2016

Need some Ubuntu decoration? Unixstickers, the largest e-commerce site for Free Software and Open Source stickers and merchandise, has partnered with Canonical to start offering Ubuntu stickers!

ubuntu stickers and keyboard stickers

They are currently among the very few authorized sellers of Ubuntu swag in the world.

Throughout June, Unixstickers is offering Full Circle readers a 15% discount with the code UBUNTU15.

FULL DISCLAIMER: We make nothing from this. We’re just passing the offer along.

on May 24, 2016 05:58 PM

Autopilot: benefits of early release

Canonical Design Team

OpenStack is the leading open cloud platform, and Ubuntu is the world’s most popular operating system for OpenStack. Over the past two years we have created a tool that allows users to build an Ubuntu OpenStack cloud on their own hardware in a few simple steps: Autopilot.

This post covers the design process we followed on our journey from alpha to beta to release.

Alpha release: getting the basics right

We started by mapping out a basic Autopilot journey based on stakeholder requirements and designed a first cut of all the necessary steps to build a cloud:

  1. Choose the cloud configuration from a range of OpenStack options
  2. Select the hardware the cloud should be built on
  3. View deployment status while the cloud is being built
  4. Monitor the status and usage of the cloud

After the initial design phase Autopilot was developed and released as an alpha and a beta. This means that for over a year, there was a product to play around with, test and improve before it was made generally available.

Beta release: feedback and improvements

Providing a better overview: increased clarity in the dashboard

Almost immediately after the engineering team started building our new designs, we discovered that we needed to display an additional set of data on the storage graphs. On top of that, some guerrilla testing sessions with Canonical engineers brought to light that the CPU and the storage graphs were easily misinterpreted.

dashboard-sketches

After some more competitive research and exploratory sketching, we decided to merge the graphs for each section by putting the utilisation on a vertical axis and the time on the horizontal axis. This seemed to improve the experience for our engineers, but we also wanted to validate with users in usability testing, so we tested the designs with eight participants that were potential Autopilot users. From this testing we learned to include more information on the axes and to include detailed information on hover.

The current graphs are quite an evolution compared to what we started with:
Improved dashboard graphs

Setting users up for success: information and help before the process begins

Before a user gets to the Autopilot wizard, they have to configure their hardware, install an application called MAAS to register machines and install Landscape to get access to Autopilot. A third tool called Juju is installed to help Autopilot behind the scenes.

All these bits of software work together to allow users to build their clouds; however, they are all developed as stand-alone products by different teams. This means that during the initial design phase, it was a challenge to map out the entire journey and get a good idea of how the different components work together.

Only when the Autopilot beta was released, was it finally possible for us to find some hardware and go through the entire journey ourselves, step by step. This really helped us to identify common roadblocks and points in the journey where more documentation or in-app explanation was required.

Increasing transparency of the process: helping users anticipate what they need and when configuration is complete

Following our walk-through, we identified a number of points in the Autopilot journey where contextual help was required. In collaboration with the engineering team we gathered definitions of technical concepts, technical requirements, and system restrictions.

Autopilot walk-through

Based on this info, we made adjustments to the UI. We designed a landing page with a checklist and introduction copy, and we added headings, help text, and tooltips to the installation and dashboard page. We also included a summary panel on the configuration page, to guide users through the journey and provide instant feedback.

BR_step-by-step

GA release: getting Autopilot ready for the general public

Perhaps the most rewarding feedback we gathered from the beta release was that our early customers liked Autopilot but wanted more features. From the first designs, Autopilot aimed to help users quickly set up a test cloud, but to use Autopilot to build a production cloud, additional features were required.

Testing without the hardware: try Autopilot on VMware

One of the biggest improvements for GA release was making it easy to try Autopilot, even for people that don’t have enough spare hardware to build a cloud. Our solution: try Autopilot using VMware!

Supporting customisation: user-defined roles for selected hardware

In the alpha version a user could already select nodes, but in most enterprises users want more flexibility. Often there are different types of hardware for different roles in the cloud, so users don’t always want to automatically distribute all the OpenStack services over all the machines. We designed the ability to choose specific roles like storage or compute for machines, to allow users to make the most of their hardware.

Machine roles

Allowing users more control: a scalable cloud on monitored hardware

The first feature we added was the ability to add hardware to the cloud. This makes it possible to grow a small test cloud into a production sized solution. We also added the ability to integrate the cloud with Nagios, a common monitoring tool. This means if something happens on any of the cloud hardware, users would receive a notification through their existing monitoring system.

BR-Nagios

The benefits of early release

This month we are celebrating another release of OpenStack Autopilot. In the two years since we started designing Autopilot, we have been able to add many improvements, and it has been a great experience for us as designers to contribute to a maturing product.

We will continue to iterate and refine the features that are launched and we’re currently mapping the roadmap for the months ahead. Our goal remains for Autopilot to be a tool for users to maintain and upgrade an enterprise grade cloud that can be at the core of their operations.

 

on May 24, 2016 04:30 PM

Like many Ubuntu users, I drooled over getting a System76 laptop or desktop. They are built by Ubuntu users for Ubuntu users. I finally decided to buy one a few weeks ago.

Unboxing

Two Fridays ago, I ordered a System76 Lemur, and it finally came yesterday, just 15 minutes before I had to go to work. I only had time to photograph the unboxing and put it to charge.

The specs on this Lemur are:

A few upgrades from the base.

The unopened box. I love the fact that it says, “Unleash your potential”. And what it says about using a sharp object to open the packaging is true: you only need a sharp object to get the box itself open.


The packaging. See what I mean?

The box. I simply love how they designed the inside. It makes me want to create!

Below are some of the images from the unboxing and all the images are here:

Initial Use Review

Just before I went to sleep, I started it up. The first-time setup is simple because Ubuntu is already installed; the laptop only needs to know a few things to set it up the way you want. I also ran my restore script from the backup that I created on my desktop, and let it run while I was sleeping.

Today when I logged in, I found an error in my script, fixed it, committed it, and pushed it to GitHub. Because of this error, I had to install the needed programs by hand, but that was no biggie. I haven’t used it much yet (I’m writing this on my desktop) but it runs very smoothly and quietly.

Keyboard and Trackpad

I like the keyboard; it’s sized perfectly for the laptop and my hands. Typing on it is very smooth. The only bad thing is that it is not backlit.

Using the trackpad is also smooth, and it’s a perfect fit. I also like that there are buttons for left and right click.

Fans

The fans are very, very quiet.  No other comment needed.

Body

It’s weight is 3 pounds, but I have no comment.  I love the finish on the laptop.  It’s a gray finish and I thought it was brighter.  I also love how the System 76 logo is raised a bit and centered.  The only problem that I have is that (to me) there is a bit too much flex on the top.

Hopefully I will have a follow-up post after one week of usage, to talk about how the laptop runs.

on May 24, 2016 03:19 PM

With each moment that passes, the technology industry becomes bigger and more integral to the fabric of daily life; there’s not a sector that hasn’t benefitted from it. Most people rely on technology every time they require entertainment or an answer to a question. One aspect of the tech industry that many could do without, though, is its apparent gender bias, and while there are more women undertaking roles than in previous years, there is still a long way to go before quotas are fulfilled and women feel as though they’re truly a part of the tech sector.

The issue of diversity in the tech industry

There’s little getting around it: the tech industry is still frequently considered a male-dominated sector, perhaps not helped by the fact that fewer women are choosing to join the profession today than 20 years ago, and some 56% are choosing to leave the profession within ten years.

While it’s certainly true that there are still a great many women choosing, and gaining, high-powered positions ranking above their male counterparts, many consider their appointment to be too little, too late. So, what is it that’s putting women off from applying for positions within the tech industry? There are certainly more than enough women who are qualified to undertake such roles, after all. Perhaps the biggest factor discouraging women from applying for jobs within the technology industry is its male-dominated image; gender stereotypes have an incredible power over young girls choosing to study certain subjects, and they’re often put off so-called male courses such as math, science, and engineering. In addition, many women already believe that the industry is a sexist one, with 52% claiming that they’re aware of the gender pay-gap, and 73% discouraged from working in a typically sexist environment. These statistics are staggering, but the stereotype needn’t exist at all.

Regardless of the reasons why women are put off working within the technology industry, one thing is for sure: the tech sector is quickly becoming the world’s top industry, with numerous careers branching out from its humble beginnings. Diversity is, and should be, a major concern for the industry moving forward. Is it possible to attract more women to work with technology?

An asset to the industry: Attracting women to the tech sector

While women may not be at the fore of the technology sector, there are some incredibly strong and talented females heading the field, and inspiring young women to follow their dreams regardless of the odds seemingly stacked against them. Ginni Rometty, the CEO of IBM, for example, is an incredible role model for young girls hoping to break into information technology and engineering, while the co-founder and chair of technology giant HTC is a woman, Cher Wang. The truth is that women bring a whole range of assets to the technology industry, including the assurance of future talent, diversity and empowerment, and skills that may otherwise be overlooked. Technology impacts everybody, so surely it makes sense for women to hold the same power as men within the industry?

Much is being done to change the ways in which the technology sector operates, with improved education and training for employees, championed role models, the public challenging of negative stereotypes, and better mentoring opportunities being offered by a number of companies; this ensures that younger women, and school-aged girls, are encouraged into the industry from their youth, and inspired to do everything they can to achieve success. Indeed, there are a great many organizations currently raising concerns surrounding diversity in the tech industry, including Diversity Inc., which champions the rights of women and minority groups, as well as celebrating their importance in a variety of sectors. CEO Luke Visconti is often quick to highlight industry failings and successes in his column, ‘Ask the White Guy’, while carefully unpicking arguments against the inclusion of certain groups in the workplace, and championing those inspirational few. Now is also the time to take note of the big companies pledging to make changes, including social media platform Pinterest, which very publicly addressed its commitment to minorities, and pledged to increase its numbers of female and underrepresented employees during 2016. Such influence and dedication to change is difficult to ignore, but will it make any difference?

While much is being done to address the issue of gender bias in the technology industry, it is clear that there’s still a long way to go; women are still too easily discouraged from choosing ‘male dominated’ subjects that lead into such careers, and are under-supported in the roles they do manage to claim. Thanks to the influence of the women currently succeeding in the field, as well as organizations such as Diversity Inc., though, it is hoped that more women will pick up the mantle and make a name for themselves in the sector; their inclusion is long overdue, and most welcome.

The post Why more women are needed in the tech industry appeared first on deshack.

on May 24, 2016 12:39 PM

The versioning of the Ubuntu UI Toolkit

Ubuntu App Developer Blog

In recent days there has been lots of discussion about the versioning of the Ubuntu UI Toolkit. We thought that the topic deserves a dedicated blog post to clarify the situation and resolve some misunderstandings.

Let’s start with the background story.

The UITK releases before we opened the 1.3 branch for development were mainly targeting touch devices, and their main objective was to offer a more or less complete API set for mobile application development. The versions prior to 1.3 worked on the desktop too, but they were clearly suboptimal for those use cases because, for example, they were missing mouse and keyboard capabilities.

With the 1.3 development branch we set a single goal: with this release the UITK will offer a feature-complete API set for devices of all form factors with all kinds of capabilities. It means that applications built for the 1.3 UITK will work on a touchscreen device with a small display just as well as on a large screen with mouse and keyboard. It was a very ambitious plan, but absolutely realistic.

We decided to follow the "release early and release often" principle, so developers would have time to adapt their applications to the new APIs. At the same time we promised that whatever API we release will be supported for at least one minor revision, and that we will follow a strict and developer-friendly deprecation process if needed.

It means that even if the source code of the 1.3 UITK is not frozen, all APIs released in it are stable and safe to use.

So far we have kept our promise. There has not been a single application in the store or in the archive that suffered a functional regression due to an intentional API break in the UITK. True, the UITK has bugs. True, one can argue about whether changing the color palette qualifies as an API change or not. Not to mention the awkward situation when an application takes advantage of a bug in the UITK and loses that advantage when the bug gets fixed. We have also seen broken applications because they were using private APIs and properties.

It is absolutely true that using a frozen API set is the safest option for application developers. No doubt about it, and I do hear the opinions of those developers who wish to see a fully frozen 1.3 UITK. We do wish the same.

Now, let us visit this idea and look around a bit. I do promise that folding out the big picture will help everyone understand why the UITK is developed the way it is.

So, let us say we freeze the 1.3 UITK today. In that case we need to open the 1.4 branch, plus we would certainly open a Labs space. Before going any further, let me list what kinds of changes we make in the UITK codebase:

  1. Critical bug fixes. Right, I am sure that nobody argues with the fact that once we find or receive a report of a critical bug we have to push a fix to the supported releases as soon as possible. At this very moment we have a good number of open bug reports. About 80% of the merged branches and patches to the UITK code are bug fixes. With every OTA release we push out 10-20 critical bug fixes. It means that each bugfix needs to target both the frozen and the development branch, plus the Labs space. From the point of view of bug fixes it is important that the supported branches of the UITK do not diverge too much. One may say 1.3 should be frozen, so no bug fixes should go there, except perhaps some showstoppers. However, we have way too many of those fixes which we must land in 1.3 as well. Fragmenting the UITK, and so the platform, at this early stage might backfire later.

  2. Feature gaps for convergence. As we have stated many times, the convergence features are not yet completely implemented in the UITK. We do wish they were, but sadly they are not. It means that almost every day we push something to the UITK codebase that makes that feature gap smaller. In case we freeze the 1.3 UITK we can push these convergence features only to 1.4 and the Labs space. That would mean that all core applications would need to migrate to the 1.4 UITK, because they are the primary consumers of the convergence features.

  3. The UITK uses dynamic styling of components. The styles are loaded from a specified theme matching the version of the UITK module the component is imported from. This is necessary because themes implement UX, including behavior and looks, so just like functions in the API, developers may rely on theming when designing their apps, or even when adding custom components. We are using the property cache to detect the version of the module. As we are not planning any API additions to StyledItem, moving to 1.4 would require us to declare a dummy property just to be able to detect that the component is imported from the 1.4 version. Introducing a property just to be able to differentiate doesn’t sound really professional. Yes, the version could be set in the component itself, but that would immediately break the symlink idea (for the second time), and besides, no one guarantees that the version will be set prior to the style document name, so dual-style loading cannot be avoided. We had this API in the first version of the sub-theming, but it was removed, and perhaps that was the only API break we have made in 1.3 so far.

  4. Unit tests are also affected. They need to be duplicated, at the least, when components in 1.4 diverge in behavior and features; but even bugs in a superclass A altered in 1.4 may affect a component B which is not altered, and still fail test cases. On the other hand, Autopilot is not so flexible. While the CPOs (Custom Proxy Objects, the classes that represent QML components in Python test cases) basically do not care about the import versions, they do have problems with the API differences, and it is not so easy to differentiate for the same component which API can be used in what context. We have been discussing moving as many tests as we can to QTest (unit tests); however, there are still tons of apps using Autopilot, and we have to provide and maintain CPOs for those.

  5. The upcoming Labs space will hold the components and APIs that we do not promise to keep stable and that are subject to change even within one minor version. We need this space to experiment with features and ideas that would not be possible in a stable branch.

If we look at this picture we will see immediately that the further we go with closing the feature gaps, the more we diverge from the codebase of the frozen 1.3. Note that code change does not mean API change! We are committed to stable APIs, not to stable code. Freezing code is a luxurious privilege of very mature products. Implementing new features and fixing critical bugs in two different branches would mean that we need to fork the UITK. And that itself would bring issues which have not been seen by many. A good example of this is the recently discovered incompatibility issue between the old-style header and the refactored (to be implemented in C++) AdaptivePageLayout. To gain the performance improvements in 1.3 it is necessary to change the component completely. Furthermore, if only 1.4 started off with a rewritten AdaptivePageLayout, fixing bugs would consume considerable time in two entirely different codebases.

It is important to note that the UITK comes in a single package with a single library. Forking the UITK package is clearly not an option: applications do not have control over their dependencies. Creating multiple libraries for different versions is not an option either. Providing the UITK in a single plugin has consequences. Many developers have asked why there are no more frequent minor version bumps. The answer is simple: as long as all the versions come in a single plugin, each and every minor release increases the memory consumption of the UITK. Bumping the UITK version 3-4 times a year would result in a 10-12 times bigger memory footprint in just two years. We do not want that. And most probably when we “release” 1.4, we will need features from Qt 5.6, which means we need to bump imports in all our QML documents to 2.6. So it is a nice theory, but not a working one.
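
For context, an application opts into a toolkit version through its QML import statement, and every importable version is served by that same single plugin. A minimal sketch (the component content here is illustrative, not taken from a real app):

import QtQuick 2.4
import Ubuntu.Components 1.3  // selects the 1.3 API set from the single UITK plugin

MainView {
    width: units.gu(40)
    height: units.gu(60)

    Page {
        header: PageHeader {
            title: "Example"
        }
    }
}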

To summarize the whole story: we are where we are for good reason. The way the UITK is versioned, packaged and provided to application developers is not accidental. At the same time we do admit that, after measuring the costs and benefits of different paths, we had to make compromises. The present, so-called rolling 1.3 release is safe to use; the APIs provided by the UITK are all stable and supported. But as it is still evolving and improving, it is a good idea to follow the news and announcements of the SDK developer team. We are available pretty much 24/7 on the #ubuntu-app-devel Freenode channel, on the ubuntu-phone@lists.launchpad.net mailing list, on Telegram, and on all commonly used public platforms. We are happy to listen to you and answer your questions.

on May 24, 2016 06:34 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #465 for the weeks of May 2 – 15, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Aaron Honeycutt
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on May 24, 2016 12:47 AM

Welcome to the Ubuntu Weekly Newsletter. This is issue #466 for the week of May 16 – 22, 2016, and the full version is available here.

In this issue we cover:

The issue of The Ubuntu Weekly Newsletter is brought to you by:

  • Elizabeth K. Joseph
  • Chris Guiver
  • Chris Sirrs
  • Simon Quigley
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, content in this issue is licensed under a Creative Commons Attribution ShareAlike 3.0 License.

on May 24, 2016 12:33 AM

May 23, 2016

The importance of URLs

Stuart Langridge

“You don’t control Xykon! He controls you!”
“Like I said: subtle.”

Redcloak, The Order of the Stick

Lots of discussion about progressive web apps recently, with a general consensus among forward-thinking web people that this is the way we should be building things for the web from now on. Websites that work offline, that deal well with lie-fi1, that are responsive, that are progressive, that work everywhere but are better on devices that can cope with the glory. Alex Russell, who originally coined the term, talks about PWAs being responsive, connectivity independent, fresh, safe, discoverable, re-engageable, installable, linkable, and having app-like interactions.

We could discuss every part of that description, every word in that definition, for hours and hours, and if someone wants to nominate a pub with decent beer then I’m more than happy to have that discussion and a pint while doing it. But today, we’re talking about the word linkable.

Linkability

Jeremy tweeted:

Strongly disagree with Lighthouse wanting “Manifest’s display property set to standalone/fullscreen to allow launching without address bar.”

Jeremy Keith

A little background

First, a little background. Google Chrome attempts to detect whether the website you’re looking at “qualifies” as a Progressive Web App, because if it does then they will show an “install to home screen” banner on your second visit. This is a major improvement over the previous state of a user having to manually install a site they like to their home screen by fishing through the menus2 or using the “add to home screen” button in iOS Safari3. The Chrome team have then created Lighthouse, a tool which invokes a Chrome browser, checks whether a site passes their checks for “this looks like a PWA”, and returns a result.
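
As an aside, if you want to try Lighthouse yourself: it is distributed as an npm package, and to the best of my knowledge (treat this as an assumption; the tool is young and the usage may change) the basic invocation is:

npm install -g lighthouse
lighthouse https://example.com/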

So Jeremy’s point is this: Lighthouse is declaring that to be a valid PWA, you have to insist that when you’re added to the home screen, you stop showing the URL bar. And he doesn’t agree, because

I want people to be able to copy URLs. I want people to be able to hack URLs. I’m not ashamed of my URLs …I’m downright proud.

Jeremy Keith

This is inspirational stuff, and it’s true. URLs are important. Individual addressability of parts on the web is important.

However. (You knew there was a “however” coming.) Whether your web app shows a URL bar is not actually a thing about that web app.

A little more background

A bit more background. In order to qualify as a progressive web app, you have to provide a manifest.4 That manifest lists various properties about this web app which are useful to operating systems: what its human-readable name is, what a human-readable short name for it is, what its icon should be, a theme colour for it, and so on. This is all good.

But the manifest also lists a display mode, defined in the spec as “how the web application is being presented within the context of an OS (e.g., in fullscreen, etc.)”; in the manifest itself this is the display member. Essentially, the options for the display mode are fullscreen (the app will take all the screen; hardware keys and the status bar will not be shown), standalone (no browser UI is shown, but the hardware keys and status bar will be displayed), and browser (the app will be shown with normal browser UI, ie. as a normal website).
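
For illustration, a minimal manifest along these lines might look like the following (the name, icon path, and colour are invented for the example):

{
  "name": "Example Progressive Web App",
  "short_name": "Example",
  "icons": [
    { "src": "icon-192.png", "sizes": "192x192", "type": "image/png" }
  ],
  "theme_color": "#3366cc",
  "display": "standalone"
}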

Now we see Jeremy’s point. Chrome propose that you only qualify as a “real” PWA if you request “fullscreen” or “standalone” mode: that is, that you hide the URL bar. Jeremy says that URLs are important; they’re not a thing to hide away or to pretend that don’t exist. And he has a point. The hackability of URLs is surprisingly important, and unsurprisingly dismissed by app developers who want to lock down the user experience.

But, and this is the important point, whether a web app shows its URLs is not a property of that app. It’s a property of how that app’s developer thinks about the web.

Property versus preference

If Jeremy and I were both to work on a website, and then discuss what should be in the manifest, we’d agree on what the app’s name was, what a shortened name was, what the icon is. But we might disagree on whether the app should show a URL bar when launched. That disagreement isn’t about the app itself; it’s about whether you, the developer, think it’s OK to hide that an app is actually on the web, or whether you should proudly declare that it’s on the web. That doesn’t differ from app to app; it differs from developer to developer. The app manifest declares properties of the app, but the display property isn’t about the app; it’s about how the app’s developer wants it to be shown. Do they want to proudly declare that this app is on the web and of the web? Then they’ll add the URL bar. Do they want to conceal that this is actually a web app in order to look more like “native” apps? Then they’ll hide the URL bar. The display property feels rather less like it’s actually tied to the app, and rather more like it should be chosen at “add-to-home-screen” time by the user; do you, the bookmarking user, prefer to think of this as a web thing? Include the URL bar. Do you want to think of it as an app which doesn’t involve the web? Hide the URL bar. It’s a preference. It’s not a property.

On the desktop

The above argument stands alone. But there are additional issues with having a URL bar showing on an added-to-home-screen web app. We should discuss these separately, but here I have them in the same essay because it’s all relevant.

The additional issue is, essentially, this. On my desktop — not my phone — I add an app to my “home screen”. This might add it to my desktop as a shortcut icon, or to my Start Menu, or in the Applications list, or all of the above, depending on which OS I’m on.5 If that PWA declares itself as being standalone then how to handle it is obvious: open it in a new window, with no URL bar showing. Similarly, fullscreen web apps launched from an icon should be full screen. But what do we do when launching a browser display-mode web app on a desktop?

Since we’re launching something indistinguishable from just another browser tab, it should launch a browser tab, right? I mean, we’re opening something which is essentially a bookmark. But… wouldn’t it feel strange to you to pick something from your app menu or an icon from your desktop and have it just open a browser tab? It would for me, at least. So maybe we should launch a new browser window, with URL bar intact, as though you’d clicked “open in new window” on a link. But then I’d have a whole new browser window for something which doesn’t really deserve a whole new window; it’s just one more web page, so why does it get a window by itself? I manage my browser windows according to project; window A has tabs relevant to project A, window B has tabs relevant to project B, and so on. I don’t want a whole new window, and indeed I have extensions installed so that links which think they deserve a new window actually get a new tab instead.

It’s not very clear what should happen here. The whole idea of launching a website from an OS-level icon doesn’t actually mesh very well at all with the idea of tabbed browser windows. It does mesh well with the 2002-era idea of a-new-browser-window-for-every-URL, but that idea has gone away. We have tabbed browsing, and people like it.6

The Chrome team’s idea, that basically you can’t add an “OS-level bookmark” for a website which wants to be treated as a website, avoids these problems.

Jeremy’s got a point, though. Hiding away URLs, pretending that this thing you’re looking at is a “native” app, does indeed sacrifice one of the key strengths of the web — that everything’s individually addressable. You can’t bookmark the “account” page in Steam, or the “settings” window in Keynote or Word or LibreOffice. With the web, you can. That’s a good thing. We shouldn’t give it up lightly. But you already can’t do that for apps which use web technologies but pretend to be native. If Word or iTunes used a WebView to render its preferences dialog, would it be good if you could link directly to it with a URL like itunes://settings? Yes it would. Would it be good if the iTunes user interface had a URL bar at the top showing that URL all the time? Not really, no.

There is a paternalism discussion, here. URLs are a good thing about the web; the addressability of parts is a good thing about the web. People don’t necessarily appreciate that. How much effort should we put into making this stuff available even though people don’t want it, because they’re wrong to not want it? Do we actually know better than they do? I think: yes we do.7 But I don’t know how important that is, when we can also win people over to the web by pretending that it’s native apps, which is what people wrongly want.

Conclusions

On balance, therefore, I approve of the Lighthouse team’s idea that you don’t qualify as an add-to-home-screen-able app if you want a URL bar. I can see the argument against this, and I do agree that we’re giving up something important, something fundamental to the web by hiding away URLs. But I think that wanting to see the URL is not a property of an app; it’s a property of how you personally want to deal with apps. So browsers should, when adding things to the home screen, pretend that display:browser actually said display:standalone, but give people who care the ability to override that if they want. And if we want more people to care, then that’s what evangelism is for; having individual app developers decide how they want their app to be displayed just leads to fragmentation. Let’s educate people on why URLs are important, and then they can flip a switch and see the URLs for everything they use… but until we’ve convinced them, let’s not force them to see the URLs when what they want is a native-like experience.

  1. Jake Archibald eloquently names lie-fi as that situation where your phone claims to have a connection but actually it doesn’t, the lying sack of dingo’s entrails that it is, and just spins forever when you tell it to connect to a website. If you’ve ever toggled a device into airplane mode and back out again, you know what we’re talking about
  2. although the Chrome approach is not without its problems
  3. which is obscure enough that Matteo Spinelli made a library to show a pointer to the add-to-home-screen button; the library is wonderful, don’t get me wrong, but it ought to not need to exist
  4. If you don’t know how to create one, see the manifest generator that Bruce and I created
  5. and it should be noted that basically nobody actually handles PWAs properly on desktop yet; it’s all about mobile. But desktop is coming, and we’ll need to solve this.
  6. Whether tabbed browsing actually makes conceptual sense is not up for discussion, here; we’ve collectively decided to use it, much as we’ve collectively decided that one-file-manager-window-per-folder isn’t the way we want to go either.
  7. Hubris is a great idea. The Greeks taught us that.
on May 23, 2016 11:22 PM

PostBooks 4.9.5 was recently released and the packages for Debian (including jessie-backports), Ubuntu and Fedora have been updated.

PostBooks at pgDay.ch in Rapperswil, Switzerland

pgDay.ch is coming on Friday, 24 June. It is at the HSR Hochschule für Technik Rapperswil, at the eastern end of Lake Zurich.

I'll be making a presentation about PostBooks in the business track at 11:00.

Getting started with accounting using free, open source software

If you are not currently using a double-entry accounting system or if you are looking to move to a system that is based on completely free, open source software, please see my comparison of free, open source accounting software.

Free and open source solutions offer significant advantages: flexibility, since businesses can choose any programmer to modify the code; and SQL back-ends, multi-user support, and multi-currency support as standard. These are all things that proprietary vendors charge extra money for.

Accounting software is the lowest common denominator in the world of business software; people keen on the success of free and open source software may find that encouraging businesses to use one of these solutions is a great way to lay a foundation where other free software solutions can thrive.

PostBooks' new web and mobile front end

xTuple, the team behind PostBooks, has been busy developing a new web and mobile front end for their ERP, CRM and accounting suite, powered by the same PostgreSQL backend as the Linux desktop client.

More help is needed to create official packages of the JavaScript dependencies before the Web and Mobile solution itself can be packaged.

on May 23, 2016 05:35 PM

Last year I joined GitHub as Director Of Community. My role has been to champion and manage GitHub’s global, scalable community development initiatives. Friday was my last day as a hubber and I wanted to share a few words about why I have decided to move on.

My passion has always been about building productive, engaging communities, particularly focused on open source and technology. I have devoted my career to understanding the nuances of this work and which workflow, technical, psychological, and leadership ingredients can deliver the most effective and rewarding results.

As part of this body of work I wrote The Art of Community, founded the annual Community Leadership Summit, and I have led the development of community at Canonical, XPRIZE, OpenAdvantage, and for a range of organizations as a consultant and advisor.

I was attracted to GitHub because I was already a fan and was excited by the potential within such a large ecosystem. GitHub’s story has been a remarkable one and it is such a core component in modern software development. I also love the creativity and elegance at the core of GitHub and the spirit and tone in which the company operates.

Like any growing organization though, GitHub will from time to time need to make adjustments in strategy and organization. One component in some recent adjustments sadly resulted in the Director of Community role going away.

The company was enthusiastic about my contributions and encouraged me to explore some other roles that included positions in product marketing, professional services, and elsewhere. So, I met with these different teams to explore some new and existing positions and see what might be a good fit. Thanks to everyone in those conversations for your time and energy.

Unfortunately, I ultimately didn’t feel they matched my passion and skills for building powerful, productive, engaging communities, as I mentioned above. As such, I decided it was time to part ways with GitHub.

Of course, I am sad to leave. Working at GitHub was a blast. GitHub is a great company and is working on some valuable and important areas that strike right at the center of how we build great software. I worked with some wonderful people and I have many fond memories. I am looking forward to staying in touch with my former colleagues and executives and I will continue to be an ardent supporter, fan, and user of both GitHub and Atom.

So, what is next? Well, I have a few things in the pipeline that I am not quite ready to share yet, so stay tuned and I will share this soon. In the meantime, to my fellow hubbers, live long and prosper!

on May 23, 2016 03:20 PM

May 22, 2016

As I said in this post, I’m doing a series of blog posts about the programs that I use on Ubuntu.

The first program that I want to talk about is GitHub’s Atom code editor. I tried a few code editors (mostly for working with Markdown) and none of them is as awesome as Atom. Why? See below:

Screenshot from 2016-05-21 17-45-46 Screenshot from 2016-05-21 17-48-01

In the first screenshot on the left, you can see that I’m working on the Ubuntu Membership workshop (that I plan to do a bit before the global jam or after it). The first panel on the left is the file manager for that specific project. The middle is the editor, where syntax highlighting is happening for Markdown. And the last panel on the right is the Markdown preview for that file.

In the next and final screenshot on the right, you can see the same file being worked on, but the middle panel shows what I have added (in green), deleted (in red), and changed (dark yellow) since the last commit. It’s a handy feature.

I have only used Atom for a week now, and I don’t have much to say since I only work with Markdown files at the moment. But that will change as I start to work on my coding projects, as stated here. I know that Jono Bacon wrote an amazing post on Atom, and I will check out the other features he covers and maybe report on them here. Most likely, I will have a follow-up post once I start to code more.

As for this series, I will publish one post every Sunday until I run out of programs to talk about. The next one will be about Mudlet, an M* client that I use to play my favorite text-based online roleplay.

See you next week!

on May 22, 2016 04:18 PM

May 20, 2016

So I’ve been using Ubuntu on my Nexus 7 (2013 Wi-Fi) for a few weeks. I’ve hit a few bugs here and there, since I’ve been on rc-proposed (weekday images) pretty much the whole time. One of the most annoying ones was that the Unity 8 shell would not rotate from landscape to portrait; if you ever used the Nexus 7 2012 or 2013, then you know this tablet is weighted for perfect portrait use. While apps have been able to rotate for a while now, the Apps Scope, as well as the other scopes, could not until last week.

[Screenshots: the shell in landscape (left) and in portrait (right)]

I’ve seen other, bigger bugs get fixed in the last week alone! Like the Camera app giving an error message after *trying* to record a video:

screenshot20160505_150027205

Or the Camera app not rotating correctly:

screenshot20160505_145933408

I’ve tried to use the tablet for my work on Kubuntu:

Editing files on the docs.kubuntu.org server with nano 🙂

screenshot20160412_182609436

Also having a tablet has helped me test my app uBeginner for AdaptivePageLayout: (insert shameless self promotion!)

screenshot20160423_193017448

I also enabled Read and Write Mode so I could install other applications, like Vim, for the hell of it lol

screenshot20160424_192216498

I’ve also found that Hangouts Video calling works in the default Ubuntu Web Browser:

screenshot20160512_194207585

while in Convergence Mode, no less! The only issue was that I could not switch to the front camera, and I had a bit of echo on my end. With that, I’ll show some more Convergence screenshots:

screenshot20160509_160640021 screenshot20160509_160629670 screenshot20160505_210405301

The next post will have some of my progress (thanks to the awesome other developers) on my new project uCycle. 🙂

 

on May 20, 2016 10:49 PM

Screenshot_20160518_160431

1. sudo apt-add-repository ppa:kubuntu-ppa/backports
2. sudo apt update
3. sudo apt full-upgrade -y

on May 20, 2016 08:15 PM

Colour palette updates

Canonical Design Team

Over the past few months, we’ve given you a peek into our evolving Suru design language with our wallpaper, convergence and Clock App blog posts.

Suru is based on Japanese aesthetics: minimalism, lightness and fluidity. This is why you will see the platform moving to a cleaner and more modern look.

Naturally, part of this evolution is colour. The new palette features a lightened set of colours with clearly defined usage, creating a better visual hierarchy and an improved sense of visual harmony.

On the technical side, SDK colour handling has been improved so that in the future colour usage will be more consistent and less prone to bugs.

The new palette has also expanded in scope, with more colours to enable the creation of designs with greater depth, particularly as we move towards convergence, and it reflects the design values we wish to impart to the platform.

colour_palette

The new palette

Like our SDK, the colour palette has to be scalable. As we worked on visual designs for both apps and the shell, we found that having a colour palette which only contained six colours was too limiting. When approaching elements like the indicators or even shadows in the task switcher, light grey and dark grey weren’t going to be deep enough to stand out on a windowed view, where you have wallpaper and other windows to compete with.

The new palette is made up of thirteen colours. Some noticeable differences are an overall lighter look and additional greys. Purple is gone and we’ve added a yellow to the palette. This broader palette works to solve bugs where contrast and visibility were an issue, especially with the dark theme.

We chose the colours by iteratively reworking the visual designs across the whole platform and discovering what was needed as we designed. The greys came about when we worked on the revamped dark theme and shell projects, for example the upcoming contextual menus. While we added several new greys, the UI itself has taken on a less grey look.

App backgrounds have been upped to white, and the grey neutral button has been lightened to Porcelain (a very light grey) to keep it from looking disabled. These changes were made to improve visibility and contrast, to lighten the palette across the board and to keep everything consistent.

 

nearby_scopes
The previous design of the Nearby scope (left) and the updated design using the new palette (right). The background is lighter, buttons don’t look as disabled and text has higher contrast against the background.

 

The new palette allows developers more flexibility, as the theme is now dynamic rather than the colours being hard-coded into the UI as they were previously. In fact, our palette theme is built upon a layering system, for which you can find the tutorial here.

Changing the use of orange

Previously, orange was used liberally throughout our SDK; however, such wide-ranging use of orange caused a number of UX issues:

  • Orange is so close to red that, at a glance, components could be misconstrued to be in an error state when in fact their current state was nominal.
  • Because orange is so close to red, the frequent use of orange made it harder for users to pick out actual error states in the UI.
  • Orange attracts the eye to wherever it is used, but frequently these elements didn’t warrant such a high level of visibility.

Around the same time as these issues were identified, we were also working on the design of focus states for keyboard navigation.

A focus state needs to be instantly visible so that a user can effortlessly see which item is focused without having to pause, look harder, and think.

After exploring a wide range of different concepts, we settled on using an orange frame as our keyboard navigation focus state. However, the use of this frame only worked if orange in all other areas was significantly toned down.

In order to fix the UX issues with the overuse of orange and to enable the use of an orange frame as our keyboard navigation focus state, the decision was made to be much more selective as to where and when orange should be applied.  The use of orange should now be limited to a single hero item per surface in addition to its use as our keyboard focus state.

This change has:

  • Improved visual hierarchy
  • Made error states instantly recognisable
  • Enabled the use of an orange frame as the keyboard navigation focus state

Usage of blue

For many years blue has been used in Ubuntu to alert the user to activities that are neutral (neither positive nor negative). Examples include the Launcher pips changing to blue to give the user a persistent notification of an app alert, or the messaging menu indicator changing to blue to indicate unread messages.

Previously, in some (but not all) cases, orange was used to represent selected and activity states, but effective keyboard navigation had not yet been designed for Unity.

As part of our work on focus states, we also needed to consider a consistent visual language for select states, with a key requirement being that an item could be both focused and selected at the same time.

After much research, experimentation and testing, blue was chosen as the Ubuntu selected-state colour. Blue has also returned to being used for neutral activity, for example in progress bars. The use of blue for selected and other activity states works with almost all other elements, on both dark and light backgrounds, and stands out clearly and precisely when used in combination with a focus state.

Now that our usage of colour is more precisely and consistently defined (with orange = focus, blue = selected and neutral activity), you will see the use of orange minimised so that it stands out as a focus state, and more blue replacing orange’s previous selection and activity uses.

 

Inbox
The section headers now use blue to indicate which section is selected. This works well with the new focus frame that can be present when keyboard navigation is active.

The future for the palette

Colour is important for aesthetics (the palette needs to work together visually), but it also needs to convey meaning, so a semantic approach is critical for maximum usability. Some colours have cultural meanings; others have meanings applied by their context.

By extending the colours in our palette and organising them in a semantic way, we have created a stable framework of colour that developers can use to build their apps without time-consuming and unnecessary work. We can now be confident that our Suru design values are being consistently applied to every colour-related design problem as we move forward with designing and building convergence.

on May 20, 2016 11:18 AM

May 19, 2016

S09E12 – Accordian Man - Ubuntu Podcast

Ubuntu Podcast from the UK LoCo

It’s Episode Twelve of Season Nine of the Ubuntu Podcast! Alan Pope, Mark Johnson, Laura Cowen and Martin Wimpress are connected and speaking to your brain.

We’re here again!

In this week’s show:

That’s all for this week! If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Comment on our Facebook page or comment on our Google+ page or comment on our sub-Reddit.

on May 19, 2016 02:00 PM

May 18, 2016

Python Bindings

Python provides a C API for defining modules which can be imported into a Python environment. That C interface is often used to provide bindings to existing libraries written in C++, and there are multiple existing technologies for that, such as PyQt, PySide, Boost.Python and pybind.

pybind seems to be a more modern implementation of what Boost.Python provides. To create a binding, C++ code is written describing how the target API is to be exposed. That gives the flexibility of defining precisely which methods get exposed to the python environment and which do not.

PyQt and PySide are also similar in that they require the maintenance of a binding specification. In the case of PySide, that is an XML file, and in the case of PyQt that is a DSL which is similar to a C++ header. The advantage both of these systems have for Qt-based code is that they ship with bindings for Qt, relieving the binding author of the requirement to create those bindings.

PyKF5 application using KItemModels and KWidgetsAddons

Generated Python Bindings

For large library collections such as Qt5 and KDE Frameworks 5, maintaining a binding specification by hand quickly becomes impractical, so it is common to have the bindings generated from the C++ headers themselves.

The way that KDE libraries and bindings were organized in the time of KDE4 led to PyQt-based bindings being generated in a semi-automated process, then checked into the SCM repository and maintained there. The C++ header parsing tools used to do that were written before the standardization of C++11 and have not kept pace with compilers adding new language features, or with C++ headers using them.

Automatically Generated Python Bindings

It came as a surprise to me that no bindings had been completed for the KDE Frameworks 5 libraries. An email from Shaheed Haque about a fresh approach to generating bindings using clang looked promising, but he was hitting problems with linking binding code to the correct shared libraries and with generally figuring out what the end-goal looks like. Having used clang APIs before, and having some experience with CMake, I decided to see what I could do to help.

Since then I’ve been helping get the bindings generator into something of a final form for the purposes of KDE Frameworks, and for any other Qt-based or even non-Qt-based libraries. The binding generator uses the clang python cindex API to parse the headers of each library and generate a set of sip files, which are then processed to create the bindings. As the core concept of the generator is simply ‘use clang to parse the headers’, it can be adapted to other binding technologies in the future (such as PySide). PyQt-based bindings are the current focus because that fills a gap between what was provided with KDE4 and what is provided by KDE Frameworks 5.
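
To make that concrete, here is a minimal sketch of the core idea, using the clang python cindex API to walk a header and print the public methods a generator would need to consider. This is only an illustration, not the actual generator; the header name and compiler flags are made-up examples.

  #!/usr/bin/env python
  # Sketch: parse a C++ header with clang's cindex API and list the
  # public methods. Header path and flags are hypothetical examples.
  import clang.cindex

  index = clang.cindex.Index.create()
  tu = index.parse('kselectionproxymodel.h', args=['-x', 'c++', '-std=c++11'])

  def visit(cursor):
      # Recursively walk the AST, printing public C++ methods.
      if (cursor.kind == clang.cindex.CursorKind.CXX_METHOD and
              cursor.access_specifier == clang.cindex.AccessSpecifier.PUBLIC):
          print('%s::%s' % (cursor.semantic_parent.spelling, cursor.spelling))
      for child in cursor.get_children():
          visit(child)

  visit(tu.cursor)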

All of that is internal though and doesn’t appear in the buildsystem of any framework. As far as each buildsystem is concerned, a single CMake macro is used to enable the build of python (2 and 3) bindings for a KDE Frameworks library:


  ecm_generate_python_binding(
    TARGET KF5::ItemModels
    PYTHONNAMESPACE PyKF5
    MODULENAME KItemModels
    SIP_DEPENDS
      QtCore/QtCoremod.sip
    HEADERS
      ${KItemModels_HEADERS}
  )

Each of the headers in the library is parsed to create the bindings, meaning we can then write this code:


  #!/usr/bin/env python
  #-*- coding: utf-8 -*-

  import sys

  sys.path.append(sys.argv[1])

  from PyQt5 import QtCore
  from PyQt5 import QtWidgets

  from PyKF5 import KItemModels

  app = QtWidgets.QApplication(sys.argv)

  stringListModel = QtCore.QStringListModel(
    ["Monday", "Tuesday", "Wednesday",
    "Thursday", "Friday", "Saturday", "Sunday"]);

  selectionProxy = KItemModels.KSelectionProxyModel()
  selectionProxy.setSourceModel(stringListModel)

  w = QtWidgets.QWidget()
  l = QtWidgets.QHBoxLayout(w)

  stringsView = QtWidgets.QTreeView()
  stringsView.setModel(stringListModel)
  stringsView.setSelectionMode(
    QtWidgets.QTreeView.ExtendedSelection)
  l.addWidget(stringsView)

  selectionProxy.setSelectionModel(
    stringsView.selectionModel())

  selectionView = QtWidgets.QTreeView()
  selectionView.setModel(selectionProxy)
  l.addWidget(selectionView)

  w.show()

  app.exec_()

and it just works with python 2 and 3.

Other libraries’ headers are more complex than KItemModels, so they have an extra rules file to maintain. The rules file is central to the design of this system in that it defines what to do when visiting each declaration in a C++ header file. It contains several small databases for handling declarations of containers, typedefs, methods and parameters, each of which may require special handling. The rules file for KCoreAddons is here.

The rules file contains entries to discard methods which can’t be called from Python (in the case of heavily templated code, for example), or to replace the default implementation of the binding code with something else, in order to implement memory management correctly or to integrate better with Python built-in types.

Testing Automatically Generated Python Bindings

Each of the KDE Frameworks I’ve so far added bindings for gets a simple test file to verify that the binding can be loaded in the Python interpreter (2 and 3). The TODO application in the screenshot is in the umbrella repo.
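
Such a test can be as small as a few lines. A sketch of what one might look like (the module name and path handling here are illustrative, not the exact test shipped with the frameworks):

  #!/usr/bin/env python
  # Smoke test sketch: the import itself fails if the binding is broken.
  import sys

  # First argument: path to the directory containing the generated PyKF5 package.
  sys.path.append(sys.argv[1])

  from PyKF5 import KItemModels

  print("KItemModels binding loaded successfully")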

The binding generator itself also has tests to ensure that changes to it do not break generation for any framework. Actually extracting the important information using the cindex API is quite difficult and encounters many edge cases, like QStringLiteral (which actually includes a lambda) appearing in a parameter initializer.

Help Testing Automatically Generated Python Bindings

There is a call to action for anyone who wishes to help on the kde-bindings mailing list!


on May 18, 2016 09:18 PM

Screenshot_20160518_160431

Come over to #kubuntu-devel on freenode IRC if you care to test from backports-landing.

on May 18, 2016 08:03 PM

On Friday last week I flew out to Austin to run the Community Leadership Summit and join OSCON. When I arrived in Austin, I called home and our son, Jack, was rather upset. It was clear he wasn’t just missing daddy, he also wasn’t feeling very well.

As the week unfolded he developed strep throat. While a fairly benign issue in the scheme of things, it is clearly uncomfortable for him and pretty scary for a 3-year-old. With my wife, Erica, flying out today to also join OSCON and perform one of the keynotes, it was clear that I needed to head home to take care of him. So, I packed my bag, wrestled to keep the OSCON FOMO at bay, and headed to the airport.

Coordinating the logistics was no simple feat, and it was stressful. We both feel awful when Jack is sick, and we had to coordinate new flights, reschedule meetings, notify colleagues and hand over work, arrange coverage for the few hours in between her leaving and me landing, and deal with other things. As I write this I am on the flight heading home, and at some point she will zoom past me on another flight heading to Austin.

Now, none of this is unusual. Shit happens. People face challenges every day, and many far worse than this. What struck me so notably today though was the sheer level of kindness from our friends, family, and colleagues.

People wrapped around us like a glove. Countless people offered to take care of responsibilities, help us with travel and airport runs, share tips for helping Jack feel better, provide sympathy and support, and more.

This was all after a weekend of running the Community Leadership Summit, an event that solicited similar levels of kindness. There were volunteers who got out of bed at 5am to help us set up, people who offered to prepare and deliver keynotes and sessions, coordinate evening events, equipment, sponsorship contributions, and help run the event itself. Then, to top things off, there were remarkably generous words and appreciation for the event as a whole when it drew to a close.

This is the core of what makes community so special, and so important. While at times it can seem the world has been overrun with cynicism, narcissism, negativity, and selfishness, we are instead surrounded by an abundance of kindness. What helps this kindness bubble to the surface are great relationships, trust, respect, and clear ways in which people can play a participatory role and support each other. Whether it is something small like helping Erica and me to take care of our little man, or something more involved such as an open source project, it never ceases to inspire and amaze me how innately kind and collaborative we are.

This is another example of why I have devoted my life to understanding every nuance I can of how we can tap into and foster these fundamental human instincts. This is how we innovate, how we make the world a better place, and how we build opportunity for everyone, no matter what their background is.

When we harness these instincts, understand the subtleties of how we think and operate, and wrap them in effective collaborative workflows and environments, we create the ability to build and disrupt things more effectively than ever.

It is an exciting journey, and I am thankful every day to be joined on it by so many remarkable people. We are going to build an exciting future together and have a rocking great time doing so.

on May 18, 2016 07:48 PM

Increased security, reliability and ease of use, now available on Raspberry Pi

May 18, 2016, London. Today Screenly, the most popular digital signage solution for the Raspberry Pi, and Canonical, the company behind Ubuntu, the world’s most popular open-source platform, jointly announce a partnership to build Screenly on Ubuntu Core. Screenly is adopting Ubuntu Core to give its customers a stable platform that is secure, robust, simple to use and manage, all available on a $35 Raspberry Pi.

Screenly commercialises an easy-to-install digital signage box or “player” and a cloud-based interface that today powers thousands of screens around the world. This enables restaurants, universities, shops, offices and anyone with a modern TV or monitor to create a secure, reliable digital sign or dashboard. This cost-effective solution is capable of displaying full HD quality moving imagery, web content and static images.

Ubuntu Core offers a production environment for IoT devices. In particular, this new “snappy” rendition of Ubuntu offers the ability to update and manage the OS and any applications independently. This means that Screenly players will be kept up to date with the latest version of the Screenly software, and will also benefit from continuous OS updates for enhanced security, stability and performance. Transactional updates mean that any update can automatically be rolled back, ensuring reliable performance even in a failed update scenario.

Furthermore, Ubuntu Core devices can be managed from a central location, allowing Screenly users to manage a globally distributed fleet of digital signs easily and without expensive on-site visits. A compromised display can be corrected immediately, and the security of devices in the public sphere is drastically improved.

Viktor Petersson, CEO of Screenly explains, “Ubuntu Core enables us to be more flexible and to focus on our software rather than managing an OS and software distribution across our large fleet of devices.”

Ubuntu Core also offers a standardised OS and interfaces, available across a variety of chipsets and hardware. This means that Screenly can expand its portfolio of players across platforms without the costs traditionally associated with porting software to a new architecture.

Viktor Petersson at Screenly continues, “In terms of hardware, it can run on multiple hardware platforms and therefore if one of our partners requires a different hardware platform, the need to rebuild and retest our whole solution for a new OS goes away. This takes away bargaining power of the hardware vendor and gives the power back to the service providers, which for us means we’ll see greater innovation in this area.”

Mark Shuttleworth, Canonical founder adds, “Ubuntu Core is perfectly suited to applications in digital signage. Its application isolation and transactional updates provide unrivalled security, stability and ease of use, something vital for constantly visible content. We’re pleased to be working with Screenly, whose agile approach is a perfect example of innovation in the digital signage space.”

on May 18, 2016 11:00 AM

May 17, 2016

Thank you CC

Mark Shuttleworth

Just to state publicly my gratitude that the Ubuntu Community Council has taken on their responsibilities very thoughtfully, and has demonstrated a proactive interest in keeping the community happy, healthy and unblocked. Their role is a critical one in the Ubuntu project, because we are at our best when we are constantly improving, and we are at our best when we are actively exploring ways to have completely different communities find common cause, common interest and common solutions. They say that it’s tough at the top because the easy problems don’t get escalated, and that is particularly true of the CC. So far, they are doing us proud.

 

on May 17, 2016 08:16 PM

Lubuntu.me is back

Lubuntu Blog

First of all, we need to apologise for being offline for several days due to server problems. Now everything’s solved and working fine. The download links have been repaired, the usual sections are still there and, of course, the blog and commenting ability have been restored. Again, sorry for the annoyance and… Happy downloading!
on May 17, 2016 07:43 PM

Random pairing with Python

Charles Profitt

I am an adviser to a high school robotics team and wrote a small Python script to solve a pairing problem. We are starting our spring fund-raising drive and I … Continue reading
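
The full script is in the original post, which is truncated here. Purely as an illustration of the general idea (and not Charles’s actual code), random pairing can be as simple as shuffling a copy of the list and slicing it into pairs:

  import random

  def random_pairs(names):
      # Shuffle a copy so the original list is left untouched.
      shuffled = random.sample(names, len(names))
      # Pair off consecutive entries.
      pairs = [tuple(shuffled[i:i + 2]) for i in range(0, len(shuffled) - 1, 2)]
      # With an odd number of names, the leftover person joins the last pair.
      if pairs and len(shuffled) % 2:
          pairs[-1] += (shuffled[-1],)
      return pairs

  print(random_pairs(["Ada", "Grace", "Alan", "Edsger", "Barbara"]))
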
on May 17, 2016 03:01 PM

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, 116.75 work hours were dispatched among 9 paid contributors. Their reports are available:

  • Antoine Beaupré did 16h.
  • Ben Hutchings did 12.25 hours (out of 15 hours allocated + 5.50 extra hours remaining, he returned the remaining 8.25h to the pool).
  • Brian May did 10 hours.
  • Chris Lamb did nothing (instead of the 16 hours he was allocated); his hours have been redispatched to other contributors over May.
  • Guido Günther did 2 hours (out of 8 hours allocated + 3.25 remaining hours, leaving 9.25 extra hours for May).
  • Markus Koschany did 16 hours.
  • Santiago Ruano Rincón did 7.50 hours (out of 12h allocated + 3.50 remaining, thus keeping 8 extra hours for May).
  • Scott Kitterman posted a report for 6 hours made in March but did nothing in April. His 18 remaining hours have been returned to the pool. He decided to stop doing LTS work for now.
  • Thorsten Alteholz did 15.75 hours.

Many contributors did not use all their allocated hours. This is partly explained by the fact that, in April, Wheezy was still under the responsibility of the security team, so the LTS contributors were not able to drive updates from start to finish.

In any case, this means that they have more hours available over May, and since the LTS period has now started, they should hopefully be able to make a good dent in the backlog of security updates.

Evolution of the situation

The number of sponsored hours reached a new record of 132 hours per month, thanks to two new gold sponsors (Babiel GmbH and Plat’Home). Plat’Home’s sponsorship was aimed at helping us maintain Debian 7 Wheezy on armel and armhf (on top of the already supported amd64 and i386). Hopefully the trend will continue so that we can reach our objective of funding the equivalent of a full-time position.

The security tracker currently lists 45 packages with a known CVE and the dla-needed.txt file lists 44 packages awaiting an update.

This is a bit more than the 15-20 open entries that we used to have at the end of the Debian 6 LTS period.

Thanks to our sponsors


on May 17, 2016 01:57 PM

This weekend we had the GStreamer Spring Hackfest 2016 in Thessaloniki, my new home town. As usual it was great meeting everybody in person again, having many useful and interesting discussions and seeing what everybody was working on. It seems like everybody was quite productive during these days!

Apart from the usual discussions, cleaning up some of our Bugzilla backlog and getting various patches reviewed, I was working with Luis de Bethencourt on writing a GStreamer plugin with a few elements in Rust. Our goal was to be able to read and write a file, i.e. implement something like the “cp” command around gst-launch-1.0 using just the new Rust elements, while trying to have as little code written in C as possible and ending up with a more-or-less general Rust API for writing more source and sink elements. That’s all finished, including support for seeking, and I also wrote a small HTTP source.

For the impatient, the code can be found here: https://github.com/sdroege/rsplugin

Why Rust?

Now you might wonder why you would want to go through all the trouble of creating a bridge between GStreamer in C and Rust for writing elements. Other people have written much better texts about the advantages of Rust, which you might want to refer to if you’re interested: The introduction of the Rust documentation, or this free O’Reilly book.

But for myself the main reasons are that

  1. C is a rather antique and inconvenient language compared to more modern ones; Rust provides a lot of features from higher-level languages while not introducing all the overhead that comes with them elsewhere, and
  2. even more important are the safety guarantees of the language, including the strong type system and the borrow checker, which make whole categories of bugs much less likely. This saves you time during development, and it also saves your users from having their applications crash on them in the best case, or their precious data lost or stolen in the worst.

Rust is not a panacea, and not even the perfect programming language for every problem, but I believe it has a lot of potential in various areas, including multimedia software, where you have to handle lots of complex formats coming from untrusted sources and still need to provide high performance.

I’m not going to write a lot about the details of the language; for that, just refer to the website and the very well written documentation. But although it is a very young language not developed by a Fortune 500 company (it is developed by Mozilla and many volunteers), it is nowadays already being used in production at places like Dropbox and in Firefox (their MP4 demuxer, and in the near future the URL parser). It is also used by Mozilla and Samsung for their experimental, next-generation browser engine Servo.

The Code

Now let’s talk a bit about how it all looks. Apart from Rust’s standard library (for all the basics and file IO), we also use the url crate (crates are Rust’s term for libraries) for parsing and constructing URLs, and the HTTP server/client crate called hyper.

On the C side we have all the boilerplate code for initializing a GStreamer plugin (plugin.c), which then directly calls into Rust code (lib.rs), which in turn calls back into C (plugin.c) to register the actual GStreamer elements. The GStreamer elements themselves then have an implementation written in C (rssource.c and rssink.c), which is a normal GObject subclass of GstBaseSrc or GstBaseSink, but instead of doing the actual work in C it just calls into Rust code again. For that to work, some metadata is passed to the GObject class registration, including a function pointer to a Rust function that creates a new instance of the “machinery” of the element. This then implements the Source or Sink traits (similar to interfaces) in Rust (rssource.rs and rssink.rs):

pub trait Source: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn is_seekable(&self) -> bool;
    fn get_size(&self) -> u64;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn fill(&mut self, offset: u64, data: &mut [u8]) -> Result<usize, GstFlowReturn>;
    fn do_seek(&mut self, start: u64, stop: u64) -> bool;
}

pub trait Sink: Sync + Send {
    fn set_uri(&mut self, uri_str: Option<&str>) -> bool;
    fn get_uri(&self) -> Option<String>;
    fn start(&mut self) -> bool;
    fn stop(&mut self) -> bool;
    fn render(&mut self, data: &[u8]) -> GstFlowReturn;
}

And these traits (plus a constructor) are in the end all that has to be implemented in Rust for the elements (rsfilesrc.rs, rsfilesink.rs and rshttpsrc.rs).

If you look at the code, it’s all still a bit rough around the edges and missing many features (like actual error reporting back to GStreamer instead of printing to stderr), but it already works, and the actual implementations of the elements in Rust are rather simple and fun. Even the interfacing with C code is quite convenient at the Rust level.

How to test it?

First of all you need to get Rust and Cargo; check the Rust website or your Linux distribution for details. This was all tested with the stable 1.8 release. You also need GStreamer plus the development files; any recent 1.x version should work.

# clone GIT repository
git clone https://github.com/sdroege/rsplugin
# build it
cd rsplugin
cargo build
# tell GStreamer that there are new plugins in this path
export GST_PLUGIN_PATH=`pwd`
# this dumps the Cargo.toml file to stdout, doing all file IO from Rust
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! fakesink dump=1
# this dumps the Rust website to stdout, using the Rust HTTP library hyper
gst-launch-1.0 rshttpsrc uri=https://www.rust-lang.org ! fakesink dump=1
# this basically implements the "cp" command and copies Cargo.toml to a new file called test
gst-launch-1.0 rsfilesrc uri=file://`pwd`/Cargo.toml ! rsfilesink uri=file://`pwd`/test
# this plays Big Buck Bunny via HTTP using rshttpsrc (it has a higher rank than any
# other GStreamer HTTP source currently and is as such used for HTTP URIs)
gst-play-1.0 http://download.blender.org/peach/bigbuckbunny_movies/big_buck_bunny_480p_h264.mov
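
The elements can also be driven from application code rather than gst-launch-1.0. As a quick sketch (assuming PyGObject is installed and GST_PLUGIN_PATH is set as above; the file paths are just examples), the “cp” pipeline from Python:

#!/usr/bin/env python
# Sketch: use the Rust-backed elements from Python via PyGObject.
# Assumes GST_PLUGIN_PATH points at the built rsplugin directory;
# the input and output paths are hypothetical examples.
import gi
gi.require_version('Gst', '1.0')
from gi.repository import Gst

Gst.init(None)

pipeline = Gst.parse_launch(
    'rsfilesrc uri=file:///tmp/Cargo.toml ! '
    'rsfilesink uri=file:///tmp/test')
pipeline.set_state(Gst.State.PLAYING)

# Wait until the copy finishes (EOS) or an error is posted on the bus.
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)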

What next?

The three implemented elements are not too interesting and were mostly an experiment to see how far we can get in a weekend. But the HTTP source for example, once more features are implemented, could become useful in the long term.

Also, in my opinion, it would make sense to consider using Rust for some categories of elements like parsers, demuxers and muxers, as traditionally these elements are rather complicated and have the biggest exposure to arbitrary data coming from untrusted sources.

And maybe in the very long term, GStreamer or parts of it can be rewritten in Rust. But that’s a lot of effort, so let’s go step by step, see if it’s actually worthwhile, and build some useful things along the way.

For myself, the next step is going to be to implement something like GStreamer’s caps system in Rust (of which I already have the start of an implementation). That will also be necessary for any elements that handle specific data rather than just an arbitrary stream of bytes, and it could probably be useful for other applications independent of GStreamer too.

Issues

The main problem with the current code is that all IO is synchronous. That is, if opening the file, creating a connection, or reading data from the network takes a bit longer, everything will block until a timeout happens or the operation finishes one way or another.

Rust currently has no support for non-blocking IO in its standard library, and also no support for asynchronous IO. The latter is being discussed in this RFC, but it will probably take a while until we see actual results.

While there are libraries for all of this, having to depend on an external library for IO is not great, as code using different async IO libraries won’t compose well. Without this, Rust is still missing one big and important feature, one which will definitely be needed for many applications, and whose lack might hinder adoption of the language.

on May 17, 2016 11:47 AM
A few years ago, I wrote and released a fun little script that would carve up an Ubuntu Byobu terminal into a bunch of splits, running various random command line status utilities.

100% complete technical mumbo jumbo.  The goal was to turn your terminal into something that belongs in a Hollywood hacker film.

I am proud to see it included in this NBCNews piece about "Ransomware". All of the screenshots demonstrating what a "hacker" is doing with a system are straight from Ubuntu, Byobu, and Hollywood!

Here are a few screenshots, and the video is embedded below...

Enjoy!
:-Dustin
on May 17, 2016 07:07 AM

May 16, 2016

Cloud is now mainstream, but what’s holding it back, and what are the biggest concerns of technology decision makers? Are industry leaders choosing public, private or hybrid clouds? Canonical has commissioned Forrester Consulting to explore enterprise cloud platform trends and adoption. Learn about:

  • The most popular cloud deployment models and strategies
  • What percentage of enterprises are already utilizing IaaS/PaaS
  • The desired goals and benefits of implementing various cloud models
  • What enterprises plan to do with their clouds
  • Enterprise top concerns about the cloud

The report summarizes how decision makers really feel about the promise of greater flexibility, scalability, agility and cost savings offered by the Cloud.

Download eBook

on May 16, 2016 02:26 PM

You read it right! After several years of being absent, Ubuntu is going to be present at OSCON in 2016. We are going to be there as a non-profit, so make sure you visit us at booth 631-3.

It has been several years since we had a presence as exhibitors, and I am glad to say we’re going to have awesome things this year. It’s also OSCON’s first year in Austin. New year, new venue! But getting to the point, we will have:

  • System76 laptops, so you can play with and experience Ubuntu Desktop
  • A couple Nexus 4 phones, so you can try out Ubuntu Touch
  • A bq M10 Ubuntu Edition tablet so you can see how beautiful it is, and see convergence in action (thanks Popey!)
  • A Mycroft! (Thanks to the Mycroft guys, can’t wait to see one in person myself!)
  • Some free swag (first come, first served; make sure to drop by!)
  • And a raffle for the Official Ubuntu Book, 8th Edition!

The conference starts Monday the 16th of May (tomorrow!), but the Expo Hall opens on Tuesday night, so you could say we start on Wednesday :) If you are going to be there, don’t forget to drop by and say hi. It’s my first time at OSCON, so we’ll see how the conference is. I am pretty excited about it – hope to see several of you there!


on May 16, 2016 02:27 AM