August 21, 2019

When users of your application range from high school students to expert data scientists, it’s often wise to avoid any assumptions about their system configurations. The Jupyter Notebook is popular with a diverse user base, enabling the creation and sharing of documents containing live code, visualisations, and narrative text. The app uses processes (kernels) to run interactive code in different programming languages and send output back to the user. Filipe Fernandes has a key responsibility in the Jupyter community for its packaging and ease of installation. At the 2019 Snapcraft Summit in Montreal, he gave us his impressions of snaps as a tool to improve the experience for all concerned.

“I’m a packager and a hacker, and I’m also a Jupyter user. I find Jupyter to be great as a teaching tool. Others use it for data cleaning and analysis, numerical simulation and modelling, or machine learning, for example. One of the strengths of Jupyter is that it is effectively language agnostic. I wanted Jupyter packaging to be similar, distro-agnostic, if you like.”

Filipe had heard about snaps a while back, but only really discovered their potential after he received an invitation to the Snapcraft Summit and noticed that Microsoft Visual Studio Code had recently become available as a snap. The ease of use of snaps was a big factor for him. “I like things that just work. I often get hauled in to sort out installation problems for other users – including members of my own family! It’s great to be able to tell them just to use the snap version of an application. It’s like, I snap my fingers and the install problems disappear!”

At the Summit, getting Snapcraft questions answered was easy too. “Every time I hit a snag, I raised my hand, and someone helped me.” Filipe was able to experiment with packaging trade-offs for Jupyter snaps. “I made a design choice to make the overall Jupyter package smaller by not including the Qt console. Most people just want the browser interface anyway. Similarly, I excluded the dependency for converting Jupyter Notebooks to other formats via pandoc. The size of the Jupyter snap then decreased from about 230 MB to just 68 MB”. 

What would he like to see in the Snapcraft of tomorrow? “There are some technical tasks to be done for each Jupyter snap, like declaring features of plug-ins and setting different permissions. It would be nice to find a way for automating these tasks, so that they do not have to be done manually each time a snap is built. Also, it’s not always easy to see which parts of the Snapcraft documentation are official and which are from enthusiastic but unsanctioned users.” Filipe suggests that creating a ‘verified publisher’ status or certification could be an answer, helping other users to decide how they want to consider different contributions to the documentation.  

A stable Jupyter snap is now available from the Snap Store, giving Jupyter users another installation option beyond the official sources. Filipe and the Jupyter community have been promoting it via banners and blogs. “Some people get overwhelmed by the amount of information out there, especially when they start Googling options. I think snaps is a way to shortcut that,” adds Filipe. He recommends that other developers who want to get to this level should also come to the Summit. “The interactions here are so quick, to the point that I felt very productive within a really small amount of time, like I’d accomplished weeks of work. It’s awesome to be here and I’m looking forward to the next one.”

Install the community-managed Jupyter snap here

on August 21, 2019 09:53 AM

A well configured linter can catch common errors before code is even run or compiled. ROS 2 makes it easy to add linters of your choice and make them part of your package’s testing pipeline.

We’ll step through the process, from start to finish, of adding a linter to ament so it can be used to automatically test your projects. We’ll try to keep it generic, but where we need to lean on an example we’ll be referring to the linter we recently added for mypy, a static type analyzer for Python. You can view the finished source code for ament_mypy and ament_cmake_mypy.

Design

We’ll need to make sure our linter integrates into ament’s testing pipeline. Namely, this means writing CMake scripts to integrate with ament_cmake_test and ament_lint_auto.

We need to be able to generate a JUnit XML report for the Jenkins build farm to parse, as well as handle automatically excluding directories with AMENT_IGNORE files, so we’ll need to write a wrapper script for our linter as well.

Overall, we’ll need to write the following packages:

  • ament_[linter]
    • CLI wrapper for linter
      • Collect files, ignore those in AMENT_IGNORE directories
      • Configure and call linter
      • Generate XML report
  • ament_cmake_[linter]
    • Set of CMake scripts
      • ament_[linter].cmake
        • Function to invoke linter wrapper
      • ament_cmake_[linter]-extras.cmake
        • Script to hook into ament_lint_auto
        • Registered at build as the CONFIG_EXTRA argument to ament_package
      • ament_[linter].cmake
        • Hook script for ament_lint
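
Laid out on disk, the result looks roughly like this (a sketch based on ament_mypy; substitute your linter’s name, and note that exact file locations may vary slightly):

ament_mypy/
├── ament_mypy/
│   ├── configuration/
│   │   └── ament_mypy.ini
│   └── main.py
├── package.xml
└── setup.py

ament_cmake_mypy/
├── cmake/
│   ├── ament_cmake_mypy_lint_hook.cmake
│   └── ament_mypy.cmake
├── ament_cmake_mypy-extras.cmake
├── CMakeLists.txt
└── package.xml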

Getting Started – Python

We’ll start with making the ament_[linter] package.

We’ll be using Python to write this package, so we’ll add a setup.py file, and fill out some required fields. It’s easiest to just take one from an existing linter and customize it. What it ends up containing will be specific to the linter you’re adding, but for mypy it looks like this:

from setuptools import find_packages
from setuptools import setup

setup(
    name='ament_mypy',
    version='0.7.3',
    packages=find_packages(exclude=['test']),
    package_data={'': [
        'configuration/ament_mypy.ini',
    ]},
    install_requires=['setuptools'],
    zip_safe=False,
    author='Ted Kern',
    author_email='<email>',
    maintainer='Ted Kern',
    maintainer_email='<email>',
    url='https://github.com/ament/ament_lint',
    download_url='https://github.com/ament/ament_lint/releases',
    keywords=['ROS'],
    classifiers=[
        'Intended Audience :: Developers',
        'License :: OSI Approved :: Apache Software License',
        'Programming Language :: Python',
        'Topic :: Software Development',
    ],
    description='Check Python static typing using mypy.',
    long_description="""\
The ability to check code for user specified static typing with mypy.""",
    license='Apache License, Version 2.0',
    tests_require=['pytest', 'pytest-mock'],
    entry_points={
        'console_scripts': [
            'ament_mypy = ament_mypy.main:main',
        ],
    },
)

We’ll of course need a package.xml file. We’ll need to make sure it has an <exec_depend> on the linter’s package name in rosdistro. If it’s not there, you’ll need to go through the process of adding it. This is required in order to actually install the linter itself as a dependency of our new ament linter package; without it, any tests using it in CI would fail. Here’s what it looks like for mypy:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_mypy</name>
  <version>0.7.3</version>
  <description>Support for mypy static type checking in ament.</description>
  <maintainer email="me@example.com">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="me@example.com">Ted Kern</author>

  <exec_depend>python3-mypy</exec_depend>

  <export>
    <build_type>ament_python</build_type>
  </export>
</package>

The Code

Create a Python file called ament_[linter]/main.py, which will house all the logic for this linter. Below is a sample skeleton of a linter, again attempting to be generic where possible but nonetheless based on ament_mypy:

#!/usr/bin/env python3

import argparse
import os
import re
import sys
import textwrap
import time
from typing import List, Match, Optional, Tuple
from xml.sax.saxutils import escape
from xml.sax.saxutils import quoteattr

# Import your linter here
import mypy.api  # type: ignore

def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []

    parser = argparse.ArgumentParser(
        description='Check Python static typing using mypy.'
    )
    parser.add_argument(
        'paths',
        nargs='*',
        default=[os.curdir],
        help='The files or directories to check. For directories, files ending '
             "in '.py' will be considered."
    )
    parser.add_argument(
        '--exclude',
        metavar='filename',
        nargs='*',
        dest='excludes',
        help='The filenames to exclude.'
    )
    parser.add_argument(
        '--xunit-file',
        help='Generate a xunit compliant XML file'
    )

    # Example of a config file specification option
    parser.add_argument(
        '--config',
        metavar='path',
        dest='config_file',
        default=os.path.join(os.path.dirname(__file__), 'configuration', 'ament_mypy.ini'),
        help='The config file'
    )

    # Example linter specific option
    parser.add_argument(
        '--cache-dir',
        metavar='cache',
        default=os.devnull,
        dest='cache_dir',
        help='The location mypy will place its cache in. Defaults to system '
             'null device'
    )

    args = parser.parse_args(argv)

    if args.xunit_file:
        start_time = time.time()

    if args.config_file and not os.path.exists(args.config_file):
        print("Could not find config file '{}'".format(args.config_file), file=sys.stderr)
        return 1

    filenames = _get_files(args.paths)
    if args.excludes:
        filenames = [f for f in filenames
                     if os.path.basename(f) not in args.excludes]
    if not filenames:
        print('No files found', file=sys.stderr)
        return 1

    normal_report, error_messages, exit_code = _generate_linter_report(
        filenames,
        args.config_file,
        args.cache_dir
    )

    if error_messages:
        print('mypy error encountered', file=sys.stderr)
        print(error_messages, file=sys.stderr)
        print('\nRegular report continues:')
        print(normal_report, file=sys.stderr)
        return exit_code

    errors_parsed = _get_errors(normal_report)

    print('\n{} files checked'.format(len(filenames)))
    if not normal_report:
        print('No errors found')
    else:
        print('{} errors'.format(len(errors_parsed)))

    print(normal_report)

    print('\nChecked files:')
    print(''.join(['\n* {}'.format(f) for f in filenames]))

    # generate xunit file
    if args.xunit_file:
        folder_name = os.path.basename(os.path.dirname(args.xunit_file))
        file_name = os.path.basename(args.xunit_file)
        suffix = '.xml'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
            suffix = '.xunit'
            if file_name.endswith(suffix):
                file_name = file_name[:-len(suffix)]
        testname = '{}.{}'.format(folder_name, file_name)

        xml = _get_xunit_content(errors_parsed, testname, filenames, time.time() - start_time)
        path = os.path.dirname(os.path.abspath(args.xunit_file))
        if not os.path.exists(path):
            os.makedirs(path)
        with open(args.xunit_file, 'w') as f:
            f.write(xml)

    return exit_code


def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    """Replace this body with code specific to your linter."""
    pass


def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
        name="{test_name:s}"
        tests="{test_count:d}"
        failures="{error_count:d}"
        time="{time:s}"
        >
    """).format(
                test_name=testname,
                test_count=max(len(errors), 1),
                error_count=len(errors),
                time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each linter error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                    name={quoted_name}
                    classname="{test_name}"
                >
                    <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                    quoted_name=quoteattr(
                        '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                    test_name=testname,
                    quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
                )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )

    xml += '</testsuite>\n'
    return xml


def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folders starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()

                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]


def _get_errors(report_string: str) -> List[Match]:
    return list(re.finditer(r'^(?P<filename>([a-zA-Z]:)?([^:])+):((?P<lineno>\d+):)?((?P<colno>\d+):)?\ (?P<type>error|warning|note):\ (?P<msg>.*)$', report_string, re.MULTILINE))  # noqa: E501


def _dedent_to(text: str, prefix: str) -> str:
    return textwrap.indent(textwrap.dedent(text), prefix)

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

We’ll break this down into chunks.

Main Logic

We write the file as an executable and use the argparse library to parse the invocation, so we begin the file with the shebang:

#!/usr/bin/env python3

and end it with the main logic:

if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))

to forward failure codes out of the script.

The main() function will host the bulk of the program’s logic. Define it, and make sure the entry_points argument in setup.py points to it.

def main(argv: Optional[List[str]] = None) -> int:
    if not argv:
        argv = []

Notice the use of type hints: mypy will perform static type checking where possible, wherever these hints are present.

Parsing the Arguments

We create an argparse parser and add the arguments that ament expects:

parser = argparse.ArgumentParser(
    description='Check Python static typing using mypy.'
)
parser.add_argument(
    'paths',
    nargs='*',
    default=[os.curdir],
    help='The files or directories to check. For directories, files ending '
         "in '.py' will be considered."
)
parser.add_argument(
    '--exclude',
    metavar='filename',
    nargs='*',
    dest='excludes',
    help='The filenames to exclude.'
)
parser.add_argument(
    '--xunit-file',
    help='Generate a xunit compliant XML file'
)

We also include any arguments specific to the linter. For example, for mypy we allow the user to pass in a custom config file, with a pre-configured default already set up:

# Example of a config file specification option
parser.add_argument(
    '--config',
    metavar='path',
    dest='config_file',
    default=os.path.join(os.path.dirname(__file__), 'configuration', 'ament_mypy.ini'),
    help='The config file'
)

# Example linter specific option
parser.add_argument(
    '--cache-dir',
    metavar='cache',
    default=os.devnull,
    dest='cache_dir',
    help='The location mypy will place its cache in. Defaults to system '
            'null device'
)

Note: remember to include any packaged non-code files (like default configs) using a manifest or package_data= in setup.py.

Finally, parse and validate the args:

args = parser.parse_args(argv)

if args.xunit_file:
    start_time = time.time()

if args.config_file and not os.path.exists(args.config_file):
    print("Could not find config file '{}'".format(args.config_file), file=sys.stderr)
    return 1

filenames = _get_files(args.paths)
if args.excludes:
    filenames = [f for f in filenames
                 if os.path.basename(f) not in args.excludes]
if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Aside: _get_files

You’ll notice the call to the helper function _get_files, shown below. We use a snippet from the other linters to build up an explicit list of files to lint, in order to apply our exclusions and the AMENT_IGNORE behavior.

def _get_files(paths: List[str]) -> List[str]:
    files = []
    for path in paths:
        if os.path.isdir(path):
            for dirpath, dirnames, filenames in os.walk(path):
                if 'AMENT_IGNORE' in filenames:
                    dirnames[:] = []
                    continue
                # ignore folders starting with . or _
                dirnames[:] = [d for d in dirnames if d[0] not in ['.', '_']]
                dirnames.sort()

                # select files by extension
                for filename in sorted(filenames):
                    if filename.endswith('.py'):
                        files.append(os.path.join(dirpath, filename))
        elif os.path.isfile(path):
            files.append(path)
    return [os.path.normpath(f) for f in files]

Note that in the near future this and _get_xunit_content will hopefully be de-duplicated into the ament_lint package.

This function, when given a list of paths, recursively expands out all files and returns the .py files that are not inside directories containing an AMENT_IGNORE file.

We exclude those files that are in the exclude argument list, and we return a failure from main if no files are left afterwards.

filenames = _get_files(args.paths)

if args.excludes:
    filenames = [f for f in filenames
                 if os.path.basename(f) not in args.excludes]

if not filenames:
    print('No files found', file=sys.stderr)
    return 1

Otherwise we pass those files, as well as relevant configuration arguments, to the linter.

Invoking the Linter

We call the linter using whatever API it exposes:

normal_report, error_messages, exit_code = _generate_linter_report(
    filenames,
    args.config_file,
    args.cache_dir
)

abstracted here with the following method signature:

def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
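
For mypy, a minimal body can lean on mypy’s Python API, which returns exactly this tuple. Here is a sketch; the flags shown are illustrative choices rather than requirements (the real ament_mypy configures a few more options), and --show-column-numbers is what makes the report match the column-aware regex used by _get_errors below:

def _generate_linter_report(paths: List[str],
                            config_file: Optional[str] = None,
                            cache_dir: str = os.devnull) -> Tuple[str, str, int]:
    """Run mypy over the given paths and return (report, errors, exit code)."""
    mypy_argv = [
        '--cache-dir', cache_dir,  # defaults to os.devnull, keeping the tree clean
        '--show-column-numbers',   # include column numbers in each report line
    ]
    if config_file:
        mypy_argv += ['--config-file', config_file]
    mypy_argv += paths
    # mypy.api.run returns (stdout report, stderr messages, exit status)
    return mypy.api.run(mypy_argv)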

Recording the Output

Any failures the linter reports are printed to stdout, while internal linter errors go to stderr; in that case we return the (non-zero) exit code immediately:

if error_messages:
    print('linter error encountered', file=sys.stderr)
    print(error_messages, file=sys.stderr)
    print('\nRegular report continues:')
    print(normal_report, file=sys.stderr)
    return exit_code

We collect each warning/error/note message emitted individually:

errors_parsed = _get_errors(normal_report)
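
For reference, the kind of report line this regex matches looks like the following (an illustrative example, assuming column numbers are enabled):

/home/user/ws/src/my_pkg/module.py:10:4: error: Argument 1 to "foo" has incompatible type "str"; expected "int"

The named groups capture the filename, the optional line and column numbers, the severity (error, warning, or note), and the message text.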

We then report the errors to the user with something like:

print('\n{} files checked'.format(len(filenames)))
if not normal_report:
    print('No errors found')
else:
    print('{} errors'.format(len(errors_parsed)))

print(normal_report)

print('\nChecked files:')
print(''.join(['\n* {}'.format(f) for f in filenames]))

Generating JUnit XML Output

Here we generate an XML report and write it to disk at the requested location.

if args.xunit_file:
    folder_name = os.path.basename(os.path.dirname(args.xunit_file))
    file_name = os.path.basename(args.xunit_file)
    suffix = '.xml'
    if file_name.endswith(suffix):
        file_name = file_name[:-len(suffix)]
        suffix = '.xunit'
        if file_name.endswith(suffix):
            file_name = file_name[:-len(suffix)]
    testname = '{}.{}'.format(folder_name, file_name)

    xml = _get_xunit_content(errors_parsed, testname, filenames, time.time() - start_time)
    path = os.path.dirname(os.path.abspath(args.xunit_file))
    if not os.path.exists(path):
        os.makedirs(path)
    with open(args.xunit_file, 'w') as f:
        f.write(xml)

An example of valid output XML conforming to the schema is shown below:

<?xml version="1.0" encoding="UTF-8"?>
<testsuite
name="tst"
tests="4"
failures="4"
time="0.010"
>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py:0:0)"
      classname="tst"
  >
      <failure message="error message:0:0"/>
  </testcase>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py:0)"
      classname="tst"
  >
      <failure message="error message:0"/>
  </testcase>
  <testcase
      name="error (/tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py)"
      classname="tst"
  >
      <failure message="error message"/>
  </testcase>
  <testcase
      name="warning (/tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py)"
      classname="tst"
  >
      <failure message="warning message"/>
  </testcase>
  <system-out>Checked files:
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/lc.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/l.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/no_pos.py
* /tmp/pytest-of-ubuntu/pytest-164/use_me7/warn.py
  </system-out>
</testsuite>

Aside: _get_xunit_content

We write a helper function, _get_xunit_content, to format the XML output to the schema. This one is a bit specific to mypy, but hopefully it gives you a good idea of what’s needed:

def _get_xunit_content(errors: List[Match],
                       testname: str,
                       filenames: List[str],
                       elapsed: float) -> str:
    xml = textwrap.dedent("""\
        <?xml version="1.0" encoding="UTF-8"?>
        <testsuite
        name="{test_name:s}"
        tests="{test_count:d}"
        failures="{error_count:d}"
        time="{time:s}"
        >
    """).format(
                test_name=testname,
                test_count=max(len(errors), 1),
                error_count=len(errors),
                time='{:.3f}'.format(round(elapsed, 3))
    )

    if errors:
        # report each mypy error/warning as a failing testcase
        for error in errors:
            pos = ''
            if error.group('lineno'):
                pos += ':' + str(error.group('lineno'))
                if error.group('colno'):
                    pos += ':' + str(error.group('colno'))
            xml += _dedent_to("""\
                <testcase
                    name={quoted_name}
                    classname="{test_name}"
                >
                    <failure message={quoted_message}/>
                </testcase>
                """, '  ').format(
                    quoted_name=quoteattr(
                        '{0[type]} ({0[filename]}'.format(error) + pos + ')'),
                    test_name=testname,
                    quoted_message=quoteattr('{0[msg]}'.format(error) + pos)
                )
    else:
        # if there are no mypy problems report a single successful test
        xml += _dedent_to("""\
            <testcase
              name="mypy"
              classname="{}"
              status="No problems found"/>
            """, '  ').format(testname)

    # output list of checked files
    xml += '  <system-out>Checked files:{escaped_files}\n  </system-out>\n'.format(
        escaped_files=escape(''.join(['\n* %s' % f for f in filenames]))
    )

    xml += '</testsuite>\n'
    return xml

Return from main

Finally, we return the exit code.

return exit_code

The CMake Plugin

Now that our linting tool is ready, we need to write an interface for it to attach to ament.

Getting Started

We create a new ROS 2 package named ament_cmake_[linter] in the ament_lint folder and fill out package.xml. As an example, the one for mypy looks like this:

<?xml version="1.0"?>
<?xml-model href="http://download.ros.org/schema/package_format3.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
  <name>ament_cmake_mypy</name>
  <version>0.7.3</version>
  <description>
    The CMake API for ament_mypy to perform static type analysis on python code
    with mypy.
  </description>
  <maintainer email="<email>">Ted Kern</maintainer>
  <license>Apache License 2.0</license>
  <author email="<email>">Ted Kern</author>

  <buildtool_depend>ament_cmake_core</buildtool_depend>
  <buildtool_depend>ament_cmake_test</buildtool_depend>

  <buildtool_export_depend>ament_cmake_test</buildtool_export_depend>
  <buildtool_export_depend>ament_mypy</buildtool_export_depend>

  <test_depend>ament_cmake_copyright</test_depend>
  <test_depend>ament_cmake_lint_cmake</test_depend>

  <export>
    <build_type>ament_cmake</build_type>
  </export>
</package>

CMake Configuration

We write the installation and testing instructions in CMakeLists.txt, as well as pass our extras file to ament_package. This is the one for mypy; yours should look pretty similar:

cmake_minimum_required(VERSION 3.5)

project(ament_cmake_mypy NONE)

find_package(ament_cmake_core REQUIRED)
find_package(ament_cmake_test REQUIRED)

ament_package(
  CONFIG_EXTRAS "ament_cmake_mypy-extras.cmake"
)

install(
  DIRECTORY cmake
  DESTINATION share/${PROJECT_NAME}
)

if(BUILD_TESTING)
  find_package(ament_cmake_copyright REQUIRED)
  ament_copyright()

  find_package(ament_cmake_lint_cmake REQUIRED)
  ament_lint_cmake()
endif()

Then we register our extension with ament in ament_cmake_[linter]-extras.cmake. Again, this one is for mypy, but you should be able to easily repurpose it.

find_package(ament_cmake_test QUIET REQUIRED)

include("${ament_cmake_mypy_DIR}/ament_mypy.cmake")

ament_register_extension("ament_lint_auto" "ament_cmake_mypy"
  "ament_cmake_mypy_lint_hook.cmake")

We then create a CMake function in cmake/ament_[linter].cmake to invoke our test when needed. This will be specific to your linter and the wrapper you wrote above, but here’s how it looks for mypy:

#
# Add a test to statically check Python types using mypy.
#
# :param CONFIG_FILE: the name of the config file to use, if any
# :type CONFIG_FILE: string
# :param TESTNAME: the name of the test, default: "mypy"
# :type TESTNAME: string
# :param ARGN: the files or directories to check
# :type ARGN: list of strings
#
# @public
#
function(ament_mypy)
  cmake_parse_arguments(ARG "" "CONFIG_FILE;TESTNAME" "" ${ARGN})
  if(NOT ARG_TESTNAME)
    set(ARG_TESTNAME "mypy")
  endif()

  find_program(ament_mypy_BIN NAMES "ament_mypy")
  if(NOT ament_mypy_BIN)
    message(FATAL_ERROR "ament_mypy() could not find program 'ament_mypy'")
  endif()

  set(result_file "${AMENT_TEST_RESULTS_DIR}/${PROJECT_NAME}/${ARG_TESTNAME}.xunit.xml")
  set(cmd "${ament_mypy_BIN}" "--xunit-file" "${result_file}")
  if(ARG_CONFIG_FILE)
    list(APPEND cmd "--config-file" "${ARG_CONFIG_FILE}")
  endif()
  list(APPEND cmd ${ARG_UNPARSED_ARGUMENTS})

  file(MAKE_DIRECTORY "${CMAKE_BINARY_DIR}/ament_mypy")
  ament_add_test(
    "${ARG_TESTNAME}"
    COMMAND ${cmd}
    OUTPUT_FILE "${CMAKE_BINARY_DIR}/ament_mypy/${ARG_TESTNAME}.txt"
    RESULT_FILE "${result_file}"
    WORKING_DIRECTORY "${CMAKE_CURRENT_SOURCE_DIR}"
  )
  set_tests_properties(
    "${ARG_TESTNAME}"
    PROPERTIES
    LABELS "mypy;linter"
  )
endfunction()

This function checks for the existence of your linting CLI, prepares the argument list to pass in, creates an output directory for the report, and labels the test type.

Finally, in ament_cmake_[linter]_lint_hook.cmake, we write the hook into the function we just defined. This one is for mypy but yours should look almost identical:

file(GLOB_RECURSE _python_files FOLLOW_SYMLINKS "*.py")
if(_python_files)
  message(STATUS "Added test 'mypy' to statically type check Python code.")
  ament_mypy()
endif()

Final Steps

With both packages ready, we build our new packages using colcon:

~/ros2/src $ colcon build --packages-select ament_mypy ament_cmake_mypy --event-handlers console_direct+ --symlink-install

If all goes well, we can now use this linter just like any other to test our Python packages!
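
As a quick illustration of what that looks like downstream (a sketch; mypy.ini is a hypothetical config file in your own package, and the CONFIG_FILE argument can be omitted to use the packaged default), a package opts in by adding a <test_depend> on ament_cmake_mypy to its package.xml and calling the function from its CMakeLists.txt:

if(BUILD_TESTING)
  find_package(ament_cmake_mypy REQUIRED)
  # Check all Python files in this package against our own config
  ament_mypy(CONFIG_FILE "${CMAKE_CURRENT_SOURCE_DIR}/mypy.ini")
endif()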

It’s highly recommended you write a test suite to go along with your code. ament_mypy lints itself with flake8 and mypy, and has an extensive pytest-based suite of functions to validate its behavior. You can see this suite here.

Check out our other article on how to use the mypy linter if you’d like to learn more about how to invoke linters from your testing suite for other packages.

on August 21, 2019 09:30 AM

August 20, 2019

With the agreement of the Debian LTS contributors funded by Freexian, earlier this year I decided to spend some Freexian money on marketing: we sponsored DebConf 19 as a bronze sponsor and we prepared some stickers and flyers to give out during the event.

The stickers only promote the Debian LTS project with the semi-official logo we have been using and a link to the wiki page. You can see them on the back of a laptop in the picture below. As you can see, we have made two variants with different background colors:

The flyers and the video are meant to introduce the Debian LTS project and to convince companies to sponsor it through the Freexian offer. Those are short documents and they can’t explain the precise relationship between Debian LTS and Freexian. We try to show that Freexian is just an intermediary between contributors and companies, but some people will still have the feeling that a commercial entity is organizing Debian LTS.

Check out the video on YouTube:

The inside of the flyer looks like this:

Click on the picture to see it full size

Note that due to some delivery issues, we have left-over flyers and stickers. If you want some to give out during a free software event, feel free to reach out to me.


on August 20, 2019 10:45 AM

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In July, 199 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Adrian Bunk got 8h assigned but did nothing (plus 10 extra hours from June), thus he is carrying over 18h to August.
  • Ben Hutchings did 18.5 hours (out of 18.5 hours allocated).
  • Brian May did 10 hours (out of 10 hours allocated).
  • Chris Lamb did 18 hours (out of 18 hours allocated).
  • Emilio Pozuelo Monfort did 21 hours (out of 18.5h allocated + 17h remaining, thus keeping 14.5 extra hours for August).
  • Hugo Lefeuvre did 9.75 hours (out of 18.5 hours, thus carrying over 8.75h to August).
  • Jonas Meurer did 19 hours (out of 17 hours allocated plus 2h extra hours June).
  • Markus Koschany did 18.5 hours (out of 18.5 hours allocated).
  • Mike Gabriel did 15.75 hours (out of 18.5 hours allocated plus 7.25 extra hours from June, thus carrying over 10h to August.).
  • Ola Lundqvist did 0.5 hours (out of 8 hours allocated plus 8 extra hours from June, then he gave 7.5h back to the pool, thus he is carrying over 8 extra hours to August).
  • Roberto C. Sanchez did 8 hours (out of 8 hours allocated).
  • Sylvain Beucler did 18.5 hours (out of 18.5 hours allocated).
  • Thorsten Alteholz did 18.5 hours (out of 18.5 hours allocated).

Evolution of the situation

July was different than other months. First, some people have been on actual vacations, while 4 of the above 13 contributors met in Curitiba, Brazil, for DebConf19. There, a talk about LTS (slides, video) was given, followed by a Q&A session. Also, a new promotional video about Debian LTS, aimed at potential sponsors, was shown there for the first time.

DebConf19 was also a success with respect to on-boarding new contributors: we’ve found three potential new contributors, one of whom is already in training.

The security tracker (now for oldoldstable as Buster has been released and thus Jessie became oldoldstable) currently lists 51 packages with a known CVE and the dla-needed.txt file has 35 packages needing an update.

Thanks to our sponsors

New sponsors are in bold.


on August 20, 2019 09:38 AM

August 19, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 592 for the week of August 11 – 17, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on August 19, 2019 10:50 PM

I am absolutely thrilled to announce my brand new book, ‘People Powered: How communities can supercharge your business, brand, and teams’ published by HarperCollins Leadership.

It will be available in hardcover, audiobook, and e-book formats from Amazon, Audible, Walmart, Target, Google Play, Apple iBooks, Barnes and Noble, and other great retailers.

The book is designed for leaders, founders, marketing and customer success staff, community managers/evangelists, and others who want to build a more productive, more meaningful relationship with their users, customers, and broader audience.

‘People Powered’ covers three key areas:

  1. The value and potential of building a community inside and outside a business, how it can create a closer relationship with your users and customers, and deliver tangible value such as improved support, technology development, advocacy, and more.
  2. I present the strategic method that I have used with hundreds of clients and companies I consult with and advise. This guides you through creating a comprehensive, productive, and realistic community strategy: scaling up, building cross-departmental skin in the game, creating incentives, running events, measuring community success, and delivering results.
  3. Finally, I walk you through how to integrate this strategy into a business, covering hiring staff, building internal skills and capabilities, measuring this work with a series of concrete maturity models, and much more.

The book covers a comprehensive range of topics within these areas:

The book features a foreword from New York Times bestseller Peter Diamandis, founder of XPRIZE and Singularity University.

It also features contributions from Joseph Gordon-Levitt (Emmy-award winning actor), Jim Whitehurst (CEO, Red Hat), Mike Shinoda (Co-Founder, Linkin Park), Ali Velshi (Anchor, MSNBC), Jim Zemlin (Executive Director, The Linux Foundation), Noah Everett (Founder, TwitPic), Alexander van Engelen (Contributor, Fractal Audio Systems), and others.

The book has also received a comprehensive range of endorsements, including Nat Friedman (CEO, GitHub), Jim Whitehurst (CEO, Red Hat), Whitney Bouck (COO, HelloSign), Jeff Atwood (Founder, StackOverflow/Discourse), Juan Olaizola (COO, Santander Espana), Jamie Hyneman (Co-Creator and Presenter, Mythbusters), and many others:

Here are a few sample endorsements:

“If you want to tap into the power that communities can bring to businesses and teams, there is no greater expert than Jono Bacon.”

Nat Friedman, CEO of GitHub

“If you want to unlock the power of collaboration in communities, companies, and teams, Jono should be your tour guide and ‘People Powered’ should be your map.”

Jamie Smith, Former Advisor to President Barack Obama

“If you don’t like herding cats but need to build a community, you need to read ‘People Powered’.”

Jamie Hyneman, Co-Creator/Host of Mythbusters

“In my profession, building networks is all about nurturing relationships for the long term. Jono Bacon has authored the recipe how to do this, and you should follow it.”

Gia Scinto, Head of Talent at YCombinator

“When people who are not under your command or payment eagerly work together towards a greater purpose, you can move mountains. Jono Bacon is one of the most accomplished experts on this, and in this book he tells you how it’s done.”

Mårten Mickos, CEO of HackerOne

“Community is fundamental to DigitalOcean’s success, and helped us build a much deeper connection with our audience and customers. ‘People Powered’ presents the simple, pragmatic recipe for doing this well.”

Ben Uretsky, Co-Founder of DigitalOcean

“Technology tears down the barriers of collaboration and connects our communities – globally and locally. We need to give all organizations and developers the tools to build and foster this effort. Jono Bacon’s book provides timely insight into what makes us tick as humans, and how to build richer, stronger technology communities together.”

Kevin Scott, CTO of Microsoft

People Powered Preorder Package

‘People Powered’ is released on 12th November 2019 but I would love you wonderful people to preorder the book.

Preordering will give you access to a wide range of perks. This includes early access to half the book, free audio book chapters, an exclusive six-part, 4-hour+ ‘People Powered Plus’ video course, access to a knowledge base with 100+ articles, 2 books, and countless videos, exclusive webinars and Q&As, and sweepstakes for free 1-on-1 consulting workshops.

All of these perks are available just for the price of buying the book, there are no additional costs.

To unlock this preorder package, you simply buy the book, fill in a form with your order number and these perks will be unlocked. Good times!

To find out more about the book and unlock the preorder package, click here

The post Announcing my new book: ‘People Powered: How communities can supercharge your business, brand, and teams’ appeared first on Jono Bacon.

on August 19, 2019 03:00 PM

This iteration was the Web & design team’s first iteration of the second half of our roadmap cycle, after returning from the mid-cycle roadmap sprint in Toronto 2 weeks ago.

Priorities have moved around a bit since before the cycle, and we made a good start on the new priorities for the next 3 months. 

Web squad

Web is the squad that develops and maintains most of the brochure websites across Canonical.

We launched three takeovers; “A guide to developing Android apps on Ubuntu”, “Build the data centre of the future” and “Creating accurate AI models with data”.

Ubuntu.com Vanilla conversion 

We’ve made good progress on converting ubuntu.com to version 2.3.0 of our Vanilla CSS framework.

EKS redesign

We’ve been working on a new design for our EKS images page.

Canonical.com design evolution

New designs and prototypes are coming along well for redesigned partners and careers sections on canonical.com.

Vanilla squad

The Vanilla squad works on constantly improving the code and design patterns in our Vanilla CSS framework, which we use across all our websites.

Ubuntu SSO refresh

The squad continues to make good progress on adding Vanilla styling to all pages on login.ubuntu.com.

Colour theming best practices

We investigated some best practices for the use of colours in themes.

Improvements to Vanilla documentation

We made a number of improvements to the documentation of Vanilla framework.

Base

The Base squad supports the other squads with shared modules, development tooling and hosting infrastructure across the board.

certification.ubuntu.com

We continued to progress with the back-end rebuild and re-hosting of certification.ubuntu.com, which should be released next iteration.

Blog improvements

We investigated ways to improve the performance of our blog implementations (most importantly ubuntu.com/blog). We will be releasing new versions of the blog module over the next few weeks which should bring significant improvements.

MAAS

The MAAS squad works on the browser-based UI for MAAS, as well as the maas.io website.

“Real world MAAS”

We’ve been working on a new section for the maas.io homepage about “Real world MAAS”, which will be released in the coming days. As MAAS is used at enterprises of various scales, we’re providing grouped, curated content for three of the main audiences.

UI settings updates

We’ve made a number of user experience updates to the settings page in the MAAS UI, including significant speed improvements to the Users page in conjunction with the work of moving the settings part of the application to React (from Django). We have completed the move of the General, Network, and Storage tabs, and have redesigned the experience for DHCP snippets and Scripts. 

Redesigned DHCP snippets tab

JAAS

The JAAS squad works on jaas.ai, the Juju GUI, and upcoming projects to support Juju.

This iteration we set up a bare-bones scaffold of our new JAAS Dashboard app using React and Redux.

Snaps

The Snap squad works on improvements to snapcraft.io.

Updating snapcraft.io to Vanilla 2.3.0

We continued work updating snapcraft.io to the latest version of Vanilla.

The post Design and Web team summary – 16 August 2019 appeared first on Ubuntu Blog.

on August 19, 2019 09:39 AM
  • Replicating Particle Collisions at CERN with Kubeflow – this post is interesting for a number of reasons. First, it shows how Kubeflow delivers on the promise of portability and why that matters to CERN. Second, it reiterates that using Kubeflow adds negligible performance overhead as compared to other methods for training. Finally, the post shows another example of how images and deep learning can replace more computationally expensive methods for modelling real-world behaviour. This is the future, today.
  • AI vs. Machine Learning: The Devil Is in the Details – Need a refresher on the difference between artificial intelligence, machine learning and deep learning? Canonical has done a webinar on this very topic, but sometimes a different set of words is useful, so read this article for a refresh. You’ll also learn about a different set of use cases for how AI is changing the world – from Netflix to Amazon to video surveillance and traffic analysis and predictions.
  • Making Deep Learning User-Friendly, Possible? – The world has changed a lot in the 18 months since this article was published. One of the key takeaways from this article is a list of features to compare several standalone deep learning tools. The exciting news? The output of these tools can be used with Kubeflow to accelerate Model Training. There are several broader questions as well – How can companies leverage the advancements being made within the AI community? Are better tools the right answer? Finding a partner may be the right answer.
  • Interview spotlight: One of the fathers of AI is worried about its future – Yoshua Bengio is famous for championing deep learning, one of the most powerful technologies in AI. Read this transcript to understand some of his concerns with the direction of AI, as well as the exciting developments in AI. Research that is extending deep learning into things like reasoning, learning causality, and exploring the world in order to learn and acquire information.

The post Issue #2019.08.19 – Kubeflow at CERN appeared first on Ubuntu Blog.

on August 19, 2019 08:00 AM

Okay, I’m back from Summer Camp and have caught up (slightly) on life. I had the privilege of giving a talk at BSidesLV entitled “CTFs for Fun and Profit: Playing Games to Build Your Skills.” I wanted to post a quick link to my slides and talk about the IoT CTF I had the chance to play.

I played in the IoT Village CTF at DEF CON, which was interesting because it uses real-world devices with real-world vulnerabilities instead of the typical made-up challenges in a CTF. On the other hand, I’m a little disappointed that it seems pretty similar (maybe even the same) year-to-year, not providing much variety or new learning experiences if you’ve played before.

on August 19, 2019 07:00 AM

August 15, 2019

S12E19 – Starglider

Ubuntu Podcast from the UK LoCo

This week we’ve been fixing floors and playing with the new portal HTML element. We round up the Ubuntu community news including the release of 18.04.3 with a new hardware enablement stack, better desktop integration for Livepatch and improvements in accessing the latest Nvidia drivers. We also have our favourite picks from the general tech news.

It’s Season 12 Episode 19 of the Ubuntu Podcast! Alan Pope, Mark Johnson and Stuart Langridge are connected and speaking to your brain.

In this week’s show:

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on August 15, 2019 02:00 PM

APT Patterns

Julian Andres Klode

If you have ever used aptitude a bit more extensively on the command-line, you’ll probably have come across its patterns. This week I spent some time implementing (some) patterns for apt, so you do not need aptitude for that, and I want to let you in on the details of this merge request !74.

so, what are patterns?

Patterns allow you to specify complex search queries to select the packages you want to install/show. For example, the pattern ?garbage can be used to find all packages that have been automatically installed but are no longer depended upon by manually installed packages. Or the pattern ?automatic allows you to find all automatically installed packages.

You can combine patterns into more complex ones; for example, ?and(?automatic,?obsolete) matches all automatically installed packages that do not exist any longer in a repository.

There are also explicit targets, so you can perform queries like ?for x: ?depends(?recommends(x)): Find all packages x that depend on another package that recommends x. I do not fully comprehend those yet - I did not manage to create a pattern that matches all manually installed packages that a meta-package depends upon. I am not sure it is possible.

reducing pattern syntax

aptitude’s syntax for patterns is quite context-sensitive. If you have a pattern ?foo(?bar) it can have two possible meanings:

  1. If ?foo takes arguments (like ?depends did), then ?bar is the argument.
  2. Otherwise, ?foo(?bar) is equivalent to ?foo?bar which is short for ?and(?foo,?bar)

I find that very confusing. So, when looking at implementing patterns in APT, I went for a different approach: I first parse the pattern into a generic parse tree, without knowing anything about the semantics, and then I convert the parse tree into an APT::CacheFilter::Matcher, an object that can match against packages.

This is useful, because the syntactic structure of the pattern can be seen, without having to know which patterns have arguments and which do not - basically, for the parser ?foo and ?foo() are the same thing. That said, the second pass knows whether a pattern accepts arguments or not and insists on you adding them if required and not having them if it does not accept any, to prevent you from confusing yourself.

aptitude also supports shortcuts. For example, you could write ~c instead of config-files, or ~m for automatic; then combine them like ~m~c instead of using ?and. I have not implemented these short patterns for now, focusing instead on getting the basic functionality working.

So in our example ?foo(?bar) above, we can immediately dismiss parsing that as ?foo?bar:

  1. we do not support concatenation instead of ?and.
  2. we automatically parse ( as the argument list, no matter whether ?foo supports arguments or not
apt not understanding invalid patterns

Supported syntax

At the moment, APT supports two kinds of patterns: basic logic ones like ?and, and patterns that apply to an entire package as opposed to a specific version. This was done as a starting point for the merge; patterns for versions will come in the next round.

We also do not have any support for explicit search targets such as ?for x: ... yet - as explained, I do not yet fully understand them, and hence do not want to commit on them.

The full list of the first round of patterns is below, helpfully converted from the apt-patterns(7) docbook to markdown by pandoc.

logic patterns

These patterns provide the basic means to combine other patterns into more complex expressions, as well as ?true and ?false patterns.

?and(PATTERN, PATTERN, ...)

Selects objects where all specified patterns match.

?false

Selects nothing.

?not(PATTERN)

Selects objects where PATTERN does not match.

?or(PATTERN, PATTERN, ...)

Selects objects where at least one of the specified patterns match.

?true

Selects all objects.

package patterns

These patterns select specific packages.

?architecture(WILDCARD)

Selects packages matching the specified architecture, which may contain wildcards using any.

?automatic

Selects packages that were installed automatically.

?broken

Selects packages that have broken dependencies.

?config-files

Selects packages that are not fully installed, but have solely residual configuration files left.

?essential

Selects packages that have Essential: yes set in their control file.

?exact-name(NAME)

Selects packages with the exact specified name.

?garbage

Selects packages that can be removed automatically.

?installed

Selects packages that are currently installed.

?name(REGEX)

Selects packages where the name matches the given regular expression.

?obsolete

Selects packages that no longer exist in repositories.

?upgradable

Selects packages that can be upgraded (have a newer candidate).

?virtual

Selects all virtual packages; that is packages without a version. These exist when they are referenced somewhere in the archive, for example because something depends on that name.

examples

apt remove ?garbage

Remove all packages that are automatically installed and no longer needed - same as apt autoremove

apt purge ?config-files

Purge all packages that only have configuration files left
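
Since patterns nest, more targeted clean-ups can be expressed in one go. A sketch (the ^lib regex is purely illustrative, and the pattern is quoted because ? and parentheses are shell metacharacters):

apt remove '?and(?garbage,?name(^lib))'

Remove only those automatically installed, no-longer-needed packages whose names start with lib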

oddities

Some things are not yet where I want them:

  • ?architecture does not support all, native, or same
  • ?installed should match only the installed version of the package, not the entire package (that is what aptitude does, and it’s a bit surprising that ?installed implies a version and ?upgradable does not)

the future

Of course, I do want to add support for the missing version patterns and explicit search patterns. I might even add support for some of the short patterns, but no promises. Some of those explicit search patterns might have slightly different syntax, e.g. ?for(x, y) instead of ?for x: y in order to make the language more uniform and easier to parse.

Another thing I want to do ASAP is to disable fallback to regular expressions when specifying package names on the command-line: apt install g++ should always look for a package called g++, and not for any package containing g (g++ being a valid regex) when there is no g++ package. I think continuing to allow regular expressions if they start with ^ or end with $ is fine - that prevents any overlap with package names, and would avoid breaking most stuff.

There also is the fallback to fnmatch(): Currently, if apt cannot find a package with the specified name using the exact name or the regex, it would fall back to interpreting the argument as a glob(7) pattern. For example, apt install apt* would fallback to installing every package starting with apt if there is no package matching that as a regular expression. We can actually keep those in place, as the glob(7) syntax does not overlap with valid package names.

Maybe I should allow using [] instead of () so larger patterns become more readable, and/or some support for comments.

There are also plans for AppStream based patterns. This would allow you to use apt install ?provides-mimetype(text/xml) or apt install ?provides-lib(libfoo.so.2). It’s not entirely clear how to package this though, we probably don’t want to have libapt-pkg depend directly on libappstream.

feedback

Talk to me on IRC, comment on the Mastodon thread, or send me an email if there’s anything you think I’m missing or should be looking at.

on August 15, 2019 01:55 PM

August 14, 2019

Splash Two

Stephen Michael Kellat

Well, I just finished up closing out the remaining account that I had on Tumblr. I hadn't touched it for a while. The property just got sold again and is being treated like nuclear waste. I did export my data and somehow had a two gigabyte export. I didn't realize I used it that much.

My profile on Instagram was nuked as well. As things keep sprouting the suffix of "--by Facebook" I can merrily shut down those profiles and accounts. That misbehaving batch of algorithms mischaracterizes me 85% of the time and I get tired of dealing with such messes. The accretions of outright non-sensical weirdness in Facebook's "Ad Interests" for me get frankly quite disturbing.

Remember, you should take the time to close out logins and accounts you don't use. Zombie accounts help nobody.

on August 14, 2019 02:22 AM

August 13, 2019

KDE.org Applications Site

Jonathan Riddell

I’ve updated the kde.org/applications site so KDE now has web pages and lists the applications we produce.

In the update this week it’s gained Console apps and Addons.

Some exciting console apps we have include Clazy, kdesrc-build, KDebug Settings (a GUI app but has no menu entry) and KDialog (another GUI app but called from the command line).

This KDialog example takes on a whole new meaning after watching the Chernobyl telly drama.

And for addon projects we have stuff like File Stash, Latte Dock and KDevelop’s addons for PHP and Python.

At KDE we want to be a great place to be a home for your project and this is an important part of that.

 

on August 13, 2019 02:00 PM
Whenever a process accesses a virtual address that doesn't currently have a physical page mapped into its address space, a page fault occurs.  This causes an interrupt so that the kernel can handle the page fault.

A minor page fault occurs when the kernel can successfully map a physically resident page for the faulted user-space virtual address (for example, accessing a memory resident page that is already shared by other processes).   Major page faults occur when accessing a page that has been swapped out or accessing a file backed memory mapped page that is not resident in memory.

Page faults incur latency in the running of a program, major faults especially so because of the delay of loading pages in from a storage device.
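
To see these counters for a single command without any extra tooling, GNU time's verbose mode reports both kinds of fault (illustrative output, trimmed; the counts will vary):

$ /usr/bin/time -v ls > /dev/null
        ...
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 115
        ...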

The faultstat tool allows one to easily monitor page fault activity and find the most active page faulting processes.  Running faultstat with no options will dump the page fault statistics of all processes, sorted in major+minor page fault order.

Faultstat also has a "top" like mode; invoking it with the -T option will display the top page faulting processes, again in major+minor page fault order.


The Major and Minor columns show the respective major and minor page faults. The +Major and +Minor columns show the recent increase of page faults. The Swap column shows the swap size of the process in pages.

Pressing the 's' key will switch through the sort order. Pressing the 'a' key will add an arrow annotation showing page fault growth change. The 't' key will toggle between cumulative major/minor page total to current change in major/minor faults.

The faultstat tool has just landed in Ubuntu Eoan and can also be installed as a snap.  The source is available on GitHub.
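
For example (assuming the package and snap names above; viewing other users' processes may require root):

sudo apt install faultstat    # from the Eoan archive onwards
sudo snap install faultstat   # or as a snap
sudo faultstat                # one-shot dump of all processes
sudo faultstat -T             # "top" like mode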

on August 13, 2019 11:14 AM

August 12, 2019

Welcome to the Ubuntu Weekly Newsletter, Issue 591 for the week of August 4 – 10, 2019. The full version of this issue is available here.

In this issue we cover:

The Ubuntu Weekly Newsletter is brought to you by:

  • Krytarik Raido
  • Bashing-om
  • Chris Guiver
  • Wild Man
  • EoflaOE
  • And many others

If you have a story idea for the Weekly Newsletter, join the Ubuntu News Team mailing list and submit it. Ideas can also be added to the wiki!

Except where otherwise noted, this issue of the Ubuntu Weekly Newsletter is licensed under a Creative Commons Attribution ShareAlike 3.0 License

on August 12, 2019 11:07 PM

August 11, 2019

Waiting On Race Judging

Stephen Michael Kellat

Previously I produced podcasts for almost six years in the early days of podcasting. I've had to step away from that for almost six years by dint of being a working fed. With as crazy as things have gotten being part of the civil service, I have been having to assess making changes in life. One way to go would be to pick back up things I have had to set aside, such as media production like what was done under the aegis of Erie Looking Productions.

This weekend has been KCRW's Radio Race. Soundcloud has an entire playlist of 2018's participant tracks posted that can be listened to. The submission from Erie Looking Productions is posted to Soundcloud now. We were supposed to use Otter.ai as part of the competition, as they happened to be a sponsor using machine learning for transcription services. I can't easily link to that, and frankly I was not amused with what it spat out in terms of machine recognition of my voice. How many different ways do you think the place name of Ashtabula could be mis-transcribed?

What are the next steps? The judges in California will be listening to three hundred some odd entries this week. Finalists will be announced next week. In two weeks we'll know who the winners are. Although placing would be great I'm just glad we were able to show that we could do what was essentially a cold restart after way too long in mothballs.

Between now and the end of September we have two short film projects we have to finish up. One will be going to the Dam Short Film Festival while one will go to MidWest WeirdFest. These are cold restart efforts as well. A documentary short is in the works for the call for WeirdFest while what is essentially an experimental piece is being finished up for Dam Short Film Festival in Boulder City. It is not as if we'll be shooting for a showing at the Ely Central Theatre on the single screen there but Boulder City is a suburb of Las Vegas with a wee bit more population than Ely.

We've also done some minor support work to back up a vendor presenting at the Music Along The River 2019 festival by helping them create nice marketing collateral.

A former Secretary of State and former Chief Justice of the United States, John Marshall, is quoted as saying that the power to tax is the power to destroy. That's still very true in the USA today. Slowly but surely I am trying to transition out of a job rooted in Marshall's view of destruction to something a bit more constructive.

Xubuntu and Ubuntu MATE have been there to make these recent efforts happen far more easily than I otherwise thought possible. I need to give more back to the team. There are just a few more barriers that have to be knocked down first.

on August 11, 2019 06:43 PM

August 09, 2019

As you may have been made aware by some news articles, blogs, and social media posts, a vulnerability in the KDE Plasma desktop was recently disclosed publicly. This occurred without the KDE developers/security team or distributions being informed of the discovered vulnerability, or being given any advance notice of the disclosure.

KDE have responded quickly and responsibly and have now issued an advisory with a ‘fix’ [1].

Kubuntu is now working on applying this fix to our packages.

Packages in the Ubuntu main archive are having updates prepared [2], which will require a period of review before being released.

Consequently, if users wish to get fixed packages sooner, packages with the patches applied have been made available in our PPAs.

Users of Xenial (out of support, but we have provided a patched package anyway), Bionic and Disco can get the updates as follows:

If you have our backports PPA [3] enabled:

The fixed packages are now in that PPA, so all that is required is to update your system by your normal preferred method.
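
For example, from the command line that would simply be:

sudo apt update
sudo apt full-upgrade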

If you do NOT have our backports PPA enabled:

The fixed packages are provided in our UPDATES PPA [4].

sudo add-apt-repository ppa:kubuntu-ppa/ppa
sudo apt update
sudo apt full-upgrade

As a precaution, to ensure that the update is picked up by all KDE processes, after updating their system users should at the very least log out and in again to restart their entire desktop session.

Regards

Kubuntu Team

[1] – https://kde.org/info/security/advisory-20190807-1.txt
[2] – https://bugs.launchpad.net/ubuntu/+source/kconfig/+bug/1839432
[3] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/backports
[4] – https://launchpad.net/~kubuntu-ppa/+archive/ubuntu/ppa

on August 09, 2019 03:29 PM
Thanks to all the hard work from our contributors, we are pleased to announce that Lubuntu 18.04.3 LTS has been released! What is Lubuntu? Lubuntu is an official Ubuntu flavor which uses the Lightweight X11 Desktop Environment (LXDE). The project’s goal is to provide a lightweight yet functional Linux distribution based on a rock solid […]
on August 09, 2019 12:20 AM

August 08, 2019

Ubuntu 18.04.3 LTS has just been released. As usual with LTS point releases, the main changes are a refreshed hardware enablement stack (newer versions of the kernel, xorg & drivers) and a number of bug and security fixes.

For the Desktop, newer stable versions of GNOME components have been included as well as a new feature: Livepatch desktop integration.

For those who aren’t familiar, Livepatch is a service which applies critical kernel patches without rebooting. The service is available as part of an Ubuntu Advantage subscription, but is also made available for free to Ubuntu users (up to 3 machines). Fixes are downloaded and applied to your machine automatically to help reduce downtime and keep your Ubuntu LTS systems secure and compliant. Livepatch is available for your servers and your desktops.

Andrea Azzarone worked on desktop integration for the service and his work finally landed in the 18.04 LTS.

To enable Livepatch you just need an Ubuntu One account. The set-up is part of the first login, or can be done later from the corresponding software-properties tab.

Here is a simple walkthrough showing the steps and the result:

The wizard displayed during the first login includes a Livepatch step that will help you get signed in to Ubuntu One and enable Livepatch:

Clicking the ‘Set Up’ button invites you to enter your Ubuntu One information (or to create an account), and that’s all that is needed.

The new desktop integration includes an indicator showing the current status and notifications telling when fixes have been applied.

You can also get more details on the corresponding CVEs from the Livepatch configuration UI.

You can always hide the indicator using the toggle if you prefer to keep your top panel clean and simple.
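
If you prefer the command line, or want Livepatch on a server, it can also be enabled without the GUI. A minimal sketch, assuming the canonical-livepatch snap and a token obtained from your Ubuntu One account:

sudo snap install canonical-livepatch
sudo canonical-livepatch enable <token>    # <token> is a placeholder for your own key
canonical-livepatch status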

Enjoy the increased security in between reboots!

on August 08, 2019 07:32 PM

S12E18 – Pilotwings

Ubuntu Podcast from the UK LoCo

This week we’ve been running Steam in the cloud via an NVIDIA SHIELD TV. We discuss whether we even need new distros, and whether it’s more Linux apps we need. Plus we bring you some GUI love and go over all your feedback.

It’s Season 12 Episode 18 of the Ubuntu Podcast! Mark Johnson, Martin Wimpress and Mattias Wernér are connected and speaking to your brain.

In this week’s show:

  • We discuss what we’ve been up to recently:
    • Martin has been setting up Steam with Family view and library sharing in the “nvidia cloud” using the NVIDIA SHIELD TV
    • Mattias has been snapping Strife.
  • We discuss creating new distros vs. creating new Linux apps, and how we can advocate for more app development.

  • We share a GUI Lurve:

  • And we go over all your amazing feedback – thanks for sending it – please keep sending it!
  • Image taken from Pilotwings, published in 1990 for the Super Nintendo Entertainment System by Nintendo.

That’s all for this week! You can listen to the Ubuntu Podcast back catalogue on YouTube. If there’s a topic you’d like us to discuss, or you have any feedback on previous shows, please send your comments and suggestions to show@ubuntupodcast.org or Tweet us or Toot us or Comment on our Facebook page or comment on our sub-Reddit.

on August 08, 2019 02:00 PM

August 07, 2019

You have just run lxc launch ubuntu:18.04 mycontainer and a new container is being created. The command returns very quickly (in around 1-2s) and the container starts running. It may take a few more seconds for the container to complete the startup, while the init system performs all the required tasks.

The problem

The question is: how do you know programmatically when a container’s init has really finished and the startup has been completed?

First, though, why do we need to know when a running container’s startup has really been completed? We need to know when we write automation scripts. Some commands in an automation script will fail if the container has not fully completed the startup. For example, the ubuntu:18.04 container images create a non-root account (username: ubuntu). This account is created near the end of the startup process; therefore, if we try to execute commands relating to ubuntu on a container that has not completed the startup, those commands will fail.

Towards a solution

The proper way to solve this issue is to use a feature of the init subsystem of the container image that can tell us when it has completed the startup.

Ubuntu 16.04 and newer container images use systemd, and systemd has functionality to report whether the system has completed the startup: running systemctl is-system-running. When a container is starting up, the state is initializing. As soon as it has completed the startup, the state switches to running.

$ systemctl is-system-running
running

Five issues

The first issue is that systemd should have a feature to wait for us, instead of us having to check in a loop (polling) for when the state changes. It actually does: support was added to systemd in August 2018 as “systemctl: add support for --wait to is-system-running” (#9796). Translating this into Ubuntu versions, it means that in Ubuntu 19.04 or newer, we can use --wait as in systemctl is-system-running --wait. Very easy.

The second issue is with Ubuntu versions prior to Ubuntu 19.04, where we need to perform polling. Polling has some complications. Some systemd targets will fail if they are set to run but are not able to complete in a container, so the end state per systemd will be degraded instead of running. Therefore, when polling, we need to check for either of these two states.

The third issue is that as soon as LXD launches a container, it takes a little while for systemd to start up and be able to respond to requests for its state. You get the error Failed to connect to bus: No such file or directory if you ask too soon.

The fourth issue is that in newer versions of systemd that have the --wait parameter, the command will fail with Failed to connect to bus: No such file or directory if we run it too soon. This means that a simple systemctl is-system-running --wait is not sufficient. We need a bit of polling of our own until systemd is ready to report the state.

The fifth issue is that both of the following cases return the same error code, 1: systemctl is-system-running when it gives the error Failed to connect to bus: No such file or directory, and systemctl is-system-running when the result is degraded. That means that we need to be careful when we consume the error message through the return value, because the return value is not unique to one error message.

Here is the sequence of states for systemd when it starts in an Ubuntu LXD container. In parentheses is the number of times I got each message on my test system, until systemd completed the startup (reaching the state degraded).

Failed to connect to bus: No such file or directory      (185 times)
initializing                                             (189 times)                                             
starting                                                 (168 times)
degraded                                                

Now we are ready to put all these together and have a solution for Ubuntu 19.04 (or newer), and a solution for Ubuntu 16.04/18.04.

Solution for Ubuntu 19.04 or newer

The following example script installs a snap package as soon as the container has fully started. The first lxc exec command waits until systemctl is-system-running stops returning an error (return value 1). The second lxc exec command waits until the container has finished the startup.

$ cat myscript-1904newer.sh
lxc stop mycontainer
lxc delete mycontainer
lxc launch ubuntu:19.04 mycontainer
lxc exec mycontainer -- bash -c 'while systemctl is-system-running &>/dev/null; [ $? -eq 1 ]; do :; done'
lxc exec mycontainer -- systemctl is-system-running --wait
lxc exec mycontainer -- sudo snap install hello

Note: This script checks the return value of systemctl is-system-running. When systemd is not available yet, the return value is 1. When the command returns degraded, the return value is also 1. Which means, bummer! We can make use of the --wait parameter, but we cannot get a properly foolproof solution without resorting to some polling of our own. However, in the case of Ubuntu 19.04 or newer, the startup tends to take more time because snapd has to start as well. Therefore, it is unlikely to hit the case where systemd has completed the startup immediately and reports degraded (return value 1).

Solution for Ubuntu 16.04 and Ubuntu 18.04 (but also Ubuntu 19.04 and newer)

Use the following example script. You can run it repeatedly in order to verify that it works well. It has been tested with Ubuntu 16.04, Ubuntu 18.04 and Ubuntu 19.04.

$ cat myscript-1804older.sh
lxc stop mycontainer
lxc delete mycontainer
lxc launch ubuntu:18.04 mycontainer
lxc exec mycontainer -- bash -c 'while [ "$(systemctl is-system-running 2>/dev/null)" != "running" ] && [ "$(systemctl is-system-running 2>/dev/null)" != "degraded" ]; do :; done'
lxc exec mycontainer -- sudo snap install hello

Conclusion

As an overall solution, I suggest using the last script, which does polling. It works on Ubuntu 16.04, Ubuntu 18.04 and Ubuntu 19.04. Here again is the line that waits while the container mycontainer has not yet completed the startup.

lxc exec mycontainer -- bash -c 'while [ "$(systemctl is-system-running 2>/dev/null)" != "running" ] && [ "$(systemctl is-system-running 2>/dev/null)" != "degraded" ]; do :; done'
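
If you use this in several scripts, it can be convenient to wrap that line in a small helper function. This is a sketch of mine rather than something from the post; the function name is arbitrary:

wait_for_container() {
    # Poll until systemd inside the given container reports "running" or "degraded"
    local name="$1"
    lxc exec "$name" -- bash -c 'while [ "$(systemctl is-system-running 2>/dev/null)" != "running" ] && [ "$(systemctl is-system-running 2>/dev/null)" != "degraded" ]; do :; done'
}

wait_for_container mycontainer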

on August 07, 2019 05:03 PM

We’ve been hard at work optimizing Xfce’s screensaver to give users the best possible lock and screensaver experience in Xfce. With 0.1.6 and 0.1.7, we’ve dropped even more legacy code, while implementing a long-requested feature, per-screensaver configuration!

What’s New?

New Features

  • Added support for on-screen keyboards. This option adds a button to the login window to show and hide the keyboard at the bottom of the screen.
  • Added per-screensaver configuration. The available options are pulled from the xscreensaver theme file and are stored via Xfconf (see the sketch after this list).
  • Improved background drawing when using 2x scaling.
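
As a rough sketch of where the per-screensaver settings end up (the channel name here is an assumption based on the project's naming, not something stated in these notes), you can inspect them with xfconf-query:

$ xfconf-query -c xfce4-screensaver -l             # list all stored properties
$ xfconf-query -c xfce4-screensaver -p <property>  # read one property; <property> is a placeholder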

Bug Fixes

  • Fixed flickering within the password dialog (0.1.6)
  • Fixed various display issues with the password dialog, all themes should now render xfce4-screensaver identically to lightdm-gtk-greeter (0.1.6)
  • Fixed confusion between screensaver and lock timeouts (Xfce #15726)
  • Removed reference to pkg-config (.pc) file (0.1.6) (Xfce #15597)

Code Cleanup

  • Cleaned up kbd-indicator logic (0.1.6)
  • Consolidated debug function calls (0.1.6)
  • Dropped libXxf86 dependency (MATE Screensaver #199)
  • Dropped lots of unused or unneeded code, significantly streamlining the codebase
  • Migrated xfce4-screensaver-command to GDBus
  • Moved job theme processing out of gs-manager (0.1.6)
  • Removed full-screen window shaking on failed login
  • Simplified handling of user preferences (0.1.6)
  • Simplified lock screen and screensaver activation code

Translation Updates

Armenian (Armenia), Belarusian, Bulgarian, Catalan, Chinese (China), Chinese (Taiwan), Czech, Danish, Dutch, Finnish, French, Galician, German, Hebrew, Hungarian, Italian, Lithuanian, Malay, Norwegian Bokmål, Polish, Portuguese, Portuguese (Brazil), Russian, Spanish, Turkish

Downloads

Source tarball (md5, sha1, sha256)

Xfce Screensaver is included in Xubuntu 19.10 “Eoan Ermine”, installed with the xfce4-screensaver package.

on August 07, 2019 01:51 AM

August 06, 2019

Here’s a brief changelog of what we’ve been up to since our last general update.

Bugs

  • Add basic GitLab bug linking (#1603679)
  • Expect the upstream bug ID in the “number” field of GitHub issue objects, not the “id” field (#1824728)
  • Include metadata-only bug changes in Person:+commentedbugs

Build farm

  • Filter ASCII NUL characters out of build logtails (#1831500)
  • Encode non-bytes subprocess arguments on Python 2 to avoid crashing on non-ASCII file names under LC_CTYPE=C (#1832072)

Code

  • Don’t preload recipe data when deleting recipes associated with branches or repositories, and add some more job indexes (#1793266, #1828062)
  • Fix crash if checkRefPermissions finds that the repository is nonexistent
  • Add a rescan button to branch merge proposals for failed branch or repository scans
  • Land parts of the work required for Git HTTPS push tokens, though this is not yet complete (#1824399)
  • Refactor code import authorisation to be clearer and safer
  • Set line-height on <pre> elements in Bazaar file views
  • Work in progress to redeploy Launchpad’s Git backend on more scalable infrastructure

Infrastructure

  • Upgrade to PostgreSQL 10
  • Fix make-lp-user, broken by the fix for #1576142
  • Use our own GPG key retrieval implementation when verifying signatures rather than relying on auto-key-retrieve
  • Give urlfetch a default timeout, fixing a regression in process-mail (#1820552)
  • Make test suite pass on Ubuntu 18.04
  • Retry webhook deliveries that respond with 4xx for an hour rather than a day
  • Merge up to a current version of Storm
  • Upgrade to Celery 4.1.1
  • Move development sites from .dev to .test
  • Upgrade to Twisted 19.2.1
  • Upgrade to requests 2.22.0
  • Use defusedxml to parse untrusted XML
  • Improve caching of several delegated authorization checks (#1834625)

Registry

  • Fix redaction in pillar listings of projects for which the user only has LimitedView (#1650430)
  • Tighten up the permitted pattern for newly-chosen usernames

Snappy

  • Landed parts of the work required to support private snap builds, though this is not yet complete (#1639975)
  • Generalise snap channel handling slightly, allowing channel selection for core16 and core18
  • Add build-aux/snap/snapcraft.yaml to the list of possible snapcraft.yaml paths (#1805219)
  • Add build-request-id and build-request-timestamp to SNAPCRAFT_IMAGE_INFO
  • Allow selecting source snap channels when requesting manual snap builds (#1791265)
  • Push build start timestamps to the store, and use release intents so that builds are more reliably released to channels in the proper sequence (#1684529)
  • Try to manually resolve symlinks in remote Git repositories when fetching snapcraft.yaml (#1797366)
  • Consistently commit transactions in SnapStoreUploadJob (#1833424)
  • Use build request jobs for all snap build requests in the web UI
  • Honour “base: bare” and “build-base” when requesting snap builds (#1819196)

Soyuz (package management)

  • Add command-not-found metadata in the archive to the Release file
  • Check the .deb format using dpkg-deb rather than ar
  • Add s390x Secure Initial Program Load signing support (#1829749)
  • Add u-boot Flat Image Tree signing support (#1831942)
  • Use timeout(1) to limit debdiff rather than using alarm(3) ourselves
  • Allow configuring the binary file retention period of a LiveFS (#1832477)
  • Import source packages from Debian bullseye
on August 06, 2019 06:16 PM

Enhancing our ZFS support on Ubuntu 19.10 - an introduction

Ubuntu has supported ZFS as an option for some time. We started with a file-based ZFS pool on Ubuntu 15.10, then delivered it as a FS container in 16.04, and recommended it for the fastest and most reliable container experience on LXD.

We have also created some dedicated tutorials for users who want to become more familiar with ZFS concepts, such as on basic layouts and taking snapshots.
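
As a taste of what those tutorials cover, taking and rolling back a snapshot looks roughly like this (the pool and dataset names are purely illustrative):

sudo zfs snapshot rpool/home@before-change    # take a point-in-time snapshot
sudo zfs list -t snapshot                     # list existing snapshots
sudo zfs rollback rpool/home@before-change    # return the dataset to that state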

To do all this, we are using the excellent ZFS On Linux implementation, which has a vibrant and active upstream community. It is built as a kernel module, and therefore no DKMS is involved.

Three years ago we spent time looking at the licensing which applies to the Linux kernel and to ZFS. Our conclusions are that we are acting within the rights granted and in compliance with the terms of both licenses.

By working towards adding support for ZFS as the root file system we will bring the benefits of ZFS to Ubuntu users through an easy to use interface and automated operations, abstracting some of the complexity while still allowing flexibility for power users.

ZFS & Ubuntu 19.10

We announced 6 months ago that support for deploying Ubuntu root on ZFS with MAAS was available as an experimental feature.

So, what’s new for Ubuntu 19.10 (Eoan Ermine)? As has already been reported, and as spotted in our weekly team report on Ubuntu Discourse, we are going to enhance ZFS on root support over the coming cycles. Ubuntu 19.10 is a first round towards that goal.

We want to support ZFS on root as an experimental installer option, initially for desktop, but keeping the layout extensible for server later on. The desktop will be the first beneficiary in Ubuntu 19.10. Note the use of the term ‘experimental’ though! As we want to get the dataset layout right, and as a file system is crucial because it’s responsible for all your data, we don’t want to encourage people to use it on production systems yet, or at least not without regular backups. The option will be highlighted as such - you have been warned! However, feel free to play with it and pass on feedback.

What’s already in Eoan?

The work started several weeks ago, and Eoan already has some nice improvements concerning ZFS:

  • We are shipping ZFS On Linux version 0.8.1, with features like native encryption, trimming support, checkpoints, raw encrypted zfs send/receive streams, project accounting and quota, and a lot of performance enhancements. You can see more about the 0.8 and 0.8.1 releases on the ZOL project release page directly.
  • We backported (and will continue to backport) some post-release upstream fixes as they fit, to provide the best user experience and reliability.
  • We added new support in the GRUB menu. A small preview is available below, and a more detailed blog post will be published later on.

Any existing ZFS on root user will automatically get those benefits as soon as they update to Ubuntu 19.10.

Incoming work with zsys

The goal here is to make some of the basic and advanced ZFS concepts easily accessible and transparent to anyone, like providing automated snapshots, an easy way to roll back, offline instant updates, easy backup support and so on. This is to make a solid and robust system which is configured correctly by default. This work therefore focuses on not requiring a deep understanding of ZFS, while still being able to make use of the advanced features.

However, we are aware that some system administrators are very passionate about the file systems and want to be in control. This is why we designed our system in such a way that it can cope with manual tweaking, is easy to understand for people having some know-how on ZFS, and still remains very flexible.

Finally, we want to provide some best practices in terms of the ZFS dataset layout for various needs. For example, a daily desktop user will need reliability and for it to be easy to revert to a stable state, while a system administrator will want to optimise, tweak heavily and have persistent datasets, even when rolling back his/her server operating system.

For this, we are developing a new user space daemon, named zsys. This will cooperate with GRUB (but is not limited to it) and ZFS on Linux initramfs to give advanced features we’ll describe later on. Our goal is to upstream as much as possible to GRUB and zol project maintainers when things are solid enough.

Our current progress and what’s up next are accessible via our public project board under the Ubuntu GitHub organisation via this link.

We didn’t do it alone

Thanks to early press coverage, we got in touch with Richard Laager, who maintains the upstream HOWTO on root on ZFS for Ubuntu. After some back and forth on the draft specification, we came to some good conclusions, and he slightly modified the HOWTO to be more compatible with our plans.

Similarly, Marcin Skarbek got in touch as well and wants to bring zsys to Fedora (but some adjustments will be needed, making this a longer-term project).

More work ahead

As you can see, the future of ZFS as root on Ubuntu is bright. We still have a lot to tackle and 19.10 will be only the beginning of the journey. However, the path forward is exciting and we hope to be able to bring something fresh and unique to ZFS users.

More blog posts will follow to shed more light on these enhancements and report on our status.

Join the discussion via the dedicated Ubuntu discourse thread.

on August 06, 2019 07:36 AM

August 05, 2019

Just a quick note to you all. I just submitted a session to SXSW in Austin, and there is a community voting component to this. Can you guess what I would love you to do?

Yup, to go and vote for it. 🙂

I have never spoken at or been to SXSW, so this would help enormously! It only takes a few minutes, and I would really appreciate your help.

As many of you know, the broader goal of my work is to produce more collaborative, impactful, productive communities at work, at home, and beyond.

My session, Hack The Network Effect: Customers to Contributors, is focused on getting this potential out to the SXSW audience:

Business is changing. Gone are the days of customers passively consuming your product, with little to no interaction beyond occasional support tickets. Consumers want meaningful, connected relationships with the businesses and organizations they love.

The future of business is enabling your customers to play an active role not just in their own success, but in that of their peers and your brand. Done well, this can build remarkable brand loyalty, retention, and innovation, and reduce costs.

Salesforce, Star Citizen, Random House, HackerOne, Sephora, and others have done it. Now it is your turn.

Jono Bacon, author of ‘People Powered’ by HarperCollins Leadership, presents the combination of psychology, workflow, branding, and technology that delivers this, packed with pragmatic next steps.

The overall focus of the session is to provide an overview of how communities can be powerfully harnessed to build more engaging, more productive relationships between businesses and their customers. My goal is three key takeaways:

  1. Human beings crave roles in meaningful, impactful communities. Businesses can harness this need to enable customers to channel peer contributions.
  2. These communities can generate diverse contributions: support, technology, fundraising, advocacy, and more, and careful incentives build retention.
  3. Marketing needs modernizing to harness this: social/content is not enough. Peer recognition, social capital, and right relationships are critical.

I would love it if you could go and vote for the session (you don’t have to be an attendee of SXSW to vote). It should only take a few minutes:

CLICK HERE TO VOTE

Thanks! 🙂

The post Please Vote! Hack The Network Effect: Customers to Contributors appeared first on Jono Bacon.

on August 05, 2019 07:54 PM

Ubuntu LTS releases are already available from the Microsoft Store (link) as apps, but there are other ways of installing Ubuntu on the Windows Subsystem for Linux. You can import an Ubuntu tarball with the wsl command, or use the graphical interfaces listed at https://wiki.ubuntu.com/WSL#Third-party_tools for managing custom tarballs.

Such Ubuntu tarballs were not publicly available in the past, but now I proudly present the downloadable WSL tarballs for Ubuntu 16.04 LTS (Xenial), 18.04 LTS (Bionic), 19.04 (Disco), and for the still-in-development 19.10 (Eoan) release. Thanks to everyone in the Ubuntu Foundations, Certified Public Cloud and Desktop Teams who helped to make this happen!

While we still recommend installing the Ubuntu WSL app from the Microsoft Store, using the tarballs lets you maintain more parallel Ubuntu instances, which can be handy for experiments, and you can also take a look at the next Ubuntu versions!
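
For example, importing a downloaded tarball as a fresh instance looks roughly like this (run in PowerShell; the distro name, install path and tarball filename are placeholders):

wsl --import Ubuntu-Eoan C:\WSL\Ubuntu-Eoan .\ubuntu-eoan-wsl.tar.gz    # create the instance
wsl -d Ubuntu-Eoan                                                      # start a shell in it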

If you find a program or HOW-TO that installs other Ubuntu tarballs for WSL, please help by asking its author to use the official tarballs, because only these WSL tarballs include the WSL integration packages described in the previous post!

on August 05, 2019 07:54 AM

August 03, 2019

We are delighted to announce that the registrations for Ubucon EU 2019 are open!

Registration is completely free, and it is not mandatory in order to attend the event, although if you register you will receive some free swag.

If you register your entrance at Ubucon EU, you will receive:

  • Ubucon EU Sintra T-shirt;
  • Personalized badge with a name of your choice;
  • Probably more swag that we cannot confirm at this time.

On the same registration form, you can also register for our cultural events which will happen on the days preceding Ubucon; more information available on this post.

Hurry up! We have a limited number of free entrance tickets to the cultural sites and free swag to give away to those registering for the event, so make sure to reserve yours on our registration form.

If you do not fancy registering for the event, that is also completely fine by us; the event remains free and open to everybody with an interest in attending.

We hope to see you soon in Sintra!

on August 03, 2019 11:37 AM

August 01, 2019

Flight with discounts

Ubucon Europe 2019

TAP Air Portugal is now our Official Carrier Partner. We’ve made a sweet agreement for all participants and adult companions. Check out all the information on its offer here.

TAP Air Portugal

TAP is Portugal’s leading airline and a member of the global airline Star Alliance since 2005. Flying since 1945, TAP celebrated its 70th anniversary on March 14, 2015, before completing its privatization process later that year, now with the Atlantic Gateway Group as private shareholders.

As of Summer 2017, TAP’s network comprises 84 destinations in 34 countries worldwide. The airline currently operates around 2,500 weekly flights, with a modern fleet of 63 Airbus aircraft.  TAP Express, the airline’s regional arm, operates an additional 17 aircraft.

TAP is one of Europe’s most awarded airlines.  Global Traveler (USA) named TAP as Best Airline in Europe for seven consecutive years, from 2011 to 2017, and the World Travel Awards named TAP as both Europe’s Leading Airline to Africa and Europe’s Leading Airline to South America from 2014 – 2017. Previously TAP was awarded World’s Leading Airline to Africa, in 2011 and 2012, and World’s Leading Airline to South America from 2009 through 2012.  TAP’s Inflight Magazine, UP, received the World Travel Award as Europe’s Leading In-flight Magazine for 2015, 2016 and 2017.  

on August 01, 2019 10:52 PM

July 31, 2019

DC19 Group Photo

Group photo above taken at DebConf19 by Aigars Mahinovs.

2019-07-03: Upload calamares-settings-debian (10.0.20-1) (CVE 2019-13179) to debian unstable.

2019-07-05: Upload calamares-settings-debian (10.0.25-1) to debian unstable.

2019-07-06: Debian Buster Live final ISO testing for release, also attended Cape Town buster release party.

2019-07-08: Sponsor package ddupdate (0.6.4-1) for debian unstable (mentors.debian.net request, RFS: #931582)

2019-07-08: Upload package btfs (2.19-1) to debian unstable.

2019-07-08: Upload package calamares (3.2.11-1) to debian unstable.

2019-07-08: Request update for util-linux (BTS: #931613).

2019-07-08: Upload package gnome-shell-extension-dashtodock (66-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-multi-monitors (18-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-system-monitor (38-1) to debian unstable.

2019-07-08: Upload package gnome-shell-extension-tilix-dropdown (7-1) to debian unstable.

2019-07-08: Upload package python3-aniso8601 (7.0.0-1) to debian unstable.

2019-07-08: Upload package python3-flask-restful (0.3.7-2) to debian unstable.

2019-07-08: Upload package xfce4-screensaver (0.1.6) to debian unstable.

2019-07-09: Sponsor package wordplay (8.0-1) (mentors.debian.net request).

2019-07-09: Sponsor package blastem (0.6.3.2-1) (mentors.debian.net request) (Closes RFS: #931263).

2019-07-09: Upload gnome-shell-extension-workspaces-to-dock (50-1) to debian unstable.

2019-07-09: Upload bundlewrap (3.6.1-2) to debian unstable.

2019-07-09: Upload connectagram (1.2.9-6) to debian unstable.

2019-07-09: Upload fracplanet (0.5.1-5) to debian unstable.

2019-07-09: Upload fractalnow (0.8.2-4) to debian unstable.

2019-07-09: Upload gnome-shell-extension-dash-to-panel (19-2) to debian unstable.

2019-07-09: Upload powerlevel9k (0.6.7-2) to debian unstable.

2019-07-09: Upload speedtest-cli (2.1.1-2) to debian unstable.

2019-07-11: Upload tetzle (2.1.4+dfsg1-2) to debian unstable.

2019-07-11: Review mentors.debian.net package hipercontracer (1.4.1-1).

2019-07-15 – 2019-07-28: Attend DebCamp and DebConf!

My DebConf19 mini-report:

There is really too much to write about everything that happened at DebConf; I hope to get some time and write separate blog entries on it all really soon.

  • Participated in Bursaries BoF, I was chief admin of DebConf bursaries in this cycle. Thanks to everyone who already stepped up to help with next year.
  • Gave a lightning talk titled “Can you install Debian within a lightning talk slot?” where I showed off Calamares on the latest official live media. Spoiler alert: it doesn’t quite fit in the allotted time, something to fix for bullseye!
  • Participated in a panel called “Surprise, you’re a manager!“.
  • Hosted “Debian Live BoF” – we made some improvements for the live images during the buster cycle, but there’s still a lot of work to do so we held a session to cut out our initial work for Debian 11.
  • Got the debbug and missed the day trip, I hope to return to this part of Brazil one day, so much to explore in just the surrounding cities.
  • The talk selection this year was good; there’s a lot that I learned and caught up on that I probably wouldn’t have if it weren’t for DebConf. Talks were recorded (http archive, YouTube), so you can catch up. PS: If you find something funny, please link it (with a time stamp) on the FunnyMoments wiki page (that page is way too bare right now).

on July 31, 2019 06:51 PM

July 30, 2019

Full Circle Weekly News #141

Full Circle Magazine


Mozilla Firefox Could Soon Get a “Tor mode” Add-on

https://news.softpedia.com/news/mozilla-firefox-could-soon-get-a-tor-mode-add-on-526774.shtml

Critical Flaw in VLC Media Player Discovered by German Cybersecurity Agency

https://news.softpedia.com/news/critical-flaw-in-vlc-media-player-discovered-by-german-cybersecurity-agency-526768.shtml

Hackers Exploit Jira [and] Exim Linux Servers to “Keep the Internet Safe”

https://www.bleepingcomputer.com/news/security/hackers-exploit-jira-exim-linux-servers-to-keep-the-internet-safe/

Dropbox Is Bringing Back Support for ZFS, XFS, BTRFS, and eCryptFS on Linux

https://itsfoss.com/dropbox-brings-back-linux-filesystem-support/

Announcing Coreboot 4.10

https://blogs.coreboot.org/blog/2019/07/22/announcing-coreboot-4-10/

Canonical Outs New Linux Kernel Security Updates for Ubuntu 19.04 and 18.04

https://news.softpedia.com/news/canonical-outs-new-linux-kernel-security-updates-for-ubuntu-19-04-and-18-04-lts-526818.shtml

Ubuntu OpenStack Architecture to Empower BT’s Next-Gen 5g Cloud Core

https://news.softpedia.com/news/canonical-s-ubuntu-openstack-architecture-to-empower-bt-s-next-gen-5g-cloud-core-526834.shtml

Virtualbox 6.0.10 Adds UEFI Secure Boot Driver Signing Support on Ubuntu [and] Debian

https://news.softpedia.com/news/virtualbox-6-0-10-adds-uefi-secure-boot-driver-signing-support-on-ubuntu-debian-526817.shtml

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on July 30, 2019 03:47 PM

Joining Purism!

Sam Hewitt

Personal news time! Starting in August I’m going to be joining the team at Purism working on the design of PureOS and related software products, but what I’m very excited about is that I get to continue to work on GNOME design!

Purism Logo

I have to thank Purism for even offering me this opportunity; it is beyond my wildest expectations that I would get to work on Free Software professionally, let alone in design!

on July 30, 2019 02:00 PM

July 28, 2019

Lubuntu 18.10, our first release with LXQt, has reached End of Life as of July 18, 2019. This means that no further security updates or bugfixes will be released. We highly recommend that you update to 19.04 as soon as possible if you are still running Lubuntu 18.10. The only currently-supported releases of Lubuntu today […]
on July 28, 2019 01:13 AM

July 27, 2019

I’ve begun to think about what I’ll take to Hacker Summer Camp this year, and I thought I’d share some of it as part of my Hacker Summer Camp blog post series. I hope it will be useful to veterans, but particularly to first timers who might have no idea what to expect – as that’s how I felt my first time.

Since it’s gotten so close, I’ll also talk about what steps you should take to protect yourself.

Packing

General Packing

I won’t state the obvious in terms of packing most of your basic needs, including clothing and toiletries, but I will remind you that Las Vegas will be super hot. Bring clothes for hot days, and pack deodorant! Keep in mind that some of the clubs have a dress code, so if that’s your thing, you’ll want to bring clubbing clothes. (The dress code tends not to be too high, but often pants and a collared shirt.)

I will suggest bringing a reusable water bottle to help cope with the heat. Just before last summer camp, I bought a Simple Modern vacuum insulated bottle, and I absolutely love it. I’ll bring it again this year to stay hydrated. Because I hate heat, I’ll also be bringing a cooling towel, which is surprisingly effective at cooling me off. Perhaps it’s a placebo effect, but I’ll take it.

Remember that large parts of DEF CON are cash only, so you’ll need to bring cash (obviously). At least $300 for a badge, plus more for swag, bars, etc. ATMs on the casino floors are probably safe to use, but will still charge you fairly hefty fees.

Tech Gear

There are two schools of thought on bringing tech gear: minimalist and kitchen sink. I happen to be on the kitchen sink side of things. I’ll be bringing my laptop and a whole bunch of accessories. In fact, I have a whole travel kit that I’ll detail in a future post, but a few highlights include:

On the other hand, some people want the disconnected experience and bring little to no tech. Sometimes this is because of concerns over “being hacked”, but sometimes this is just to focus on the face-to-face time.

Shipping

There are some consumables where I just find it easier to ship to my hotel. Note that the hotel will charge you for receiving a package, but I still find it cheaper/easier to have these things delivered directly.

Getting a case of water delivered is much cheaper than buying from the hotel gift shop. Another option is to hit up a CVS or Walgreens on the strip for some bottled water.

I’m a bit of a Red Bull addict, so I often get a few packs delivered to have on hand. The Red Bull Red Edition is a nice twist on the classic that’s worth a try if you haven’t had the pleasure.

Safety & Security

DEF CON has a reputation for being the “most dangerous network in the world”, but I think this is completely overblown. It defies logic that an attacker with a 0-day on a modern operating system would use it to perform untargeted attacks at DEF CON. If their traffic is captured, they’ve burned their 0-day, probably just to grab some random attendee’s data – it’s just not worth it to them.

That being said, you shouldn’t make yourself a target either. There are some simple steps you can (and should) take to protect yourself:

  • Use a VPN service for your traffic. I like Private Internet Access for a commercial provider.
  • Don’t connect to open WiFi networks.
  • Don’t accept certificate errors.
  • Don’t plug your phone into strange USB plugs.
  • Use HTTPS.

These are all simple steps to protect yourself, both at DEF CON, and in general. You really ought to observe them all the time – the internet is a dangerous place in general!

To be honest, I worry more about physical security in Las Vegas – don’t carry too much cash, keep your wits about you, and watch your belongings. Use the in-room safe (they’re not perfect, but they’re better than nothing) to protect your goods.

Be aware of hotel policies on entering rooms – ever since the Las Vegas shooting, they’ve become much more invasive with forcing their way into hotel rooms. I recommend keeping anything valuable locked up and out of sight, and be aware of potential impostors using the pretext of being a hotel employee.

Good luck, and have fun in just over a week!

on July 27, 2019 07:00 AM

July 25, 2019

Ep 60 – Rumo ao Monte da Lua

Podcast Ubuntu Portugal

The drama of dropping 32-bit Intel architecture support, and news about Ubucon Europe 2019. Not forgetting some rather uninteresting details of the lives of this podcast's participants… You know the drill: listen, subscribe and share!

  • https://discourse.ubuntu.com/t/intel-32bit-packages-on-ubuntu-from-19-10-onwards/11263/
  • https://ubuntu.com/blog/statement-on-32-bit-i386-packages-for-ubuntu-19-10-and-20-04-lts
  • https://ubucon.eu
  • Propose a talk: https://sintra2019.ubucon.org/call-for-papers-announcement/
  • Become a volunteer: https://framaforms.org/volunteers-voluntarios-ubucon-europe-2019-sintra-1559899302

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–at–gmail.com.

Another way to support us is to use the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to support the Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want.

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (in the same way as in the suggested links) and you will also be supporting us.

Attribution and licenses

“Dingo” by Central Highlands Regional Council Libraries is licensed under CC BY 2.0

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to other licensing arrangements to allow other kinds of use; contact us for validation and authorisation.

on July 25, 2019 09:27 PM

July 24, 2019

As announced, cloud-init 19.2 was released last Wednesday! From the announcement, some highlights include:

  • FreeBSD enhancements: added NoCloud datasource support, added growfs support for rootfs, and updated tools/build-on-freebsd for python3
  • Arch distro added netplan rendering support
  • cloud-init analyze reporting on boot events

And of course numerous bug fixes and other enhancements. Version 19.1 is already available in Ubuntu Eoan. A stable release update (SRU) to Ubuntu 18.
on July 24, 2019 12:00 AM

July 22, 2019

Full Circle Weekly News #140

Full Circle Magazine


GNU Linux-Libre 5.2 Kernel Released
https://news.softpedia.com/news/gnu-linux-libre-5-2-kernel-released-for-those-seeking-100-freedom-for-their-pcs-526671.shtml

Tails 3.15 Fixes Critical Bugs
https://tails.boum.org/news/version_3.15/index.en.html

Mozilla’s Add-Ons Outage Post-Mortem Result
https://hacks.mozilla.org/2019/07/add-ons-outage-post-mortem-result/

Ransomware uses Brute-Force SSH Attacks to Infect Linux-Based NAS Servers
https://thehackernews.com/2019/07/ransomware-nas-devices.html

Linux Mint 19.2 “Tina” Beta Is Here With Cinnamon, Mate and XFCE
https://betanews.com/2019/07/16/linux-mint-192-tina-beta-ubuntu/

New EvilGnome Backdoor Spies on Linux Users, Steals Their Files

https://www.bleepingcomputer.com/news/security/new-evilgnome-backdoor-spies-on-linux-users-steals-their-files/

Ubuntu 18.10 ‘Cosmic Cuttlefish’ Reaches End of Life
https://www.theinquirer.net/inquirer/news/3079174/ubuntu-1810-end-of-life

Credits:
Ubuntu “Complete” sound: Canonical
 
Theme Music: From The Dust – Stardust

https://soundcloud.com/ftdmusic
https://creativecommons.org/licenses/by/4.0/

on July 22, 2019 03:13 PM

July 20, 2019

Desk lamp

Sebastian Kügler

Desk lamp with mirror behind

Some time ago, I wanted to make my own desk lamp. It should provide soft, bright task lighting above my desk, with no sharp shadows that could cover part of my work area, but also some atmospheric lighting around the desk in my basement office. The lamp should have a natural look around it, but since I made it myself, I also didn’t mind exposing some of its internals.

SMD5050 LED strips

I had oak floor boards lying around that I got from a friend (thanks, Wendy!), which I used as the base material for the lamp. I combined these with some RGBW LED strips that I also had lying around, and a wireless controller that would allow me to connect the lamp to the Philips Hue lighting system that I use throughout the house to control the lights. I sanded the wood until it was completely smooth, and then gave it an oiled finish to make it durable and give it a more pronounced texture.

Fixed to the ceiling
Internals of the desk lamp

The center board is covered in 0.5 mm aluminium sheets to dissipate heat from the LED strips (again, making them last longer) and provide some extra diffusion of the light. This material is easy to work with, and also very suitable for sticking the LED strips to. For the light itself, I used SMD5050 LED strips that can produce warm and cold white light, as well as RGB colors. I put 3 rows of strips next to each other to provide enough light. The strips wrap around at the top, so light is not just shining down on my desk, but also reflecting from the walls and ceiling around it. To avoid looking directly into the LEDs, which would be distracting, annoying when working, and also quite ugly, I attached a front and a back board to the lamp, making it into an H shape.

Light reflects nicely from surrounding surfaces

The controller (a Gledopto Zigbee controller, which is compatible with Philips Hue) is attached to the center board as well, so I just needed to run two 12V wires to the lamp. I was being a bit creative here, and thought “why not use the power cables to also hang the lamp from the ceiling?”. I used coated steel wire, which I stripped here and there, so that power runs through steel hooks screwed into the ceiling to supply the lamp while also allowing me to adjust its height. This ended up creating a rather clean look for the whole lamp and really brought the whole thing together.

on July 20, 2019 04:02 PM

July 19, 2019

Kubuntu 18.10 reaches end of life

Kubuntu General News

Kubuntu 18.10 Cosmic Cuttlefish was released on October 18th 2018 with 9 months of support. As of 18th July 2019, 18.10 has reached ‘end of life’. No more package updates will be accepted for 18.10, and it will be archived to old-releases.ubuntu.com in the coming weeks.

The official end of life announcement for Ubuntu as a whole can be found here [1].

Kubuntu 19.04 Disco Dingo continues to be supported, receiving security and high-impact bugfix updates until January 2020.

Users of 18.10 can follow the Kubuntu 18.10 to 19.04 Upgrade [2] instructions.

Should your upgrade be delayed for some reason, and you find that the 18.10 repositories have been archived to old-releases.ubuntu.com, instructions to perform an EOL upgrade can be found on the Ubuntu wiki [3].

Thank you for using Kubuntu 18.10 Cosmic Cuttlefish.

The Kubuntu team.

[1] – https://lists.ubuntu.com/archives/ubuntu-announce/2019-July/000247.html
[2] – https://help.ubuntu.com/community/DiscoUpgrades/Kubuntu
[3] – https://help.ubuntu.com/community/EOLUpgrades

on July 19, 2019 09:32 AM

July 18, 2019

Ep 59 – Caça aos gambozinos

Podcast Ubuntu Portugal

In this episode we were once again joined by João Jotta and André Paula from Linuxtechpt, with whom we discussed security and privacy practices, and snaps. You know the drill: listen, subscribe and share!

  • https://linuxtech.pt/
  • https://ubucon.eu
  • https://sintra2019.ubucon.org/call-for-papers-announcement/
  • https://framaforms.org/volunteers-voluntarios-ubucon-europe-2019-sintra-1559899302

Support

This episode was produced and edited by Alexandre Carrapiço (Thunderclaws Studios – sound capture, production, editing, mixing and mastering); contact: thunderclawstudiosPT–at–gmail.com.

Another way to support us is to use the Humble Bundle affiliate links: when you use those links to make a purchase, part of what you pay goes to support the Podcast Ubuntu Portugal.
You can get all of it for 15 dollars, or different parts depending on whether you pay 1 or 8.
We think this is worth well over 15 dollars, so if you can, pay a little more, since you have the option of paying as much as you want.

    • Bundle suggestions:
  • https://www.humblebundle.com/books/open-source-bookshelf?partner=pup
  • https://www.humblebundle.com/books/programmable-boards-make-books?partner=pup

If you are interested in other bundles, append ?partner=pup to the end of the link for any bundle (in the same way as in the suggested links) and you will also be supporting us.

Attribution and licenses

“Dingo” by PaulBalfe is licensed under CC BY 2.0

The theme music is “Won’t see it comin’ (Feat Aequality & N’sorte d’autruche)” by Alpha Hydrae, licensed under the [CC0 1.0 Universal License](https://creativecommons.org/publicdomain/zero/1.0/).

This episode is licensed under the Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, whose full text can be read here. We are open to other licensing arrangements to allow other kinds of use; contact us for validation and authorisation.

on July 18, 2019 02:23 PM
As of today, July 18, 2019, Ubuntu Studio 18.10 has reached the end of its support cycle. We strongly urge all users of 18.10 to upgrade to Ubuntu Studio 19.04 for support through January 2020 and then after the release of Ubuntu Studio 19.10, codenamed Eoan Ermine, in October 2019 which will also be supported […]
on July 18, 2019 01:00 AM