Fedora People

Why integrate fedmsg with kiskadee

Posted by David Carlos on June 28, 2017 06:20 PM

In this post we will discuss why we decided to integrate kiskadee [1] with fedmsg [2], and how this integration will enable us to easily monitor the source code of several different projects. The next post will explain how this integration was done.

There is an initiative in Fedora called Anitya [3]. Anitya is a project version monitoring system that monitors upstream releases and broadcasts them on fedmsg. Registering a project on Anitya is quite simple: you need to inform the project homepage, the system used to host the project, and some other information required by the system. After the registration process, Anitya checks every day whether a new release of the project has been published. If so, it publishes the new release, in JSON format, on fedmsg. In the context of Anitya, the systems used to host projects are called backends; you can check all the supported backends at https://release-monitoring.org/about.
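
To give a feel for these notifications, here is a rough, hypothetical sketch of the shape of such a message, written as a Python dict. The topic string and field names are assumptions for illustration, not Anitya's exact schema:

# Hypothetical sketch of an Anitya "new version" notification on fedmsg.
# Field names and values are illustrative assumptions, not the exact schema.
anitya_notification = {
    'topic': 'org.release-monitoring.prod.anitya.project.version.update',
    'msg': {
        'project': {
            'name': 'memcached',                 # hypothetical project
            'backend': 'GitHub',                 # the Anitya backend used
            'homepage': 'https://memcached.org',
        },
        'old_version': '1.4.38',
        'upstream_version': '1.4.39',            # the newly published release
    },
}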

The Fedora infrastructure has several different services that need to talk to each other. One simple example is the AutoQA service, which listens to some events triggered by the fedpkg library. If only two services interact, the problem is minimal, but when several applications send requests and responses to several other applications, the problem becomes huge. fedmsg (FEDerated MeSsaGe bus) is a Python package and API defining a brokerless messaging architecture for sending and receiving messages to and from applications. Anitya uses this messaging architecture to publish the new releases of registered projects on the bus. Any application subscribed to the bus can retrieve these events. Note that fedmsg is a whole architecture, so we need some mechanism to subscribe to the bus and some mechanism to publish on it. fedmsg-hub is a daemon used to interact with the fedmsg bus, and it is what kiskadee uses to consume the new releases published by Anitya.
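
To make this concrete, here is a minimal sketch of a fedmsg-hub consumer, built on the fedmsg.consumers.FedmsgConsumer base class; it is illustrative only, not kiskadee's actual code, and the topic string is the same assumption as above:

# Minimal sketch of a fedmsg-hub consumer (illustrative, not kiskadee's code).
from fedmsg.consumers import FedmsgConsumer

class AnityaConsumer(FedmsgConsumer):
    # Only receive Anitya's "new version" messages (topic is an assumption).
    topic = 'org.release-monitoring.prod.anitya.project.version.update'
    # fedmsg-hub enables this consumer when this key is set in its config.
    config_key = 'anityaconsumer'

    def consume(self, message):
        # message['body'] carries the JSON payload published by Anitya;
        # a real consumer would hand it off to an analysis queue.
        self.log.info("New upstream release: %s", message['body'])

A consumer like this would still need to be registered with fedmsg-hub (via a Python entry point) and enabled in the hub's configuration.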

Once kiskadee can receive notifications that a new release of some project was made, and that this project will be packaged for Fedora, we can trigger an analysis without having to watch the Fedora repositories directly. Obviously this is a generic solution that will analyze many upstreams, including the ones that will be packaged, but it is a first step toward our goal: helping the QA team and the distribution monitor the quality of the upstreams that will become Fedora packages.

[1]https://pagure.io/kiskadee
[2]http://www.fedmsg.com/en/latest/
[3]https://release-monitoring.org

Episode 53 - A plane isn't like a car

Posted by Open Source Security Podcast on June 28, 2017 12:14 PM
Josh and Kurt talk about security through obscurity, airplanes, the FAA, the Windows source code leak, and chicken sandwiches.

Collecting Docker infrastructure logs using syslog-ng

Posted by Peter Czanik on June 28, 2017 11:39 AM

Why use syslog-ng for collecting Docker logs? Docker already provides many drivers for logging, even for central log collection. On the other hand, remote logging drivers arrive with a minimalist feature set, and you are not able to use the “docker logs” command any more. To have the best of both worlds, you can use the journald logging driver in Docker and use syslog-ng to read Docker logs from journald and forward log messages to your central log server or other destinations. You can even run syslog-ng itself in a Docker container, so you can use it on dedicated Docker host environments as well, where it is not possible to install additional applications.


Why syslog-ng?

If you log in to one of your Docker hosts, it is convenient to use the “docker logs” command to view logs of your containers locally. This only works if you use local logging, either the journald or the json-file logging method.

There are multiple logging drivers included in Docker for remote logging. Syslog and other protocols are supported. The problem with this approach is that these drivers are fairly limited. They include a minimalist feature set and it is not possible to filter or process log messages in any way before they are sent to their destination. Also, if the network connection to the central server is down for some reason, messages can easily be lost.

In an ideal world you can both use the “docker logs” command locally and collect your infrastructure logs at a central location for long-term storage and analysis. This can be achieved by using the journald driver in Docker for collecting logs. The journald driver is the default choice on Fedora Linux and derivatives like RHEL or CentOS. To enable it on openSUSE or SLES, add the --log-driver=journald option to the DOCKER_OPTS variable in /etc/sysconfig/docker and restart Docker.

You can use syslog-ng to read log messages from journald, process them, filter them, and either store them locally, on a central syslog server, or at one of the many supported destinations of syslog-ng, including Hadoop, Kafka, MongoDB or Elasticsearch.

The syslog-ng application can run directly on the host system but also in a Docker container. Running syslog-ng in a container is the only way if you use some of the dedicated Docker host environments – like Atomic Host – where you cannot even install applications directly on the host operating system.

Below, I will show how you can run syslog-ng in a container, collect structured log messages from journald, and save them locally in JSON format, preserving all data.


A simple configuration

First, create a simple configuration for syslog-ng. After declaring the version number of syslog-ng, we also define two sources: one for the journal and another for syslog-ng’s internal messages. This helps in resolving any run-time problems. Next, define two file destinations. For log messages coming from the journal, use a JSON template, so all name-value pairs can be recorded in a structured format. Docker creates some extra fields, like CONTAINER_NAME and different identifiers related to the containers. At the end, the two log statements connect the previously defined building blocks together.

@version: 3.10
source s_journal {
  systemd-journal(prefix("journal."));
};

source s_internal {
  internal();
};

destination d_int {
  file("/var/log/int");
};

destination d_file {
  file("/var/log/journal.json" template("$(format_json --scope rfc5424 --key journal.*)\n\n"));
};

log {source(s_journal); destination(d_file); };
log {source(s_internal); destination(d_int); };

This configuration is good enough to get you started and you can check that syslog-ng is already collecting log messages. Once everything works as expected, you can create more complex configurations with filters, Big Data destinations and more.


Starting syslog-ng

The following command starts syslog-ng in a Docker container. It assumes that the configuration and data are not in the container, but mapped from the host file system. This makes configuring syslog-ng and checking the log files easier. The path for the configuration is /data/syslog-ng/conf/journal.conf and for the logs it is /data/syslog-ng/logs/.

docker run -ti -v /etc/machine-id:/etc/machine-id -v /data/syslog-ng/conf/journal.conf:/etc/syslog-ng/syslog-ng.conf -v /data/syslog-ng/logs/:/var/log -v /var/log/journal:/var/log/journal --name journal balabit/syslog-ng:latest

When you start the container using the command above, it will be started in the foreground and you will not receive the command prompt back. While this makes debugging easier, it is not for production use. In my case, I see a message about configuration versions (running 3.10 with a 3.9 configuration), but in an ideal case nothing is displayed on screen:

[root@localhost ~]# docker run -ti -v /etc/machine-id:/etc/machine-id -v /data/syslog-ng/conf/journal.conf:/etc/syslog-ng/syslog-ng.conf -v /data/syslog-ng/logs/:/var/log -v /var/log/journal:/var/log/journal --name journal balabit/syslog-ng:latest
syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'
[2017-06-27T09:29:19.032130] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode Please update it to use the syslog-ng 3.10 format at your time of convenience, compatibility mode can operate less efficiently in some cases. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file.;

Let’s see the parameters of this Docker command in detail:

  • “run” starts a command in a new container
  • “-ti” starts interactive mode (use -d in production)
  • “-v /etc/machine-id:/etc/machine-id” mapping is required, because journald stores the logs in a directory named after the machine-id
  • “-v /data/syslog-ng/conf/journal.conf:/etc/syslog-ng/syslog-ng.conf” maps the configuration from the host file system into the container. If you use a more complex configuration with includes or crypto keys, map a directory instead.
  • “-v /data/syslog-ng/logs/:/var/log” maps the directory where we store logs on the host file system into the container
  • “-v /var/log/journal:/var/log/journal” maps the directory with journal logs from the host file system into the container
  • “--name journal” assigns the name “journal” to the container
  • “balabit/syslog-ng:latest” is the name of the Docker image to start

When you first execute this command, it can take a few minutes until syslog-ng is up and running, because the image is downloaded over the Internet. On subsequent executions, Docker will use the local copy and start immediately.


Testing

You can check that syslog-ng is collecting logs properly by taking a look at the log files created by syslog-ng. There are two log files. If you used the above directory structure and configuration, /data/syslog-ng/logs/int contains the internal messages of syslog-ng. You should see logs similar to this in that file:

Jun 27 09:29:19 7e4fcd8247c4 syslog-ng[1]: syslog-ng starting up; version='3.10.1'

The file /data/syslog-ng/logs/journal.json has your log messages read from the journal in JSON format. It might be a huge file if you had many messages in the journal. The following is a sample log message, the same syslog-ng warning message as above, together with CONTAINER_NAME and other Docker-related name-value pairs:

{"journal":{"_UID":"0","_TRANSPORT":"journal","_SYSTEMD_UNIT":"docker.service","_SYSTEMD_SLICE":"system.slice","_SYSTEMD_CGROUP":"/system.slice/docker.service","_SOURCE_REALTIME_TIMESTAMP":"1498555672081993","_PID":"972","_MACHINE_ID":"c299b16e64ac46e6ac3c38ac5da988c0","_HOSTNAME":"localhost.localdomain","_GID":"0","_EXE":"/usr/bin/dockerd-current","_COMM":"dockerd-current","_CMDLINE":"/usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdriver=systemd --userland-proxy-path=/usr/libexec/docker/docker-proxy-current --selinux-enabled --log-driver=journald --signature-verification=false","_CAP_EFFECTIVE":"1fffffffff","_BOOT_ID":"7b55ae658381491983949f5cf5274425","PRIORITY":"6","MESSAGE":"[2017-06-27T09:27:52.081729] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode Please update it to use the syslog-ng 3.10 format at your time of convenience, compatibility mode can operate less efficiently in some cases. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file.;\r","CONTAINER_TAG":"","CONTAINER_NAME":"journal","CONTAINER_ID_FULL":"8c1a79c3d0a150102057cfff010ec72f4fd2cb383bc60a784018fe9c6b6ea721","CONTAINER_ID":"8c1a79c3d0a1"},"PROGRAM":"dockerd-current","PRIORITY":"info","PID":"972","MESSAGE":"[2017-06-27T09:27:52.081729] WARNING: Configuration file format is too old, syslog-ng is running in compatibility mode Please update it to use the syslog-ng 3.10 format at your time of convenience, compatibility mode can operate less efficiently in some cases. To upgrade the configuration, please review the warnings about incompatible changes printed by syslog-ng, and once completed change the @version header at the top of the configuration file.;\r","HOST":"58559f2f1026","FACILITY":"local0","DATE":"Jun 27 09:27:52"}
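
Since the template above writes each record as one JSON object per line (with blank separator lines), the file is easy to post-process. As a small illustration (a hypothetical helper, not part of syslog-ng), the following Python sketch prints the container name and message of each container log entry, assuming the log path used above:

#!/usr/bin/env python
# Hypothetical helper: extract container names and messages from the
# JSON log file written by syslog-ng (path as configured above).
import json

with open('/data/syslog-ng/logs/journal.json') as log:
    for line in log:
        line = line.strip()
        if not line:  # the template adds blank separator lines
            continue
        record = json.loads(line)
        journal = record.get('journal', {})
        # CONTAINER_NAME is only set for messages coming from containers
        if 'CONTAINER_NAME' in journal:
            print('{0}: {1}'.format(journal['CONTAINER_NAME'],
                                    record.get('MESSAGE', '')))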

What is next?

Once a log message is in syslog-ng you have practically endless possibilities. You can parse messages, enrich them or process them in many other ways. As a next step, you can make sure that only relevant messages are stored by using message filtering. Finally, you have to store the messages somewhere. It could be your central syslog-ng server, Elasticsearch, Kafka or one of the many other destinations supported by syslog-ng.

It is certainly not our goal to teach you Docker or syslog-ng in-depth in this blog post. If you are interested and would like to know more, check these resources:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Collecting Docker infrastructure logs using syslog-ng appeared first on Balabit Blog.

Testing modules and containers with Modularity Testing Framework

Posted by Fedora Magazine on June 28, 2017 10:50 AM

Fedora Modularity is a project within Fedora with the goal of building a modular operating system with multiple versions of components on different lifecycles. Fedora 26 features the first look at the modularity vision: the Fedora 26 Boltron Server. However, if you are jumping into the modularity world, creating and deploying your own modules and containers, your next question may be how to test these artifacts. The Modularity Testing Framework (MTF) has been designed for testing artifacts such as modules, RPM base repos, containers, and other artifact types. It helps you write tests easily, and the tests are also independent of the type of the module.

MTF is a minimalistic library built on the existing avocado and behave testing frameworks, enabling developers to set up test automation for various module aspects and requirements quickly. MTF adds basic support and abstraction for testing various module artifact types: RPM based, docker images, ISOs, and more. For detailed information about the framework and how to use it, check out the MTF Documentation.

Installing MTF

The Modularity Testing Framework is available in the official Fedora repositories. Install MTF using the command:

dnf install -y modularity-testing-framework

A COPR is available if you want to use the untested, unstable version. Install via COPR with the commands:

dnf copr enable phracek/Modularity-testing-framework
dnf install -y modularity-testing-framework

Writing a simple test

Creating a testing directory structure

First, create the tests/ directory in the root directory of the module. In the tests/ directory, create a Makefile:

MODULE_LINT=/usr/share/moduleframework/tools/modulelint/*.py
# Try not to use "*.py"; list the test files explicitly instead,
# e.g. TESTS=sanity1.py sanity2.py (separated by spaces)
TESTS=*.py

CMD=python -m avocado run $(MODULE_LINT) $(TESTS)

#
all:
    generator  # run this when tests are also defined in the config.yaml file (described below)
    $(CMD)

In the root directory of the module, create a Makefile containing a test section. For example:

.PHONY: build run default

IMAGE_NAME = memcached

MODULEMDURL=file://memcached.yaml

default: run

build:
    docker build --tag=$(IMAGE_NAME) .

run: build
    docker run -d $(IMAGE_NAME)

test: build
    # used for testing docker image available on Docker Hub. Dockerfile 
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker.io/modularitycontainers/memcached" make all
    # used for testing docker image available on locally.
    # Dockerfile and relavant files has to be stored in root directory of the module.
    cd tests; MODULE=docker MODULEMD=$(MODULEMDURL) URL="docker=$(IMAGE_NAME)" make all
    # This tests "modules" on local system.
    cd tests; MODULE=rpm MODULEMD=$(MODULEMDURL) URL="https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/" make all

In the tests/ directory, place the config.yaml configuration file for module testing. See minimal-config.yaml. For example:

document: modularity-testing
version: 1
name: memcached
modulemd-url: http://raw.githubusercontent.com/container-images/memcached/master/memcached.yaml
service:
    port: 11211
packages:
    rpms:
        - memcached
        - perl-Carp
testdependecies:
    rpms:
        - nc
module:
    docker:
        start: "docker run -it -e CACHE_SIZE=128 -p 11211:11211"
        labels:
            description: "memcached is a high-performance, distributed memory"
            io.k8s.description: "memcached is a high-performance, distributed memory"
        source: https://github.com/container-images/memcached.git
        container: docker.io/modularitycontainers/memcached
    rpm:
        start: /usr/bin/memcached -p 11211 &
        repo:
           - https://kojipkgs.fedoraproject.org/compose/latest-Fedora-Modular-26/compose/Server/x86_64/os/

test:
    processrunning:
        - 'ls  /proc/*/exe -alh | grep memcached'
testhost:
    selfcheck:
        - 'echo errr | nc localhost 11211'
        - 'echo set AAA 0 4 2 | nc localhost 11211'
        - 'echo get AAA | nc localhost 11211'
    selcheckError:
        - 'echo errr | nc localhost 11211 |grep ERROR'


Add the sanity1.py Python file, which tests a service or an application, into the tests/ directory:

#!/usr/bin/python
# -*- coding: utf-8 -*-
#
# This Modularity Testing Framework helps you to write tests for modules
# Copyright (C) 2017 Red Hat, Inc.

import socket
from avocado import main
from avocado.core import exceptions
from moduleframework import module_framework


class SanityCheck1(module_framework.AvocadoTest):
    """
    :avocado: enable
    """

    def testSettingTestVariable(self):
        self.start()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('localhost', self.getConfig()['service']['port']))
        # "set" declares a 4-byte payload, so send exactly 4 bytes of data
        s.sendall('set Test 0 100 4\r\nJOHN\r\n')
        #data = s.recv(1024)
        # print data

        s.sendall('get Test\r\n')
        #data = s.recv(1024)
        # print data
        s.close()

    def testBinExistsInRootDir(self):
        self.start()
        self.run("ls / | grep bin")

    def test3GccSkipped(self):
        module_framework.skipTestIf("gcc" not in self.getActualProfile())
        self.start()
        self.run("gcc -v")

if __name__ == '__main__':
    main()

Running tests

To execute tests from the root directory of the module, type:

# run tests from a module root directory
$ sudo make test

The result looks like:

docker build --tag=memcached .
Sending build context to Docker daemon 268.3 kB
Step 1 : FROM baseruntime/baseruntime:latest
---> 0cbcd55844e4
Step 2 : ENV NAME memcached ARCH x86_64
---> Using cache
---> 16edc6a5f7b6
Step 3 : LABEL MAINTAINER "Petr Hracek" <phracek@redhat.com>
---> Using cache
---> 693d322beab2
Step 4 : LABEL summary "High Performance, Distributed Memory Object Cache" name "$FGC/$NAME" version "0" release "1.$DISTTAG" architecture "$ARCH" com.redhat.component $NAME usage "docker run -p 11211:11211 f26/memcached" help "Runs memcached, which listens on port 11211. No dependencies. See Help File below for more details." description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.description "memcached is a high-performance, distributed memory object caching system, generic in nature, but intended for use in speeding up dynamic web applications by alleviating database load." io.k8s.diplay-name "Memcached 1.4 " io.openshift.expose-services "11211:memcached" io.openshift.tags "memcached"
---> Using cache
---> eea936c1ae23
Step 5 : COPY repos/* /etc/yum.repos.d/
---> Using cache
---> 920155da88d9
Step 6 : RUN microdnf --nodocs --enablerepo memcached install memcached &&     microdnf -y clean all
---> Using cache
---> c83e613f0806
Step 7 : ADD files /files
---> Using cache
---> 7ec5f42c0064
Step 8 : ADD help.md README.md /
---> Using cache
---> 34702988730f
Step 9 : EXPOSE 11211
---> Using cache
---> 577ef9f0d784
Step 10 : USER 1000
---> Using cache
---> 671ac91ec4e5
Step 11 : CMD /files/memcached.sh
---> Using cache
---> 9c933477acc1
Successfully built 9c933477acc1
cd tests; MODULE=docker MODULEMD=file://memcached.yaml URL="docker=memcached" make all
make[1]: Entering directory '/home/phracek/work/FedoraModules/memcached/tests'
Added test (runmethod: run): processrunning
Added test (runmethod: runHost): selfcheck
Added test (runmethod: runHost): selcheckError
python -m avocado run --filter-by-tags=-WIP /usr/share/moduleframework/tools/modulelint.py *.py
JOB ID     : 9ba3a3f9fd982ea087f4d4de6708b88cee15cbab
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/job.log
(01/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerFromBaseruntime: PASS (1.52 s)
(02/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testDockerRunMicrodnf: PASS (1.53 s)
(03/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testArchitectureInEnvAndLabelExists: PASS (1.63 s)
(04/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testNameInEnvAndLabelExists: PASS (1.61 s)
(05/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testReleaseLabelExists: PASS (1.60 s)
(06/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testVersionLabelExists: PASS (1.45 s)
(07/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testComRedHatComponentLabelExists: PASS (1.64 s)
(08/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIok8sDescriptionExists: PASS (1.51 s)
(09/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenshiftExposeServicesExists: PASS (1.50 s)
(10/20) /usr/share/moduleframework/tools/modulelint.py:DockerfileLinter.testIoOpenShiftTagsExists: PASS (1.53 s)
(11/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testBasic: PASS (13.75 s)
(12/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testContainerIsRunning: PASS (14.19 s)
(13/20) /usr/share/moduleframework/tools/modulelint.py:DockerLint.testLabels: PASS (1.57 s)
(14/20) /usr/share/moduleframework/tools/modulelint.py:ModuleLintPackagesCheck.test: PASS (14.03 s)
(15/20) generated.py:GeneratedTestsConfig.test_processrunning: PASS (13.77 s)
(16/20) generated.py:GeneratedTestsConfig.test_selfcheck: PASS (13.85 s)
(17/20) generated.py:GeneratedTestsConfig.test_selcheckError: PASS (14.32 s)
(18/20) sanity1.py:SanityCheck1.testSettingTestVariable: PASS (13.86 s)
(19/20) sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (13.81 s)
(20/20) sanity1.py:SanityCheck1.test3GccSkipped: ERROR (13.84 s)
RESULTS    : PASS 19 | ERROR 1 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 144.85 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.25-9ba3a3f/html/results.html
Makefile:6: recipe for target 'all' failed
make[1]: *** [all] Error 1
Makefile:14: recipe for target 'test' failed
make: *** [test] Error 2
$

To execute tests from the tests/ directory, type:

# run Python tests from the tests/ directory
$ sudo MODULE=docker avocado run ./*.py

The result looks like:

$ sudo MODULE=docker avocado run ./*.py
[sudo] password for phracek:
JOB ID     : 2a171b762d8ab2c610a89862a88c015588823d29
JOB LOG    : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/job.log
(1/6) ./generated.py:GeneratedTestsConfig.test_processrunning: PASS (24.79 s)
(2/6) ./generated.py:GeneratedTestsConfig.test_selfcheck: PASS (18.18 s)
(3/6) ./generated.py:GeneratedTestsConfig.test_selcheckError: ERROR (24.16 s)
(4/6) ./sanity1.py:SanityCheck1.testSettingTestVariable: PASS (18.88 s)
(5/6) ./sanity1.py:SanityCheck1.testBinExistsInRootDir: PASS (17.87 s)
(6/6) ./sanity1.py:SanityCheck1.test3GccSkipped: ERROR (19.30 s)
RESULTS    : PASS 4 | ERROR 2 | FAIL 0 | SKIP 0 | WARN 0 | INTERRUPT 0 | CANCEL 0
JOB TIME   : 124.19 s
JOB HTML   : /root/avocado/job-results/job-2017-06-14T16.43-2a171b7/html/results.html

6: The bull gets a notification/ alert

Posted by Angela Pagan (Outreachy) on June 28, 2017 09:00 AM


6: the bull gets a notification/ alert

-AnsiBull then receives a status notification that saturn-rover-05 didn’t complete the playbook.

7: The bull tries to run the playbook again

Posted by Angela Pagan (Outreachy) on June 28, 2017 09:00 AM


7: The bull tries to run the playbook again

-He runs the playbook again so that any systems that happened to fall in a ditch or get attacked by a Rigellian felinoid catch up on anything they missed. Rovers that already successfully completed all the tasks in the playbook’s plays aren’t affected.

8: The rover is at the bottom of a ditch

Posted by Angela Pagan (Outreachy) on June 28, 2017 09:00 AM


8: The rover is at the bottom of a ditch

-This little rover (saturn-rover-05) fell in a ditch while receiving a mission update from a playbook run. It has temporarily lost its network connection. Oh no! What will happen?

Fedora 26 Upgrade Test Day 2017-06-30

Posted by Fedora Community Blog on June 28, 2017 07:26 AM

Friday, 2017-06-30, is the Fedora 26 Upgrade Test Day! As part of this planned Change for Fedora 26, we need your help to test if everything runs smoothly!

Why Upgrade Test Day?

As we approach the Final Release date for Fedora 26, most users will be upgrading to Fedora 26, and this test day will help us understand if everything is working perfectly. This test day will cover both a GNOME graphical upgrade and an upgrade done using DNF.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 26 Upgrade Test Day 2017-06-30 appeared first on Fedora Community Blog.

FLISoL Bilwi event report

Posted by Eduardo Mayorga on June 28, 2017 06:13 AM

The free software community in Bilwi organized FLISoL (the Latin American free software install fest) this year, Bilwi being the only city in Nicaragua holding the event on April 22nd (Masaya held it a week later). I had previously gotten in touch with one of the community members, Robert Müller, at the Central American Free Software Meeting in 2016, but I had never visited Bilwi before. FLISoL took place at URACCAN. Since this university does not offer many engineering programs, the expected audience was a little different from what I was used to.

Besides installations, there were talks open to the public. I helped to arrange the Fedora booth, the only booth in the event. Attendees were attracted by install media and t-shirts that were raffled after the last talk.

I did a talk on “What’s new in Fedora 26.” There were many questions after the presentation, and the audience was invited to follow up at the Fedora stand. I also co-presented on OpenStreetMap with the event organizer, Robert Müller.

This was not the only Fedora-related presentation at this event. Omar Berroterán spoke on Fedora for home entertainment and Naima Fernández did a talk on educational robotics with Icaro. This last talk was very engaging and the university staff even asked her to do more sessions on Icaro via Hangouts.

I personally helped to install Fedora on one machine, which unfortunately turned out to have a failing hard disk drive, so the installation was unsuccessful. However, Omar Berroterán managed to install Fedora on many other computers!

We are very thankful to the FLISoL staff in Bilwi and URACCAN for providing the facilities and support for the free software community in Bilwi, and also thankful to the attendees in general for their interest in the Fedora Project. There are now more Fedora users in Bilwi (we met a bunch of longtime users at the stand) and chances are the community will continue to grow. I hope to visit Bilwi in the near future.

1: The drills don’t work

Posted by Angela Pagan (Outreachy) on June 28, 2017 03:50 AM


1: The drills don’t work

2: The rover is sad

3: The Bull checks the screen

4: The Bull sees that other ice drilling rovers are having problems

5: The Bull finds this odd

6: The Bull finds the planets with those rovers

7: They set the ship in motion

8: & off they go!

1: Tractor beam picks up the rover

Posted by Angela Pagan (Outreachy) on June 28, 2017 03:44 AM


1: Tractor beam picks up the rover

2: The rover is dropped onto/ into the ship

3: The Bull carries the rover

4: The Bull’s helmet pops back on

5: The Bull deploys a screen for the rover

6: The rover reads the playbook

7: A continuation of the playbook

8: The rover tries to run the playbook and the drills spin

mkosi — A Tool for Generating OS Images

Posted by Lennart Poettering on June 27, 2017 10:00 PM

Introducing mkosi

After blogging about casync I realized I never blogged about the mkosi tool that combines nicely with it. mkosi has been around for a while already, and it's time to make it a bit better known. mkosi stands for Make Operating System Image, and is a tool for precisely that: generating an OS tree or image that can be booted.

Yes, there are many tools like mkosi, and a number of them are quite well known and popular. But mkosi has a number of features that I think make it interesting for a variety of use-cases that other tools don't cover that well.

What is mkosi?

What are those use-cases, and what precisely sets mkosi apart? mkosi is definitely a tool with a focus on developers' needs for building OS images, for testing and debugging, but also for generating production images with cryptographic protection. A typical use-case would be to add a mkosi.default file to an existing project (for example, one written in C or Python), thus making it easy to generate an OS image for it. mkosi will put together the image with development headers and tools, compile your code in it, run your test suite, then throw away the image again, and build a new one, this time without development headers and tools, and install your build artifacts in it. This final image is then "production-ready", and only contains your built program and the minimal set of packages you configured otherwise. Such an image could then be deployed with casync (or any other tool of course) to be delivered to your set of servers, or IoT devices or whatever you are building.

mkosi is supposed to be legacy-free: the focus is clearly on today's technology, not yesteryear's. Specifically this means that we'll generate GPT partition tables, not MBR/DOS ones. When you tell mkosi to generate a bootable image for you, it will make it bootable on EFI, not on legacy BIOS. The GPT images generated follow specifications such as the Discoverable Partitions Specification, so that /etc/fstab can remain unpopulated and tools such as systemd-nspawn can automatically dissect the image and boot from it.

So, let's have a look at the specific images it can generate:

  1. Raw GPT disk image, with ext4 as root
  2. Raw GPT disk image, with btrfs as root
  3. Raw GPT disk image, with a read-only squashfs as root
  4. A plain directory on disk containing the OS tree directly (this is useful for creating generic container images)
  5. A btrfs subvolume on disk, similar to the plain directory
  6. A tarball of a plain directory

When any of the GPT choices above are selected, a couple of additional options are available:

  1. A swap partition may be added in
  2. The system may be made bootable on EFI systems
  3. Separate partitions for /home and /srv may be added in
  4. The root, /home and /srv partitions may be optionally encrypted with LUKS
  5. The root partition may be protected using dm-verity, thus making offline attacks on the generated system hard
  6. If the image is made bootable, the dm-verity root hash is automatically added to the kernel command line, and the kernel together with its initial RAM disk and the kernel command line is optionally cryptographically signed for UEFI SecureBoot

Note that mkosi is distribution-agnostic. It currently can build images based on the following Linux distributions:

  1. Fedora
  2. Debian
  3. Ubuntu
  4. ArchLinux
  5. openSUSE

Note though that not all distributions are supported at the same feature level currently. Also, as mkosi is based on dnf --installroot, debootstrap, pacstrap and zypper, and those tools are not packaged universally on all distributions, you might not be able to build images for all those distributions on arbitrary host distributions.

The GPT images are put together in a way that they aren't just compatible with UEFI systems, but also with VM and container managers (that is, at least the smart ones, i.e. VM managers that know UEFI, and container managers that grok GPT disk images) to a large degree. In fact, the idea is that you can use mkosi to build a single GPT image that may be used to:

  1. Boot on bare-metal boxes
  2. Boot in a VM
  3. Boot in a systemd-nspawn container
  4. Directly run a systemd service off, using systemd's RootImage= unit file setting

Note that in all four cases the dm-verity data is automatically used if available to ensure the image is not tampered with (yes, you read that right, systemd-nspawn and systemd's RootImage= setting automatically do dm-verity these days if the image has it.)

Mode of Operation

The simplest way to use mkosi is to invoke it without parameters (as root):

# mkosi

Without any configuration this will create a GPT disk image for you, call it image.raw, and drop it in the current directory. The distribution used will be the same one your host runs.

Of course in most cases you want more control over how the image is put together, i.e. select package sets, select the distribution, size partitions and so on. Most of that you can actually specify on the command line, but it is recommended to instead create a couple of mkosi.$SOMETHING files and directories in some directory. Then, simply change to that directory and run mkosi without any further arguments. The tool will then look in the current working directory for these files and directories and make use of them (similar to how make looks for a Makefile…). Every single file/directory is optional, but if they exist they are honored. Here's a list of the files/directories mkosi currently looks for:

  1. mkosi.default — This is the main configuration file, here you can configure what kind of image you want, which distribution, which packages and so on.

  2. mkosi.extra/ — If this directory exists, then mkosi will copy everything inside it into the images built. You can place arbitrary directory hierarchies in here, and they'll be copied over whatever is already in the image, after it was put together by the distribution's package manager. This is the best way to drop additional static files into the image, or override distribution-supplied ones.

  3. mkosi.build — This executable file is supposed to be a build script. When it exists, mkosi will build two images, one after the other in the mode already mentioned above: the first version is the build image, and may include various build-time dependencies such as a compiler or development headers. The build script is also copied into it, and then run inside it. The script should then build whatever shall be built and place the result in $DESTDIR (don't worry, popular build tools such as Automake or Meson all honor $DESTDIR anyway, so there's not much to do here explicitly). It may also run a test suite, or anything else you like. After the script finishes, the build image is removed again, and a second image (the final image) is built. This time, no development packages are included, and the build script is not copied into the image again — however, the build artifacts from the first run (i.e. those placed in $DESTDIR) are copied into the image.

  4. mkosi.postinst — If this executable script exists, it is invoked inside the image (inside a systemd-nspawn invocation) and can adjust the image as it likes at a very late point in the image preparation. If mkosi.build exists, i.e. if the dual-phased development build process is used, then this script will be invoked twice: once inside the build image and once inside the final image. The first parameter passed to the script clarifies which phase it is run in.

  5. mkosi.nspawn — If this file exists, it should contain a container configuration file for systemd-nspawn (see systemd.nspawn(5) for details), which shall be shipped along with the final image and shall be included in the check-sum calculations (see below).

  6. mkosi.cache/ — If this directory exists, it is used as package cache directory for the builds. This directory is effectively bind mounted into the image at build time, in order to speed up building images. The package installers of the various distributions will place their package files here, so that subsequent runs can reuse them.

  7. mkosi.passphrase — If this file exists, it should contain a pass-phrase to use for the LUKS encryption (if that's enabled for the image built). This file should not be readable to other users.

  8. mkosi.secure-boot.crt and mkosi.secure-boot.key should be an X.509 key pair to use for signing the kernel and initrd for UEFI SecureBoot, if that's enabled.

How to use it

So, let's come back to our most trivial example, without any of the mkosi.$SOMETHING files around:

# mkosi

As mentioned, this will create an image file image.raw in the current directory. How do we use it? Of course, we could dd it onto some USB stick and boot it on a bare-metal device. However, it's much simpler to first run it in a container for testing:

# systemd-nspawn -bi image.raw

And there you go: the image should boot up, and just work for you.

Now, let's make things more interesting. Let's still not use any of the mkosi.$SOMETHING files:

# mkosi -t raw_btrfs --bootable -o foobar.raw
# systemd-nspawn -bi foobar.raw

This is similar to the above, but we made three changes: it's no longer GPT + ext4, but GPT + btrfs. Moreover, the system is made bootable on UEFI systems, and finally, the output is now called foobar.raw.

Because this system is bootable on UEFI systems, we can run it in KVM:

qemu-kvm -m 512 -smp 2 -bios /usr/share/edk2/ovmf/OVMF_CODE.fd -drive format=raw,file=foobar.raw

This will look very similar to the systemd-nspawn invocation, except that this uses full VM virtualization rather than container virtualization. (Note that the way to run a UEFI qemu/kvm instance appears to change all the time and is different on the various distributions. It's quite annoying, and I can't really tell you what the right qemu command line is to make this work on your system.)

Of course, it's not all raw GPT disk images with mkosi. Let's try a plain directory image:

# mkosi -d fedora -t directory -o quux
# systemd-nspawn -bD quux

Of course, if you generate the image as plain directory you can't boot it on bare-metal just like that, nor run it in a VM.

A more complex command line is the following:

# mkosi -d fedora -t raw_squashfs --checksum --xz --package=openssh-clients --package=emacs

In this mode we explicitly pick Fedora as the distribution to use, ask mkosi to generate a compressed GPT image with a root squashfs, compress the result with xz, and generate a SHA256SUMS file with the hashes of the generated artifacts. The image will contain the SSH client as well as everybody's favorite editor.

Now, let's make use of the various mkosi.$SOMETHING files. Let's say we are working on some Automake-based project and want to make it easy to generate a disk image off the development tree with the version you are hacking on. Create a configuration file:

# cat > mkosi.default <<EOF
[Distribution]
Distribution=fedora
Release=24

[Output]
Format=raw_btrfs
Bootable=yes

[Packages]
# The packages to appear in both the build and the final image
Packages=openssh-clients httpd
# The packages to appear in the build image, but absent from the final image
BuildPackages=make gcc libcurl-devel
EOF

And let's add a build script:

# cat > mkosi.build <<EOF
#!/bin/sh
./autogen.sh
./configure --prefix=/usr
make -j `nproc`
make install
EOF
# chmod +x mkosi.build

And with all that in place we can now build our project into a disk image, simply by typing:

# mkosi

Let's try it out:

# systemd-nspawn -bi image.raw

Of course, if you do this you'll notice that building an image like this can be quite slow. And slow build times are actively hurtful to your productivity as a developer. Hence let's make things a bit faster. First, let's make use of a package cache shared between runs:

# mkdir mkosi.cache

Building images now should already be substantially faster (and generate less network traffic) as the packages will now be downloaded only once and reused. However, you'll notice that unpacking all those packages and the rest of the work is still quite slow. But mkosi can help you with that. Simply use mkosi's incremental build feature. In this mode mkosi will make a copy of the build and final images immediately before dropping in your build sources or artifacts, so that building an image becomes a lot quicker: instead of always starting totally from scratch, a build will now reuse everything it can from a previous run, and immediately begin with building your sources rather than the image they are built in. To enable the incremental build feature use -i:

# mkosi -i

Note that if you use this option, the package list is not updated anymore from your distribution's servers, as the cached copy is made after all packages are installed, and hence until you actually delete the cached copy the distribution's network servers aren't contacted again and no RPMs or DEBs are downloaded. This means the distribution you use becomes "frozen in time" this way. (Which might be a bad thing, but also a good thing, as it makes things kinda reproducible.)

Of course, if you run mkosi a couple of times you'll notice that it won't overwrite the generated image when it already exists. You can either delete the file yourself first (rm image.raw) or let mkosi do it for you right before building a new image, with mkosi -f. You can also tell mkosi to not only remove any such pre-existing images, but also remove any cached copies of the incremental feature, by using -f twice.

I wrote mkosi originally in order to test systemd, and quickly generate a disk image of various distributions with the most current systemd version from git, without all that affecting my host system. I regularly use mkosi for that today, in incremental mode. The two commands I use most in that context are:

# mkosi -if && systemd-nspawn -bi image.raw

And sometimes:

# mkosi -iff && systemd-nspawn -bi image.raw

The latter I use only if I want to regenerate everything based on the very newest set of RPMs provided by Fedora, instead of a cached snapshot of it.

BTW, the mkosi files for systemd are included in the systemd git tree: mkosi.default and mkosi.build. This way, any developer who wants to quickly test something with current systemd git, or wants to prepare a patch based on it and test it, can check out the systemd repository, simply run mkosi in it, and a few minutes later have a bootable image to test in systemd-nspawn or KVM. casync has similar files: mkosi.default, mkosi.build.

Random Interesting Features

  1. As mentioned already, mkosi will generate dm-verity enabled disk images if you ask for it. For that use the --verity switch on the command line or the Verity= setting in mkosi.default. Of course, dm-verity implies that the root volume is read-only. In this mode the top-level dm-verity hash will be placed alongside the output disk image in a file named the same way, but with the .roothash suffix. If the image is created bootable, the root hash is also included on the kernel command line in the roothash= parameter, which current systemd versions can use to both find and activate the root partition in a dm-verity protected way. BTW: it's a good idea to combine this dm-verity mode with the raw_squashfs image mode, to generate a genuinely protected, compressed image suitable for running in your IoT device.

  2. As indicated above, mkosi can automatically create a check-sum file SHA256SUMS for you (--checksum) covering all the files it outputs (which could be the image file itself, a matching .nspawn file using the mkosi.nspawn file mentioned above, as well as the .roothash file for the dm-verity root hash.) It can then optionally sign this with gpg (--sign). Note that systemd's machinectl pull-tar and machinectl pull-raw commands can download these files and the SHA256SUMS file automatically and verify things on download. In other words: what mkosi outputs is perfectly ready for download using these two systemd commands.

  3. As mentioned, mkosi is big on supporting UEFI SecureBoot. To make use of that, place your X.509 key pair in two files mkosi.secureboot.crt and mkosi.secureboot.key, and set SecureBoot= or --secure-boot. If so, mkosi will sign the kernel/initrd/kernel command line combination during the build. Of course, if you use this mode, you should also use Verity=/--verity=, otherwise the setup makes only partial sense. Note that mkosi will not help you with actually enrolling the keys you use in your UEFI BIOS.

  4. mkosi has minimal support for git checkouts: when it recognizes it is run in a git checkout and you use the mkosi.build script stuff, the source tree will be copied into the build image, but with all files excluded by .gitignore removed.

  5. There's support for encryption in place. Use --encrypt= or Encrypt=. Note that the UEFI ESP is never encrypted though, and the root partition only if explicitly requested. The /home and /srv partitions are unconditionally encrypted if that's enabled.

  6. Images may be built with all documentation removed.

  7. The password for the root user and additional kernel command line arguments may be configured for the image to generate.

Minimum Requirements

Current mkosi requires Python 3.5, and has a number of dependencies, listed in the README. Most notably, you need a somewhat recent systemd version to make use of its full feature set: systemd 233. Older versions of mkosi are already packaged for various distributions, but much of what I describe above is only available in the most recent release, mkosi 3.

The UEFI SecureBoot support requires sbsign which currently isn't available in Fedora, but there's a COPR.

Future

It is my intention to continue turning mkosi into a tool suitable for:

  1. Testing and debugging projects
  2. Building images for secure devices
  3. Building portable service images
  4. Building images for secure VMs and containers

One of the biggest goals I have for the future is to teach mkosi and systemd/sd-boot native support for A/B IoT style partition setups. The idea is that the combination of systemd, casync and mkosi provides generic building blocks for building secure, auto-updating devices, even though all pieces may be used individually, too.

FAQ

  1. Why are you reinventing the wheel again? This is exactly like $SOMEOTHERPROJECT! — Well, to my knowledge there's no tool that integrates this nicely with your project's development tree, and can do dm-verity and UEFI SecureBoot and all that stuff for you. So nope, I don't think this is exactly like $SOMEOTHERPROJECT, thank you very much.

  2. What about creating MBR/DOS partition images? — That's really out of focus for me. This is an exercise in figuring out how generic OSes and devices in the future should be built and an attempt to commoditize OS image building. And no, the future doesn't speak MBR, sorry. That said, I'd be quite interested in adding support for booting on Raspberry Pi, possibly using a hybrid approach, i.e. using a GPT disk label, but arranging things in a way that the Raspberry Pi boot protocol (which is built around DOS partition tables) can still work.

  3. Is this portable? — Well, depends what you mean by portable. No, this tool runs on Linux only, and as it uses systemd-nspawn during the build process it doesn't run on non-systemd systems either. But then again, you should be able to create images for any architecture you like with it, though of course if you want the image bootable on bare-metal, only systems doing UEFI are supported (but systemd-nspawn should still work fine on them).

  4. Where can I get this stuff? — Try GitHub. And some distributions carry packaged versions, but I think none of them the current v3 yet.

  5. Is this a systemd project? — Yes, it's hosted under the systemd GitHub umbrella. And yes, during run-time systemd-nspawn in a current version is required. But no, the code-bases are separate otherwise, if only because systemd is a C project and mkosi is written in Python.

  6. Requiring systemd 233 is a pretty steep requirement, no? — Yes, but the feature we need kind of matters (systemd-nspawn's --overlay= switch), and again, this isn't supposed to be a tool for legacy systems.

  7. Can I run the resulting images in LXC or Docker? — Humm, I am not an LXC nor Docker guy. If you select directory or subvolume as image type, LXC should be able to boot the generated images just fine, but I didn't try. Last time I looked, Docker doesn't permit running proper init systems as PID 1 inside the container, as they define their own run-time without intention to emulate a proper system. Hence, no I don't think it will work, at least not with an unpatched Docker version. That said, again, don't ask me questions about Docker, it's not precisely my area of expertise, and quite frankly I am not a fan. To my knowledge neither LXC nor Docker are able to run containers directly off GPT disk images, hence the various raw_xyz image types are definitely not compatible with either. That means if you want to generate a single raw disk image that can be booted unmodified both in a container and on bare-metal, then systemd-nspawn is the container manager to go for (specifically, its -i/--image= switch).

Should you care? Is this a tool for you?

Well, that's up to you really.

If you hack on some complex project and need a quick way to compile and run your project on a specific current Linux distribution, then mkosi is an excellent way to do that. Simply drop the mkosi.default and mkosi.build files in your git tree and everything will be easy. (And of course, as indicated above: if the project you are hacking on happens to be called systemd or casync be aware that those files are already part of the git tree — you can just use them.)

If you hack on some embedded or IoT device, then mkosi is a great choice too, as it will make it reasonably easy to generate secure images that are protected against offline modification, by using dm-verity and UEFI SecureBoot.

If you are an administrator and need a nice way to build images for a VM or systemd-nspawn container, or a portable service then mkosi is an excellent choice too.

If you care about legacy computers, old distributions, non-systemd init systems, old VM managers, Docker, … then no, mkosi is not for you, but there are plenty of well-established alternatives around that cover that nicely.

And never forget: mkosi is an Open Source project. We are happy to accept your patches and other contributions.

Oh, and one unrelated last thing: don't forget to submit your talk proposal and/or buy a ticket for All Systems Go! 2017 in Berlin — the conference where things like systemd, casync and mkosi are discussed, along with a variety of other Linux userspace projects used for building systems.

How did the world ever work without Facebook?

Posted by Daniel Pocock on June 27, 2017 07:29 PM

Almost every day, somebody tells me there is no way they can survive without some social media like Facebook or Twitter. Otherwise mature adults are fearful that without these dubious services they would have no human contact ever again, that they would die of hunger and that the sky would come crashing down too.

It is particularly disturbing for me to hear this attitude from community activists and campaigners. These are people who aspire to change the world, but can you really change the system using the tools the system gives you?

Revolutionaries like Gandhi and the Bolsheviks don't have a lot in common: but both of them changed the world and both of them did so by going against the system. Gandhi, of course, relied on non-violence while the Bolsheviks continued to rely on violence long after taking power. Neither of them needed social media but both are likely to be remembered far longer than any viral video clip you have seen recently.

With US border guards asking visitors for their Facebook profiles and Mark Zuckerberg being a regular participant at secretive Bilderberg meetings, it should be clear that Facebook and conventional social media is not on your side, it's on theirs.

Kettling has never been easier

When street protests erupt in major cities such as London, the police build fences around the protesters, cutting them off from the rest of the world. They become an island in the middle of the city, like a construction site or broken down bus that everybody else goes around. The police then set about arresting one person at a time, taking their name and photograph and then slowly letting them leave in different directions. This strategy is called kettling.

Facebook helps kettle activists in their armchairs. The police state can gather far more data about them, while their impact is even more muted than if they ventured out of their home.

You are more likely to win the lottery than make a viral campaign

Every week there is news about some social media campaign that has gone viral. Every day, marketing professionals, professional campaigners and motivated activists sit at their computer spending hours trying to replicate this phenomenon.

Do the math: how many of these campaigns can really be viral success stories? Society can only absorb a small number of these campaigns at any one time. For most of the people trying to ignite such campaigns, their time and energy is wasted, much like money spent buying lottery tickets and with odds that are just as bad.

It is far better to focus on the quality of your work in other ways than to waste any time on social media. If you do something that is truly extraordinary, then other people will pick it up and share it for you and that is how a viral campaign really begins. The time and effort you put into trying to force something to become viral is wasting the energy and concentration you need to make something that is worthy of really being viral.

An earthquake and an escaped lion never needed to announce themselves on social media to become an instant hit. If your news isn't extraordinary enough for random people to spontaneously post, share and tweet it in the first place, how can it ever go far?

The news media deliberately over-rates social media

News media outlets, including TV, radio and print, gain a significant benefit crowd-sourcing live information, free of charge, from the public on social media. It is only logical that they will cheer on social media sites and give them regular attention. Have you noticed that whenever Facebook's publicity department makes an announcement, the media are quick to publish it ahead of more significant stories about social or economic issues that impact our lives? Why do you think the media puts Facebook up on a podium like this, ahead of all other industries, if the media aren't getting something out of it too?

The tail doesn't wag the dog

One particular example is the news media's fascination with Donald Trump's Twitter account. Some people have gone as far as suggesting that this billionaire could have simply parked his jet and spent the whole of 2016 at one of his golf courses sending tweets and he would have won the presidency anyway. Suggesting that Trump's campaign revolved entirely around Twitter is like suggesting the tail wags the dog.

The reality is different: Trump has been a prominent public figure for decades, both in the business and entertainment world. During his presidential campaign, he had at least 220 major campaign rallies attended by over 1.2 million people in the real world. Without this real-world organization and history, the Twitter account would have been largely ignored like the majority of Twitter accounts.

On the left of politics, the media have been just as quick to suggest that Bernie Sanders and Jeremy Corbyn have been supported by the "Facebook generation". This label is superficial and deceiving. The reality, again, is a grassroots movement that has attracted young people to attend local campaign meetings in pubs up and down the country. Getting people to get out and be active is key. Social media is incidental to their campaigns, not indispensable.

Real-world meetings, big or small, are immensely more powerful than a social media presence. Consider the Trump example again: if 100,000 people receive one of his tweets, how many even notice it in the non-stop stream of information we are bombarded with today? On the other hand, if 100,000 people bellow out a racist slogan at one of his rallies, is there any doubt that each and every one of those people is engaged with the campaign at that moment? If you could choose between 100 extra Twitter followers or 10 extra activists attending a meeting every month, which would you prefer?

Do we need this new definition of a Friend?

Facebook is redefining what it means to be a friend.

Is somebody who takes pictures of you and insists on sharing them with hundreds of people, tagging your face for the benefit of biometric profiling systems, really a friend?

If you want to find out what a real friend is and who your real friends really are, there is no better way to do so than deleting your Facebook and Twitter accounts and waiting to see who contacts you personally about meeting up in the real world.

If you look at a profile on Facebook or Twitter, one of the most prominent features is the number of friends or followers they have. Research suggests that humans can realistically cope with no more than about 150 stable relationships. Facebook, however, has turned Friending people into something like a computer game.

This research is also given far more attention than it deserves, though: the number of really meaningful friendships that one person can maintain is far smaller. Think about how many birthdays and spouses' names you can remember; that may be the number of real friendships you can manage well. In his book Busy, Tony Crabbe suggests between 10 and 20 friendships fall into this category, and that you should spend your time with these people rather than letting it be spread thinly across superficial Facebook "friends".

This same logic can be extrapolated to activism and marketing in its many forms: is it better for a campaigner or publicist to have fifty journalists following him on Twitter (where tweets are often lost in the blink of an eye) or three journalists whom he meets for drinks from time to time?

Facebook alternatives: the ultimate trap?

Numerous free, open source projects have tried to offer an equivalent to Facebook and Twitter. GNU social, Diaspora and identi.ca are some of the better-known examples.

Trying to persuade people to move from Facebook to one of these platforms rarely works. In most cases, Metcalfe's law suggests the size of Facebook will suck them back in like the gravity of a black hole.

To help people really beat these monstrosities, the most effective strategy is to help them live without social media, whether it is proprietary or not. The best way to convince them may be to give it up yourself and let them see how much you enjoy life without it.

Share your thoughts

The FSFE community has recently been debating the use of proprietary software and services. Please feel free to join the list and click here to reply on the thread.


F25 Updated Lives Available (4.11.6-201)

Posted by Corey ' Linuxmodder' Sheldon on June 27, 2017 12:00 PM

We in the Respins SIG are pleased to mention the latest series of Updated Live Respins carrying the 4.11.6-201 Kernel.  These respins use the livemedia-creator tool packaged in the default Fedora repo and following the guide here as well as using the scripts located here.

As always, they are available at http://tinyurl.com/live-respins2

For those needing a non-shortened url, it expands to https://dl.fedoraproject.org/pub/alt/live-respins/

Filed under: Community, F25, F25 Torrents, Fedora, Projects, PSAs, Volunteer

Cockpit Virtual Hackfest

Posted by Cockpit Project on June 27, 2017 10:25 AM

There’s a Cockpit Hackfest underway in Karlsruhe, Germany. We’re working on the virtual machine functionality in Cockpit.

Hackfest

That means interacting with libvirt. Although libvirt has remoting functionality, it has no API that's actually remotable and callable from Cockpit JavaScript code. So Lars and Pavel started working on a DBus wrapper for the API.

At the same time, Martin is working on making the current virsh-based access to libvirt more performant, so we aren't blocked waiting for the DBus wrapper to be done.

Lots of work was done understanding redux. The initial machines code in Cockpit was written using redux, and we needed to map its concept of models and state to the Cockpit way of storing state on the server, and to UI concepts like dialogs. Everyone was involved.

Andreas and Garrett have been working on designs for creating and editing virtual machines. Dominik has started implementing that code.

Marius worked on deletion of virtual machines, and already has a pull request open.

Stef worked on the integration tests for the virtual machine stuff and is booting nested VMs using nested images.

Hackfest

Wheeee.

virt-builder Debian 9 image available

Posted by Richard W.M. Jones on June 27, 2017 09:01 AM

Debian 9 (“Stretch”) was released last week and now it’s available in virt-builder, the fast way to build virtual machine disk images:

$ virt-builder -l | grep debian
debian-6                 x86_64     Debian 6 (Squeeze)
debian-7                 sparc64    Debian 7 (Wheezy) (sparc64)
debian-7                 x86_64     Debian 7 (Wheezy)
debian-8                 x86_64     Debian 8 (Jessie)
debian-9                 x86_64     Debian 9 (stretch)

$ virt-builder debian-9 \
    --root-password password:123456
[   0.5] Downloading: http://libguestfs.org/download/builder/debian-9.xz
[   1.2] Planning how to build this image
[   1.2] Uncompressing
[   5.5] Opening the new disk
[  15.4] Setting a random seed
virt-builder: warning: random seed could not be set for this type of guest
[  15.4] Setting passwords
[  16.7] Finishing off
                   Output file: debian-9.img
                   Output size: 6.0G
                 Output format: raw
            Total usable space: 3.9G
                    Free space: 3.1G (78%)

$ qemu-system-x86_64 \
    -machine accel=kvm:tcg -cpu host -m 2048 \
    -drive file=debian-9.img,format=raw,if=virtio \
    -serial stdio

Designing for scalability – a startup perspective

Posted by Rahul Bhalerao on June 27, 2017 07:05 AM
What is scale? Is it the number of customers a company has? Is it the number of products that are sold? Is it the amount of revenue the company makes? Is it the amount of infrastructure one has? Or is it the number of features the product has? Well, it can be all of this or none of this. When an organization talks about scale, it is not just a number. When an economist talks about it, again, it's not just a number. For an economist, scale is about doing more with less. Scale is about perspective, about the relationship between two or more variables that ultimately helps you generate optimum value for your efforts.

Consider this: let's say you make lollipops. Selling 10 lollipops, you earn $100 in revenue, of which $10 is profit and $90 is cost. When you increase production, you sell 100 lollipops and earn $1,000 in revenue, with $100 profit and $900 cost. Did you really benefit from the scale here? In absolute terms, of course. In terms of ratios, though, nothing has changed. Your costs are still 90% of your revenue; your cost per lollipop is still $9. Is this the scalability you would like to achieve for a sustainable business? From an economic perspective, there is zero gain from this scaling, as economies of scale have not been realized. What if your costs actually shoot up to $950? Then you have clearly not benefited from the scale. What if tomorrow the demand for lollipops shrinks suddenly and you end up selling only 50 of them? Would your costs still be at 90%, or would the extra infrastructure you bought for higher production push them to $800? This is a very simple example, but the point is that scalability is not just about the gigantic numbers we throw around; it is about the relationship between them and how you benefit the most from it.
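To make the ratio argument concrete, here is a quick check of those numbers (a throwaway Python sketch, using only the figures from the example above):

# Cost per unit and cost share for the lollipop example:
# scaling 10x changes the absolute numbers but not the ratios.
for units, revenue, cost in [(10, 100, 90), (100, 1000, 900)]:
    print(units, "lollipops: cost/unit =", cost / units,
          "cost share =", cost / revenue)
# 10 lollipops: cost/unit = 9.0 cost share = 0.9
# 100 lollipops: cost/unit = 9.0 cost share = 0.9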

Check out this cartoon from xkcd on what scale is not:
Source: xkcd.com/1737/


A good scalable system, whether it is manufacturing or IT, inherently must have two properties:
  1. The higher the scale (of production), the lower the cost per unit of produce must be.
  2. Scalability must be bidirectional, meaning it should be possible to scale upward for higher demand and downward for lower demand, with minimal impact on the cost per unit of production.

The first property comes from economics while the second comes from technology. This is what the entire cloud computing economy is based on. If this is not considered while designing a software system for the cloud, the result may be a futile exercise of following the herd onto the cloud without reaping the benefits of a truly scalable application and cloud infrastructure.

It is important to understand that scalability does not come as an off-the-shelf product; just hosting your site in the cloud won't make it scalable. True scalability lies in the design, and the design of not just the infrastructure but also the software will eventually determine the total scalability. Think of it in terms of web 2.0 practices: had there been no extensive use of Ajax methods and caching techniques, we would still be loading page after page, jamming the networks and overloading the servers, apart from tiring users with a 90s-style internet.

While the established big players have already learned the tricks of the trade through experience and past failures, and have evolved with and helped build the right ecosystem for computing scalability, most startups don't see the opportunity to do it right from the first step of implementation. Let's talk about the factors that affect scalability, and how a startup or new project can cope with them, in detail in further posts.

Producing coverage report for Haskell binaries

Posted by Alexander Todorov on June 27, 2017 07:00 AM

Recently I've started testing a Haskell application, and a question I find unanswered (or at least very poorly documented) is how to produce coverage reports for binaries.

Understanding HPC & cabal

hpc is the Haskell code coverage tool. It produces the following files:

  • .mix - module index file, contains information about tick boxes - their type and location in the source code;
  • .tix - tick index file aka coverage report;
  • .pix - program index file, used only by hpc trans.

An invocation of hpc report needs to know where to find the .mix files, in order to translate the coverage information back to source, and it needs the location (full path or relative to pwd) of the .tix file we want to report on.

cabal is the package management tool for Haskell. Among other things, it can be used to build your code, execute the test suite and produce the coverage report for you. cabal build will produce module information in dist/hpc/vanilla/mix and cabal test will store coverage information in dist/hpc/vanilla/tix!
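For example, with that layout a report for a test suite can be produced like this (a sketch only; mylib and spec are hypothetical module-directory and suite names):

hpc report --hpcdir=dist/hpc/vanilla/mix/mylib dist/hpc/vanilla/tix/spec/spec.tix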

A particular thing about Haskell is that you can only test code which can be imported, i.e. a library module. You can't test (via Hspec or HUnit) code which lives inside a file that produces a binary (e.g. Main.hs). However, you can still execute these binaries (e.g. invoke them from the shell) and they will produce a coverage report in the current directory (e.g. main.tix).

Putting everything together

  1. Using cabal build and cabal test build the project and execute your unit tests. This will create the necessary .mix files (including ones for binaries) and .tix files coming from unit testing;
  2. Invoke your binaries passing appropriate data and examining the results (e.g. compare the output to a known value). A simple shell or Python script could do the job;
  3. Copy the binary.tix file to dist/hpc/vanilla/tix/binary/binary.tix!

Produce coverage report with hpc:

hpc markup --hpcdir=dist/hpc/vanilla/mix/lib --hpcdir=dist/hpc/vanilla/mix/binary  dist/hpc/vanilla/tix/binary/binary.tix

Convert the coverage report to JSON and send it to Coveralls.io:

cabal install hpc-coveralls
~/.cabal/bin/hpc-coveralls --display-report tests binary

Example

Check out the haskell-rpm repository for an example. See job #45 where there is now coverage for the inspect.hs, unrpm.hs and rpm2json.hs files, producing binary executables. Also notice that in RPM/Parse.hs the function parseRPMC is now covered, while it was not covered in the previous job #42!

.travis.yml snippet
script:
  - ~/.cabal/bin/hlint .
  - cabal install --dependencies-only --enable-tests
  - cabal configure --enable-tests --enable-coverage --ghc-option=-DTEST
  - cabal build
  - cabal test --show-details=always

  # tests to produce coverage for binaries
  - wget https://s3.amazonaws.com/atodorov/rpms/macbook/el7/x86_64/efivar-0.14-1.el7.x86_64.rpm
  - ./tests/test_binaries.sh ./efivar-0.14-1.el7.x86_64.rpm

  # move .tix files in appropriate directories
  - mkdir ./dist/hpc/vanilla/tix/inspect/ ./dist/hpc/vanilla/tix/unrpm/ ./dist/hpc/vanilla/tix/rpm2json/
  - mv inspect.tix ./dist/hpc/vanilla/tix/inspect/
  - mv rpm2json.tix ./dist/hpc/vanilla/tix/rpm2json/
  - mv unrpm.tix ./dist/hpc/vanilla/tix/unrpm/

after_success:
  - cabal install hpc-coveralls
  - ~/.cabal/bin/hpc-coveralls --display-report tests inspect rpm2json unrpm

Thanks for reading and happy testing!

HC-SR04 ultrasonic sensor on Icaro

Posted by Neville A. Cross - YN1V on June 27, 2017 02:44 AM

To connect an HC-SR04 ultrasonic sensor to an Icaro board we will use the P16 connector, as shown in the following image.

Optionally, we could take power from the analog sensor bank.

In Icaro Bloques we can go to Archivo -> Ejemplos (File -> Examples), choose the hc-sr04 folder, and open the example called ping.icr.

What this firmware does is display the distance in centimeters, as binary values, on the board's LED bar.

Icaro Bloques 1.0.8-3 and Pingüino Bootloader Version 4

Posted by Neville A. Cross - YN1V on June 27, 2017 01:57 AM

Icaro Bloques reaches version 1.0.8-3, and its main change is the use of version 4 of the Pingüino bootloader.

This bootloader is written in sdcc3 and, among its notable changes, it is smaller, leaving more room for our firmware.

The change was driven by serial communication problems on Fedora 24 and 25. In some cases the ACM device would not mount properly, and in some cases the PIC hung during initialization. What you saw was the PIC status light going off right after power-on, then blinking with a long off period and a brief flash.

The most notable novelty is the firmware execution and loading process. The board now starts executing the firmware at power-on, and only enters the loading routine when the reset button is pressed.

With the new bootloader the firmware loading process changes. The steps are as follows:

  1. Click the "Compilar" (Compile) button in Icaro Bloques to compile the firmware
  2. Click "Cargar" (Load) to send the firmware to the PIC
  3. Press the reset button on the board (the status LED blinks continuously)
  4. Click "Aceptar" (Accept) in the little lanzador.py window that shows the communication between the PC and the board
  5. Click OK in the notification with the result of the load

Loading is much faster.

Another advantage of the new bootloader is that the libraries have been reworked; for example, servo control is now more precise.

Although Icaro Bloques 1.0.8-3 defaults to communicating with version 4 of the bootloader, you can edit the file /home/user/.icaro/config.ini

[general]
turtlear = /usr/share/sugar/activities/TurtleBlocks.activity/turtleblocks.py
dir = firmware
sdcc = sdcc-sdcc

[icaro_config]
bootloader = v4

Just replace the number 4 with a number 2. With that change we are ready to run the program with PICs that still have the version 2 Pingüino bootloader.
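After the edit, the stanza would read:

[icaro_config]
bootloader = v2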

Pilas is not available in Icaro Bloques 1.0.8-3, because the new libraries for Bulk communication have not been worked on yet. Nevertheless, we chose to move forward because of the serious communication problems with the tortucaro firmware over CDC.

All systems go

Posted by Fedora Infrastructure Status on June 27, 2017 01:34 AM
New status good: Everything seems to be working. for services: COPR Build System, Fedora Infrastructure Cloud

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 26, 2017 09:25 PM
New status scheduled: Scheduled updates in progress for services: Fedora Infrastructure Cloud, COPR Build System

GSoC2017 (Fedora) — Week 3&4

Posted by Mandy Wang on June 26, 2017 05:28 PM

I went to Guizhou and Hunan in China for my post-graduation trip last week. I walked on the glass skywalk in Zhangjiajie, visited the Huangguoshu waterfalls and Fenghuang Ancient City, ate a lot of delicious food at the night market in Guiyang, and so on. I had a wonderful time there; welcome to China to experience these! (GNOME Asia, held in Chongqing in October, is a good choice; Chongqing is a big city with a lot of hot food and hot girls.)

The main work I did these days for GSoC was sorting out and detailing the steps for establishing the environment for Plinth in Fedora. I had achieved it in somewhat crude ways before, such as using some Debian packages directly, but now I will make these steps clearer, organize the useful information, and write it into the INSTALL file.

But my mentor and I hit a problem when I tried to run firstboot: I don't know which packages are needed to debug JS in Fedora. In other words, I want to find which packages in Fedora serve the same function as libjs-bootstrap, libjs-jquery and libjs-modernizr in Debian. If you know how to deal with this, please tell me; I'd be grateful.


Blender is on Flathub

Posted by Mathieu Bridon (bochecha) on June 26, 2017 01:00 PM

I have been maintaining Flatpak builds of Blender for some time now. In fact, at the time I started, Flatpak was still called XDG-App. :)

As of today, my Blender builds are now on Flathub, and I will stop updating my personal repository.

One of the benefits of moving it to Flathub is that it is a shared resource, where others can maintain the app so that it doesn't rely solely on me. Another is that Blender is now available not only on i386 and x86_64, but also on ARM and AArch64.

You can add the Flathub repository and install Blender from it as follows:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.blender.Blender

At this point you can run Blender from your favourite graphical environment, or from the command line:

$ flatpak run org.blender.Blender

Don't hesitate to let us know if you encounter any issue with this build that is not an upstream bug (compare with the upstream bundle).

Note: If you had previously installed Blender from my repository, you'll want to uninstall it first:

$ flatpak uninstall org.blender.Blender
$ flatpak remote-delete bochecha

Wildcard SAN certificates in FreeIPA

Posted by Fraser Tweedale on June 26, 2017 12:48 PM

In an earlier post I discussed how to make a certificate profile for wildcard certificates in FreeIPA, where the wildcard name appeared in the Subject Common Name (CN) (but not the Subject Alternative Name (SAN) extension). Apart from the technical details that post also explained that wildcard certificates are deprecated, why they are deprecated, and therefore why I was not particularly interested in pursuing a way to get wildcard DNS names into the SAN extension.

But, as was portended long ago (more than 15 years, when RFC 2818 was published), DNS name assertions via the CN field are deprecated, and some client software has finally removed CN name processing support. The Chrome browser is first off the rank, but it won't be the last!

Unfortunately, programs that have typically used wildcard certificates (hosting services/platforms, PaaS, and sites with many subdomains) are mostly still using wildcard certificates, and FreeIPA still needs to support these programs. As much as I would like to say "just use Let’s Encrypt / ACME!", it is not realistic for all of these programs to update in so short a time. Some may never be updated. So for now, wildcard DNS names in SAN is more than a "nice to have" – it is a requirement for a handful of valid use cases.

Configuration

Here is how to do it in FreeIPA. Most of the steps are the same as in the earlier post so I will not repeat them here. The only substantive difference is in the Dogtag profile configuration.

In the profile configuration, set the following directives (note that the key serverCertSet and the index 12 are indicative only; the index does not matter as long as it is different from the other profile policy components):

policyset.serverCertSet.12.constraint.class_id=noConstraintImpl
policyset.serverCertSet.12.constraint.name=No Constraint
policyset.serverCertSet.12.default.class_id=subjectAltNameExtDefaultImpl
policyset.serverCertSet.12.default.name=Subject Alternative Name Extension Default
policyset.serverCertSet.12.default.params.subjAltNameNumGNs=2
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_0=true
policyset.serverCertSet.12.default.params.subjAltExtType_0=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_0=*.$request.req_subject_name.cn$
policyset.serverCertSet.12.default.params.subjAltExtGNEnable_1=true
policyset.serverCertSet.12.default.params.subjAltExtType_1=DNSName
policyset.serverCertSet.12.default.params.subjAltExtPattern_1=$request.req_subject_name.cn$

Also be sure to add the index to the directive containing the list of profile policies:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,11,12

This configuration will cause two SAN DNSName values to be added to the certificate – one using the CN from the CSR, and the other using the CN from the CSR preceded by a wildcard label.
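For example (a hypothetical certificate; app.example.com stands in for whatever CN was submitted in the CSR), the issued certificate would contain:

Subject: CN=app.example.com
X509v3 Subject Alternative Name:
    DNS:*.app.example.com, DNS:app.example.com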

Finally, be aware that because the subjectAltNameExtDefaultImpl component adds the SAN extension to a certificate, it conflicts with the userExtensionDefault component when configured to copy the SAN extension from a CSR to the new certificate. This profile component will have a configuration like the following:

policyset.serverCertSet.11.constraint.class_id=noConstraintImpl
policyset.serverCertSet.11.constraint.name=No Constraint
policyset.serverCertSet.11.default.class_id=userExtensionDefaultImpl
policyset.serverCertSet.11.default.name=User Supplied Extension Default
policyset.serverCertSet.11.default.params.userExtOID=2.5.29.17

Again the numerical index is indicative only, but the OID is not; 2.5.29.17 is the OID for the SAN extension. If your starting profile configuration contains the same directives, remove them from the configuration, and remove the index from the policy list too:

policyset.serverCertSet.list=1,2,3,4,5,6,7,8,9,10,12

Discussion

The profile containing the configuration outlined above will issue certificates with a wildcard DNS name in the SAN extension, alongside the DNS name from the CN. Mission accomplished; but note the following caveats.

This configuration cannot contain the userExtensionDefaultImpl component, which copies the SAN extension from the CSR to the final certificate if present in the CSR, because any CSR that contains a SAN extension would cause Dogtag to attempt to add a second SAN extension to the certificate (this is an error). It would be better if the conflicting profile components somehow "merged" the SAN values, but this is not their current behaviour.

Because we are not copying the SAN extension from the CSR, any SAN extension in the CSR gets ignored by Dogtag – but not by FreeIPA; the FreeIPA CSR validation machinery always fully validates the subject alternative names it sees in a CSR, regardless of the Dogtag profile configuration.

If you work on software or services that currently use wildcard certificates please start planning to move away from this. CN validation was deprecated for a long time and is finally being phased out; wildcard certificates are also deprecated (RFC 6125) and they too may eventually be phased out. Look at services and technologies like Let’s Encrypt (a free, automated, publicly trusted CA) and ACME (the protocol that powers it) for acquiring all the certificates you need without administrator or operator intervention.

What's the bug in this pseudo-code

Posted by Alexander Todorov on June 26, 2017 10:30 AM

Rails Girls Vratsa sticker

This is one of the stickers for the second edition of Rails Girls Vratsa which was held yesterday. Let's explore some of the bug proposals submitted by the Bulgarian QA group:

  1. sad() == true is ugly
  2. sad() is not very nice, better make it if(isSad())
  3. use sadStop(), and even better - stopSad()
  4. there is an extra space character in beAwesome( )
  5. the last curly bracket needs to be on a new line

Lyudmil Latinov

My friend Lu describes what I would call style issues. The style he refers to is mostly Java-oriented, especially with naming things. In Ruby we would probably go with sad? instead of isSad. Style is important, and there are many tools to help us with it, but this will not cause a functional problem! While I'm at it, let me say the curly brackets are not the problem either. They are not valid in Ruby, but this is pseudo-code, and they also fall into the style category.

The next interesting proposal comes from Tsveta Krasteva. She examines the possibility of sad() returning an object or nil instead of a boolean value. Her first question was whether the if statement would still work, and the answer is yes. In Ruby everything is an object and every object can be compared to true and false. See Alan Skorkin's blog post on the subject.

Then Tsveta says the answer is to use sad().stop(), with the warning that it may return nil. In this context the sad() method returns an object indicating that the person is feeling sad. If the method returns nil, the person is feeling OK.

example by Tsveta
class Csad
  def stop()
    print("stop\n");
  end
end

def sad()
  print("sad\n");
  Csad.new();
end

def beAwesome()
  print("beAwesome\n");
end

# notice == true was removed
if(sad())
  print("Yes, I am sad\n");
  sad.stop();
  beAwesome( );
end

While this is coming closer to a functioning solution, something about it is bugging me. In the if statement the developer has typed more characters than required (== true). This seems unlikely to me, but it is possible with less experienced developers. The other issue is that we are using an object (of class Csad) to represent internal state in the system under test. There is one method to return the state (sad()) and another to alter the state (Csad.stop()). The two methods don't operate on the same object! Not a very strong OOP design. On top of that we have to call the method twice, first in the if statement and a second time in its body, which may have unwanted side effects. It is best to assign the return value to a variable instead.

IMO if we are to use this OOP approach the code should look something like:

class Person
  def sad?()
  end

  def stopBeingSad()
  end

  def beAwesome()
  end
end

p = Person.new
if p.sad?
    p.stopBeingSad
    p.beAwesome
end

Let me return to assuming we don't use classes here. The first obvious mistake is the space in sad stop();, first spotted by Peter Sabev*. His proposal, backed by others, is to use sad.stop(). However, they didn't take my hint asking: what is the return value of sad()?

If sad() returns a boolean, we'll get undefined method 'stop' for true:TrueClass (NoMethodError)! The same would happen if sad() returned nil, although in that case we skip the if block entirely.
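A minimal sketch of that failure, assuming a sad() that simply returns a boolean:

def sad()
  true
end

if sad()
  sad.stop()  # NoMethodError: undefined method `stop' for true:TrueClass
end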

In Ruby we are allowed to skip parentheses when calling a method, as I've shown above. If we ignore this fact for a second, then sad?.stop() would mean: execute the method named stop() which is a member of the sad? variable, which is of type method! Again, methods don't have an attribute named stop!

The last two paragraphs describe the semantic/functional mistake I see in this code. The only way for it to work is to use an OOP variant, which is further away from what the existing clues give us.

Note: the variant sad? stop() is syntactically correct. It means: call the function sad? with, as its parameter, the result of calling the method stop(). Depending on the outer scope of this program, that may or may not be correct (e.g. if stop is defined, sad? accepts optional parameters, and sad? maintains global state).

Thanks for reading and happy testing!

Slice of Cake #11

Posted by Brian "bex" Exelbierd on June 26, 2017 09:00 AM

A slice of cake

In the last two weeks as FCAIC I:

  • Continued working on Flock, including organizing the budget so we could get some vendor approvals done. If you’re interested in tracking the progress, keep an eye on the flock-planning email list.
  • Attended LinuxCon Beijing - an event report is in the queue, I promise!
  • Attended the Education Day held by the Red Hat Beijing office. This exciting event will also get an event report, but it was a great way to meet potential members of the Fedora Community in Beijing.

A la Mode

  • Hahahaha with as much time as I spent in planes and stuck in airports I didn’t get to do anything for me.

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Working from Gdansk, Poland from 3-4 July. Wanna get a coffee? We can pretend I am practicing my Polish!
  • LATAM Organizational FAD from 13-15 July in Cusco, Peru.
  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Upcoming Fedora Atomic Host lifecycle changes

Posted by Fedora Magazine on June 26, 2017 08:00 AM

The Fedora Project ships new Fedora Server and Workstation releases at roughly six-month intervals. It then maintains each release for around thirteen months. So Fedora N is supported by the community until one month after the release of Fedora N+2. Since the first Fedora Atomic Host shipped, as part of Fedora 21, the project has maintained separate ostree repositories for both active Fedora releases. For instance, there are currently trees available for Fedora Atomic 25 and Fedora Atomic 24.

Fedora Atomic sets out to be a particularly fast-moving branch of Fedora. It provides releases every two weeks and updates to key Atomic Host components such as Docker and Kubernetes. The release moves more quickly than one might expect from the other releases of Fedora.

Due in part to this faster pace, the Fedora Atomic Working Group has always focused its testing and integration efforts most directly on the latest stable release. The group encourages users of the older release to rebase to the newer tree as soon as possible. Releases older than the current tree are supported only on a best effort basis. This means the ostree is updated, but there is no organized testing of older releases.

Upcoming changes

This will change with either the Fedora 26 to 27 or the 27 to 28 upgrade cycle (depending on readiness). The Fedora Atomic Working Group will then collapse Fedora Atomic into a single version. That release will track the latest stable Fedora branch. When a new stable version of Fedora is released, Fedora Atomic users will automatically shift to the new version when they install updates.

Traditional OS upgrades can be disruptive and error-prone. Thanks to the image-based technologies that Atomic Hosts use for system components (rpm-ostree) and for applications (Linux containers), upgrading an Atomic Host between major releases is like installing updates within a single release. In both scenarios, the system updates are applied by running an rpm-ostree command and rebooting, and rollback to the previous state is available in case something goes wrong. Applications running in containers are unaffected by the host upgrade or update.
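In practice, an update (and, if needed, a rollback) looks roughly like this sketch; the commands are standard rpm-ostree, with output omitted:

$ sudo rpm-ostree upgrade    # download and stage the new tree as a deployment
$ sudo systemctl reboot      # boot into the new deployment
$ sudo rpm-ostree rollback   # mark the previous deployment as default again if needed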

If you’d like to get involved in the Fedora Atomic Working Group, come talk to us in IRC in #fedora-cloud or #atomic on Freenode, or join the Atomic WG on Pagure.

Stuck in the Challenge

Posted by Julita Inca Chiroque on June 26, 2017 04:37 AM

My Sundays are reserved for the GNOME Peru Challenge 2017-1, and one key step toward success is getting an application running in order to fix a bug. For this, jhbuild and GNOME Builder are the way to achieve it.

Unfortunately, when we try to clone an app, missing libraries are the problem. These are some screenshots of our group's work so far.

Felipe Moreno with Test Builder on Fedora   Martin Vuelta with Polari on Arch Linux

Randy Real with Polari on Fedora

Ronaldo Cabezas with Gedit on Fedora

Julita Inca with Cheese on Fedora

I hope we can overcome this situation and post in the near future "A successful way to clone apps with GNOME Builder" 😉

  • Thanks to Randy Real for arranging a lab at UIGV today, and for his support.
  • This time, we had the honor of having with us a developer from Brazil called Thiago!

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: clone problem, clonning apps, fedora, Fedora + GNOME community, GNOME, gnome 3, GNOME bug, GNOME Builder, GNOME Peru Challenge, GNOME Peru Challenge 2017, Julita Inca, Julita Inca Chiroque

Learning to code in one’s own language

Posted by Sayamindu Dasgupta on June 26, 2017 04:00 AM

Millions of young people from around the world are learning to code. Often, during their learning experiences, these youth are using visual block-based programming languages like Scratch, App Inventor, and Code.org Studio. In block-based programming languages, coders manipulate visual, snap-together blocks that represent code constructs instead of textual symbols and commands that are found in more traditional programming languages.

The textual symbols used in nearly all non-block-based programming languages are drawn from English—consider “if” statements and “for” loops for common examples. Keywords in block-based languages, on the other hand, are often translated into different human languages. For example, depending on the language preference of the user, an identical set of computing instructions in Scratch can be represented in many different human languages:

Scratch code translated into English, Italian, Norwegian Bokmål, and German

Although my research with Benjamin Mako Hill focuses on learning, both Mako and I worked on local language technologies before coming back to academia. As a result, we were both interested in how the increasing translation of programming languages might be making it easier for non-English speaking kids to learn to code.

After all, a large body of education research has shown that early-stage education is more effective when instruction is in the language that the learner speaks at home. Based on this research, we hypothesized that children learning to code with block-based programming languages translated to their mother-tongues will have better learning outcomes than children using the blocks in English.

We sought to test this hypothesis in Scratch, an informal learning community built around a block-based programming language. We were helped by the fact that Scratch is translated into many languages and has a large number of learners from around the world.

To measure learning, we built on some of our own previous work and looked at learners’ cumulative block repertoires—similar to a code vocabulary. By observing a learner’s cumulative block repertoire over time, we can measure how quickly their code vocabulary is growing.
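As a rough illustration of the metric (our own sketch here, not the paper's actual code), the repertoire is simply the number of distinct block types used across all of a learner's projects so far:

# Cumulative block repertoire: distinct block types seen so far.
projects = [["move", "turn"], ["move", "say"], ["repeat", "if", "say"]]
seen = set()
for i, blocks in enumerate(projects, 1):
    seen.update(blocks)
    print("after project", i, "repertoire size =", len(seen))
# after project 1 repertoire size = 2
# after project 2 repertoire size = 3
# after project 3 repertoire size = 5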

Using this data, we compared the rate of growth of cumulative block repertoire between learners from non-English speaking countries using Scratch in English to learners from the same countries using Scratch in their local language. To identify non-English speakers, we considered Scratch users who reported themselves as coming from five primarily non-English speaking countries: Portugal, Italy, Brazil, Germany, and Norway. We chose these five countries because they each have one very widely spoken language that is not English and because Scratch is almost fully translated into that language.

Even after controlling for a number of factors like social engagement on the Scratch website, user productivity, and time spent on projects, we found that learners from these countries who use Scratch in their local language have a higher rate of cumulative block repertoire growth than their counterparts using Scratch in English. This faster growth was despite having a lower initial block repertoire. The graph below visualizes our results for two “prototypical” learners who start with the same initial block repertoire: one learner who uses the English interface, and a second learner who uses their native language.

Graph of our results

Our results are in line with what theories of education have to say about learning in one’s own language. Our findings also represent good news for designers of block-based programming languages who have spent considerable amounts of effort in making their programming languages translatable. It’s also good news for the volunteers who have spent many hours translating blocks and user interfaces.

Although we find support for our hypothesis, we should stress that our findings are both limited and incomplete. For example, because we focus on estimating the differences between Scratch learners, our comparisons are between kids who all managed to successfully use Scratch. Before Scratch was translated, kids with little working knowledge of English or the Latin script might not have been able to use Scratch at all. Because of translation, many of these children are now able to learn to code.

This blog-post and the work that it describes is a collaborative project with Benjamin Mako Hill. You can read our paper here. The paper was published in the ACM Learning @ Scale Conference. We also recently gave a talk about this work at the International Communication Association’s annual conference. We have received support and feedback from members of the Scratch team at MIT (especially Mitch Resnick and Natalie Rusk), as well as from Nathan TeBlunthuis at the University of Washington. Financial support came from the US National Science Foundation.

When in doubt, blame open source

Posted by Josh Bressers on June 26, 2017 12:54 AM
If you've not read my previous post on thought leadership, go do that now, this one builds on it. The thing that really kicked off my thinking on these matters was this article:

Security liability is coming for software: Is your engineering team ready?

The whole article is pretty silly, but the bit about liability and open source is the real treat. There's apparently some sort of special consideration when you use open source; we'll get back to that. Right now there is basically no liability of any sort when you use software, and I doubt there will be anytime soon. Liability laws are tricky, but the lawyers I've spoken with have been clear that software isn't currently covered in most instances. The whole article is basically nonsense in that respect. The people they interview set the stage for liability and responsibility, then seem to discuss how open source should be treated specially in this context.

Nothing is special: open source is no better or worse than closed source software. If you build something, why would open source carry more responsibility than closed source? It doesn't, of course; it's just an easy target to pick on. The real story is that we don't know how to deal with this problem. Open source is an easy boogeyman. It's getting picked on because we don't know where else to point the finger.

The real problem is that we don't know how to secure our software in an acceptable manner. Talking about liability and responsibility is fine; nobody is going to worry about security until they have to. Using open source as a discussion point in this conversation clouds it, though: it shifts the conversation from how we improve security to blaming something else for our problems. Open source is one of the tools we use to build our software, and it might be the most powerful tool we've ever had. Tools are never the problem in a broken system, even though they get blamed on a regular basis.

The conversation we must have revolves around incentives. There is no incentive to build secure software. Blaming open source or talking about responsibility are just attempts to skirt the real issue. We have to fix our incentives. Liability could be an incentive, regulation can be an incentive. User demand can be an incentive as well. Today the security quality of software doesn't seem to matter.

I'd like to end this saying we should make an effort to have more honest discussions about security incentives, but I don't think that will happen. As I mention in my previous blog post, our problem is a lack of leadership. Even if we fix security incentives, I don't see things getting much better under current leadership.

RetroFlix / PI Switch Followup

Posted by Mo Morsi on June 25, 2017 10:11 PM

I've been trying to dedicate some cycles to wrapping up the Raspberry PI entertainment center project mentioned a while back. I decided to abandon the PI Switch idea, as the original controller purchased for it just did not work properly (or should I say, only worked sporadically/intermittently). It being a cheap device bought online, it wasn't worth the effort to debug (funny enough, I can't find the device on Amazon anymore; perhaps other people were having issues...).

Not being able to find another suitable gamepad to use as the basis for a snap-together portable device, I bought a Rii wireless controller (which works great out of the box!) and dropped the project (also partly due to lack of personal interest). But the previously designed wall mount works great, and after a bit of work the PI now functions as a seamless media center.

Unfortunately to get it there, a few workarounds were needed. These are listed below (in no particular order).

  • To start off, increase your GPU memory. This will be needed to run games with any reasonable performance. It can be accomplished through the Raspberry PI configuration interface.

    Rpi setup1 Rpi setup2

    Here you can also overclock your PI if your model supports it (v3.0 does not, as evident from the screenshot, though there are workarounds).

  • If you are having trouble with the PI output resolution being too large / small for your TV, try adjusting the aspect ratio on your set. Previously mine was set to "theater mode", cutting off the edges of the device output; resetting it to normal resolved the issue.

    Rpi setup3 Rpi setup5 Rpi setup4
  • To get the Playstation SixAxis controller working via bluetooth required a few steps.
    • Unplug your playstation (since it will boot by default when the controller is activated)
    • On the PI, run
              sudo bluetoothctl
      
    • Start the controller and watch for a new device in the bluetoothctl output. Make note of the device id
    • Still in the bluetoothctl command prompt, run
              trust [deviceid]
      
    • In the Raspberry PI bluetooth menu, click 'make discoverable' (this can also be accomplished via the bluetoothctl command prompt with the discoverable on command) Rpi setup6
    • Finally restart the controller and it should autoconnect!
  • To install recent versions of Ruby you will need to install and set up rbenv. The current version in the RPI repos is too old to be of use (of interest for RetroFlix, see below)
  • Using mednafen requires some config changes, notably to disable OpenGL output and enable SDL. Specifically, change the following line from
          video.driver opengl
    
    To
          video.driver sdl
    
    Unfortunately, after a lot of effort, I was not able to get mupen64 working (while most games start, as confirmed by audio cues, all have black / blank screens)... so no N64 games on the PI for now ☹
  • But who needs N64 when you have Nethack! ♥‿♥ (the most recent version of which works flawlessly). In addition to the small tweaks needed to compile the latest version on Linux, in order to get the awesome Nevanda tileset working, update include/config.h to enable XPM graphics:
        -/* # define USE_XPM */ /* Disable if you do not have the XPM library */
        +#define USE_XPM  /* Disable if you do not have the XPM library */
    
    Once installed, edit your nh/install/games/lib/nethackdir/NetHack.ad config file (in ~ if you installed nethack there) to reference the new tileset:
        -NetHack.tile_file: x11tiles
        +NetHack.tile_file: /home/pi/Downloads/Nevanda.xpm
    

Finally, RetroFlix received some tweaking & love. Most changes were visual optimizations and eye candy (including some nice retro fonts and colors), though workers were also added so the actual downloads could be performed without blocking the UI. Overall it's simple and works great, a perfect portal to work on those high scores!

That's all for now, look for some more updates on the ReFS front in the near future!