Fedora People

All systems go

Posted by Fedora Infrastructure Status on May 21, 2018 08:22 PM
Service 'Pagure' now has status: good: Everything seems to be working.

Heroes of Fedora (HoF) – F28 Beta

Posted by Fedora Community Blog on May 21, 2018 10:35 AM

It’s time for some stats concerning Fedora 28 Beta!

Welcome back to another installment of Heroes of Fedora, where we’ll look at the stats concerning the testing of Fedora 28 Beta. The purpose of Heroes of Fedora is to provide a summary of testing activity on each milestone release of Fedora. So, without further ado, let’s get started!


Updates Testing

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Testers: 166
Comments1: 1467

Name Updates commented
Pete Walter (pwalter) 219
Dmitri Smirnov (cserpentis) 148
Filipe Rosset (filiperosset) 111
Martti Kuosmanen (kuosmanen) 90
Björn Esser (besser82) 88
Alexander Kurtakov (akurtakov) 76
Hans Müller (cairo) 71
Peter Smith (smithp) 70
Charles-Antoine Couret (renault) 55
sassam 54
Nie Lili (lnie) 41
Piotr Drąg (piotrdrag) 37
Daniel Lara Souza (danniel) 32
Adam Williamson (adamwill) 24
anonymous 22
lruzicka 15
Randy Barlow (bowlofeggs) 12
Eugene Mah (imabug) 12
Parag Nemade (pnemade) 10
Peter Robinson (pbrobinson) 9
František Zatloukal (frantisekz) 9
Matthias Runge (mrunge) 9
Paul Whalen (pwhalen) 8
Tom Sweeney (tomsweeneyredhat) 7
Alessio Ciregia (alciregi) 7
mastaiza 6
Zbigniew Jędrzejewski-Szmek (zbyszek) 5
bluepencil 5
Sérgio Monteiro Basto (sergiomb) 5
sobek 5
Héctor H. Louzao P. (hhlp) 5
zdenek 4
Itamar Reis Peixoto (itamarjp) 4
Neal Gompa (ngompa) 4
leigh123linux 4
Nathan (nathan95) 4
Wolfgang Ulbrich (raveit65) 3
Luis Roca (roca) 3
Rory Gallagher (earthwalker) 3
Jiri Eischmann (eischmann) 3
Dusty Mabe (dustymabe) 3
Miro Hrončok (churchyard) 3
Christian Kellner (gicmo) 3
Kevin Fenzi (kevin) 3
Colin Walters (walters) 3
…and also 121 other reporters who created fewer than 3 reports each, but 153 reports combined!

1 If a person provides multiple comments on a single update, it is counted as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Testers: 23
Reports: 991
Unique referenced bugs: 35

Name Reports submitted Referenced bugs1
pwhalen 367 1520580 1553488 1565217 1566593 (4)
lruzicka 170 1558027 1566566 1569411 on bare metal (4)
alciregi 108 1520580 1536356 1541868 1552130 1553935 1554072 1554075 1556951 1557472 1561072 1564784 1568119 (12)
tablepc 56 1560314 (1)
lnie 51 1557655 1557659 (2)
coremodule 34
satellit 23 1519042 1533310 1554996 1558671 1561304 (5)
lbrabec 22
tenk 20 1539499 1561304 1562024 (3)
sumantrom 20
kparal 18 1555752 1557472 1560738 1562087 (4)
frantisekz 17 1561115 (1)
sgallagh 15
table1pc 15
adamwill 12 1560738 1561768 (2)
sinnykumari 10
fab 7
siddharthvipul1 6
pschindl 6
kevin 5
jdoss 4
mohanboddu 4
hobbes1069 1 1561284 (1)

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Reporters: 320
New reports: 1918

Name Reports submitted1 Excess reports2 Accepted blockers3
Fedora Release Engineering 1104 47 (4%) 0
Lukas Ruzicka 31 6 (19%) 2
lnie 30 9 (30%) 0
Adam Williamson 25 0 (0%) 7
Alessio 22 8 (36%) 0
Chris Murphy 18 1 (5%) 0
Heiko Adams 16 2 (12%) 0
Florian Weimer 12 0 (0%) 0
mastaiza 12 0 (0%) 0
ricky.tigg at gmail.com 11 3 (27%) 0
Juan Orti 10 0 (0%) 0
Menanteau Guy 10 6 (60%) 0
Christian Heimes 9 1 (11%) 0
Daniel Mach 9 0 (0%) 0
Stephen Gallagher 8 0 (0%) 1
Hayden 8 0 (0%) 0
Jared Smith 8 1 (12%) 0
Joseph 8 0 (0%) 0
pmkellly at frontier.com 8 1 (12%) 0
René Genz 8 1 (12%) 0
Paul Whalen 7 0 (0%) 1
Anass Ahmed 7 0 (0%) 0
František Zatloukal 7 0 (0%) 0
Hedayat Vatankhah 7 0 (0%) 0
Leslie Satenstein 7 0 (0%) 0
Miro Hrončok 7 0 (0%) 0
Randy Barlow 7 0 (0%) 0
shirokuro005 at gmail.com 7 0 (0%) 0
Andrey Motoshkov 6 0 (0%) 0
Ankur Sinha (FranciscoD) 6 0 (0%) 0
J. Haas 6 0 (0%) 0
Jens Petersen 6 0 (0%) 0
John Reiser 6 0 (0%) 0
Langdon White 6 1 (16%) 0
Martin Pitt 6 0 (0%) 0
Merlin Mathesius 6 1 (16%) 0
Mikhail 6 0 (0%) 0
Stephen 6 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 6 0 (0%) 0
Kamil Páral 5 1 (20%) 1
Ali Akcaagac 5 0 (0%) 0
Parag Nemade 5 0 (0%) 0
Peter Robinson 5 2 (40%) 0
sumantro 5 0 (0%) 0
Christian Kellner 4 0 (0%) 0
dac.override at gmail.com 4 0 (0%) 0
Dawid Zamirski 4 0 (0%) 0
Jiri Eischmann 4 0 (0%) 0
John Dennis 4 1 (25%) 0
Kalev Lember 4 0 (0%) 0
Milan Zink 4 0 (0%) 0
Miroslav Suchý 4 1 (25%) 0
Petr Pisar 4 0 (0%) 0
satellitgo at gmail.com 4 1 (25%) 0
sedrubal 4 0 (0%) 0
Steven Haigh 4 0 (0%) 0
Dusty Mabe 3 0 (0%) 1
Bhushan Barve 3 0 (0%) 0
Daniel Rindt 3 0 (0%) 0
deadrat 3 0 (0%) 0
Gwendal 3 0 (0%) 0
Henrique Montemor Junqueira 3 0 (0%) 0
Joachim Frieben 3 0 (0%) 0
luke_l at o2.pl 3 0 (0%) 0
Matías Zúñiga 3 0 (0%) 0
Michel Normand 3 0 (0%) 0
Mike FABIAN 3 0 (0%) 0
Mirek Svoboda 3 0 (0%) 0
Nathanael Noblet 3 1 (33%) 0
Quentin Tayssier 3 0 (0%) 0
sebby2k 3 0 (0%) 0
Vadim 3 0 (0%) 0
Vadim Raskhozhev 3 0 (0%) 0
Vladimir Benes 3 0 (0%) 0
Yaakov Selkowitz 3 0 (0%) 0
yucef sourani 3 0 (0%) 0
Zdenek Chmelar 3 1 (33%) 0
…and also 243 other reporters who created fewer than 3 reports each, but 293 reports combined!

1 The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
2 Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
3 This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.


The post Heroes of Fedora (HoF) – F28 Beta appeared first on Fedora Community Blog.

My talk from the RISC-V workshop in Barcelona

Posted by Richard W.M. Jones on May 21, 2018 09:39 AM

Video: https://www.youtube.com/watch?v=HxbpJzU2gkw

Audacity quick tip: quickly remove background noise

Posted by Fedora Magazine on May 21, 2018 08:00 AM

When recording sounds on a laptop — say for a simple first screencast — many users typically use the built-in microphone. However, these small microphones also capture a lot of background noise. In this quick tip, learn how to use Audacity in Fedora to quickly remove the background noise from audio files.

Installing Audacity

Audacity is an application in Fedora for mixing, cutting, and editing audio files. It supports a wide range of formats out of the box on Fedora — including MP3 and OGG. Install Audacity from the Software application.

If the terminal is more your speed, use the command:

sudo dnf install audacity

Import your Audio, sample background noise

After installing Audacity, open the application, and import your sound using the File > Import menu item. This example uses a sound bite from freesound.org to which noise was added:

Audio: https://ryanlerch.fedorapeople.org/noise.ogg

Next, take a sample of the background noise to be filtered out. With the tracks imported, select an area of the track that contains only the background noise. Then choose Effect > Noise Reduction from the menu, and press the Get Noise Profile button.

Filter the Noise

Next, select the area of the track you want to filter the noise from. Do this either by selecting with the mouse, or by pressing Ctrl + A to select the entire track. Finally, open the Effect > Noise Reduction dialog again, and click OK to apply the filter.

Additionally, play around with the settings until your tracks sound better. Here is the original file again, followed by the noise reduced track for comparison (using the default settings):

Audio: https://ryanlerch.fedorapeople.org/sidebyside.ogg

Episode 97 - Automation: Humans are slow and dumb

Posted by Open Source Security Podcast on May 20, 2018 11:18 PM
Josh and Kurt talk about the security of automation as well as automating security. The only way automation will really work long term is full automation; humans can't be trusted enough to do things right.

Podcast audio: https://html5-player.libsyn.com/embed/episode/id/6584004/

Show Notes

A new sync primitive in golang

Posted by James Just James on May 20, 2018 10:00 PM
I’ve been working on lots of new stuff in mgmt, and I had a synchronization problem that needed solving… Long story short, I built it into a piece of re-usable functionality, exactly like you might find in the sync package. For details and examples, please continue reading… The problem: I want to multicast a signal to an arbitrary number of goroutines. As you might already know, this can be done with a chan struct{}.

All Systems Go! 2018 CfP Open

Posted by Lennart Poettering on May 20, 2018 10:00 PM

<large>The All Systems Go! 2018 Call for Participation is Now Open!</large>

The Call for Participation (CFP) for All Systems Go! 2018 is now open. We’d like to invite you to submit your proposals for consideration to the CFP submission site.


The CFP will close on July 30th. Notification of acceptance and non-acceptance will go out within 7 days of the closing of the CFP.

All topics relevant to foundational open-source Linux technologies are welcome. In particular, however, we are looking for proposals including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • BPF and eBPF filtering
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome, as long as they have a clear and direct relevance for user-space.

For more information please visit our conference website!

Mastering en Bash ~ Symbolic or hard links, and aliases

Posted by Alvaro Castillo on May 20, 2018 04:00 PM

Shorten and you shall conquer

At Echemos un bitstazo we know that’s not really how the saying goes, but it is true that if we shorten things a lot, it will be easier to remember where everything is, using links for files and directories, and aliases for very long commands.


Aliases let us reduce the length of a statement we want to run on our system, shortening it to a single word. For example, if we want to access a frequently used directory from the termin...

F28-20180515 updated Lives released

Posted by Ben Williams on May 20, 2018 12:21 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated 28 Live ISOs, carrying the 4.16.8-300 kernel.

This set of updated ISOs will save about 620+ MB of updates after a new install.

Build Directions: https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=28&build=FedoraRespin-28-updates-20180515.0&groupid=1

With this release we will not be producing F27 updated ISOs, except by special request.

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle and Southern_Gentlem.

As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

Mastering en Bash - Searching for files and directories

Posted by Alvaro Castillo on May 19, 2018 08:11 PM

In search of Wally

Who wouldn’t be interested in finding this much-sought whale in the middle of such a big ocean? Well, not us, to be honest; we’d rather look for other things, like files and directories on our system. For that, we’ll use the commands find(1), locate(1), whois(1), whereis(1).


Unlike locate(1), find(1) is a command that searches in real time and has a great many additional features, such as filtering by name, executable type, date...

Get started with Apache Cassandra on Fedora

Posted by Fedora Magazine on May 18, 2018 12:26 PM

NoSQL databases are every bit as popular today as more conventional, relational ones. One of the most popular NoSQL systems is Apache Cassandra. It’s designed to deal with big data, and can be scaled across large numbers of servers. This makes it resilient and highly available.

This package is relatively new in Fedora, having been introduced in Fedora 26. The following article is a short tutorial on setting up Cassandra on Fedora for a development environment. Production deployments should use a different setup to harden the service.

Install and configure Cassandra

Fedora’s stable repositories split the database into several packages: the client tools are in the cassandra package, the common library (required by both client and server) is in the cassandra-java-libs package, and the most important part, the daemon, is in the cassandra-server package. More supporting packages can be listed by running the following command in a terminal.

dnf list cassandra\*

First, install and start the service:

$ sudo dnf install cassandra cassandra-server
$ sudo systemctl start cassandra

To enable the service to automatically start at boot time, run:

$ sudo systemctl enable cassandra

Finally, test the server initialization using the client:

$ cqlsh
Connected to Test Cluster at
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE k1 WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
cqlsh> USE k1;
cqlsh:k1> CREATE TABLE users (user_name varchar, password varchar, gender varchar, PRIMARY KEY (user_name));
cqlsh:k1> INSERT INTO users (user_name, password, gender) VALUES ('John', 'test123', 'male');
cqlsh:k1> SELECT * from users;

 user_name | gender | password
      John |   male |  test123

(1 rows)

To configure the server, edit the file /etc/cassandra/cassandra.yaml. For more information about how to change the configuration, see the upstream documentation.

Controlling access with users and passwords

By default, authentication is disabled. To enable it, follow these steps:

  1. By default, the authenticator option is set to AllowAllAuthenticator. Change the authenticator option in the cassandra.yaml file to PasswordAuthenticator:
authenticator: PasswordAuthenticator
  2. Restart the service:
$ sudo systemctl restart cassandra
  3. Start cqlsh using the default superuser name and password:
$ cqlsh -u cassandra -p cassandra
  4. Create a new superuser:
cqlsh> CREATE ROLE <new_super_user> WITH PASSWORD = '<some_secure_password>' 
    AND SUPERUSER = true 
    AND LOGIN = true;
  5. Log in as the newly created superuser:
$ cqlsh -u <new_super_user> -p <some_secure_password>
  6. The superuser cannot be deleted. To neutralize the account, change the password to something long and incomprehensible, and alter the user’s status to NOSUPERUSER:
cqlsh> ALTER ROLE cassandra WITH PASSWORD='SomeNonsenseThatNoOneWillThinkOf'
    AND SUPERUSER=false;
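Step 1 can also be scripted. Below is a hedged sketch that assumes the stock, unmodified cassandra.yaml shipped by the package (i.e. the default AllowAllAuthenticator line is still present):

```shell
# Flip the authenticator setting in place, then restart the service.
# Assumes the default cassandra.yaml as shipped by the Fedora package.
sudo sed -i 's/^authenticator: AllowAllAuthenticator/authenticator: PasswordAuthenticator/' /etc/cassandra/cassandra.yaml
sudo systemctl restart cassandra
```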

Enabling remote access to the server

Edit the /etc/cassandra/cassandra.yaml file, and change the following parameters:

listen_address: external_ip
rpc_address: external_ip
seed_provider/seeds: "<external_ip>"

Then restart the service:

$ sudo systemctl restart cassandra

Other common configuration

There are quite a few more common configuration parameters. For instance, to set the cluster name, which must be consistent for all nodes in the cluster:

cluster_name: 'Test Cluster'

The data_file_directories option sets the directory where the service writes data. The default, used if the option is unset, is shown below. If possible, set this to a disk used only for storing Cassandra data.

data_file_directories:
    - /var/lib/cassandra/data

To set the type of disk used to store data (SSD or spinning):

disk_optimization_strategy: ssd|spinning

Running a Cassandra cluster

One of the main features of Cassandra is the ability to run in a multi-node setup. A cluster setup brings the following benefits:

  • Fault tolerance: Automatically replicates data to multiple nodes for fault-tolerance. Also, it supports replication across multiple data centers. You can replace failed nodes with no downtime.
  • Decentralization: There are no single points of failure, no network bottlenecks, and every node in the cluster is identical.
  • Scalability & elasticity: Can run thousands of nodes with petabytes of data. Read and write throughput both increase linearly as new machines are added, with no downtime or interruption to applications.

The following sections describe how to set up a simple two-node cluster.

Clearing existing data

First, if the server is running now or has ever run before, you must delete all the existing data (make a backup first). This is because all nodes must have the same cluster name, and it’s better to choose one different from the default ‘Test Cluster’.

Run the following commands on each node:

$ sudo systemctl stop cassandra
$ sudo rm -rf /var/lib/cassandra/data/system/*

If you deploy a large cluster, you can do this via automation using Ansible.
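For a handful of nodes, a plain ssh loop does the job too. A sketch, assuming placeholder host names cassandra-node1 and cassandra-node2 and passwordless sudo on each node:

```shell
# Stop the service and wipe the system keyspace data on each node.
for node in cassandra-node1 cassandra-node2; do
  ssh "$node" 'sudo systemctl stop cassandra && sudo rm -rf /var/lib/cassandra/data/system/*'
done
```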

Configuring the cluster

To set up the cluster, edit the main configuration file /etc/cassandra/cassandra.yaml and modify these parameters:

  • cluster_name: Name of your cluster.
  • num_tokens: Number of virtual nodes within a Cassandra instance. This option partitions the data and spreads the data throughout the cluster. The recommended value is 256.
  • seeds: Comma-delimited list of the IP addresses of each node in the cluster.
  • listen_address: The IP address or hostname the service binds to for connecting to other nodes. It defaults to localhost and needs to be changed to the IP address of the node.
  • rpc_address: The listen address for client connections (CQL protocol).
  • endpoint_snitch: Set to a class that implements the IEndpointSnitch. Cassandra uses snitches to locate nodes and route requests. The default is SimpleSnitch, but for this exercise, change it to GossipingPropertyFileSnitch which is more suitable for production environments:
    • SimpleSnitch: Used for single-datacenter deployments or single-zone in public clouds. Does not recognize datacenter or rack information. It treats strategy order as proximity, which can improve cache locality when disabling read repair.
    • GossipingPropertyFileSnitch: Recommended for production. The rack and datacenter for the local node are defined in the cassandra-rackdc.properties file and propagate to other nodes via gossip.
  • auto_bootstrap: This parameter is not present in the configuration file, so add it and set it to false. When true (the default), it makes new (non-seed) nodes automatically migrate the right data to themselves; for a fresh cluster with no data yet, it can be disabled.

Configuration files for a two-node cluster follow.

Node 1:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<node1_ip>,<node2_ip>"
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Node 2:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<node1_ip>,<node2_ip>"
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Starting the cluster

The final step is to start each instance of the cluster. Start the seed instances first, then the remaining nodes.

$ sudo systemctl start cassandra

Checking the cluster status

Finally, you can check the cluster status with the nodetool utility:

$ sudo nodetool status

Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  <node1_ip>   147.48 KB  256          ?       f50799ee-8589-4eb8-a0c8-241cd254e424  rack1
UN  <node2_ip>   139.04 KB  256          ?       54b16af1-ad0a-4288-b34e-cacab39caeec  rack1

Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

Cassandra in a container

Linux containers are becoming more popular. You can find a Cassandra container image on Docker Hub under the name centos/cassandra-3-centos7.


It’s easy to start a container for this purpose without touching the rest of the system. First, install and run the Docker daemon:

$ sudo dnf install docker
$ sudo systemctl start docker

Next, pull the image:

$ sudo docker pull centos/cassandra-3-centos7

Now prepare a directory for data:

$ sudo mkdir data
$ sudo chown 143:143 data

Finally, start the container with a few arguments. The container uses the prepared directory to store data, and creates a user and database.

$ docker run --name cassandra -d -e CASSANDRA_ADMIN_PASSWORD=secret -p 9042:9042 -v `pwd`/data:/var/opt/rh/sclo-cassandra3/lib/cassandra:Z centos/cassandra-3-centos7

Now you have the service running in a container while storing data into the data directory in the current working directory. If the cqlsh client is not installed on your host system, run the one provided by the image with the following command:

$ docker exec -it cassandra 'bash' -c 'cqlsh '`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cassandra`' -u admin -p secret'


The Cassandra maintainers in Fedora seek co-maintainers to help keep the package fresh on Fedora. If you’d like to help, simply send them an email.

Photo by Glen Jackson on Unsplash.

GIMP 2.10.2 released. How to install it on Fedora or Ubuntu

Posted by Luca Ciavatta on May 18, 2018 10:00 AM

The long-awaited GIMP 2.10.2 is finally here! Discover how to install it on the Fedora or Ubuntu Linux distributions. GIMP 2.10.0 was a huge release, containing the result of 6 long years of work (GIMP 2.8 was released almost exactly 6 years ago!) by a small but dedicated core of contributors. GIMP 2.10.2 is out with 44 bugfixes and some new features, such as HEIF file format support. A Windows installer and a flatpak for Linux are available, and a Mac OS X version is coming in the near future.

So, let’s see how to install GIMP on Fedora or Ubuntu.


<figure class="wp-caption alignright" id="attachment_717" style="width: 950px"><figcaption class="wp-caption-text">GIMP 2.10.0 brings an updated user interface and initial HiDPI support</figcaption></figure>

GIMP 2.10.0 released. Really a big list of notable changes

GIMP, the GNU Image Manipulation Program, is a cross-platform image editor available for GNU/Linux, OS X, Windows, and more operating systems. It is free software; you can change its source code and distribute your changes.

Whether you are a graphic designer, photographer, illustrator, or scientist, GIMP provides you with sophisticated tools to get your job done. You can further enhance your productivity with GIMP thanks to many customization options and 3rd party plugins.

GIMP provides the tools needed for high-quality image manipulation. From retouching to restoring to creative composites, the only limit is your imagination. Here, you can find a glance at the most notable features from the official site:

  • Image processing nearly fully ported to GEGL, allowing high bit depth processing, multi-threaded and hardware accelerated pixel processing and more.
  • Color management is a core feature now, most widgets and preview areas are color-managed.
  • Many improved tools, and several new and exciting tools, such as the Warp transform, the Unified transform, and the Handle transform tools.
  • On-canvas preview for all filters ported to GEGL.
  • Improved digital painting with canvas rotation and flipping, symmetry painting, MyPaint brush support…
  • Support for several new image formats added (OpenEXR, RGBE, WebP, HGT), as well as improved support for many existing formats (in particular more robust PSD importing).
  • Metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
  • Basic HiDPI support: automatic or user-selected icon size.


GIMP 2.10.2 released. With 44 bugfixes and some new features

It’s barely been a month since the dev team released GIMP 2.10.0, and the first bugfix release, 2.10.2, is already here. Its main purpose is fixing the various bugs and issues that were to be expected after the 2.10.0 release.

For a complete list of changes please see Overview of Changes from GIMP 2.10.0 to GIMP 2.10.2.

Notable changes are the added support for the HEIF image format and new filters. This release brings HEIF image support, both for loading and export. Two new filters have been added, based on GEGL operations: the Spherize filter to wrap an image around a spherical cap, based on the gegl:spherize operation, and the Recursive Transform filter to create a Droste effect, based on the gegl:recursive-transform operation.


How to install GIMP 2.10.2 on Fedora or Ubuntu distributions

GIMP 2.10.2 is available for all Linux distributions in various packages through the Flatpak packaging system, as well as for Windows and Mac OS X. We’re going to look at how to install GIMP on Fedora or Ubuntu.

The suggested way to install the new version of GIMP on Fedora systems is with Flatpak:
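A sketch of the Flatpak route, assuming GIMP’s Flathub application ID is org.gimp.GIMP and the Flathub remote is already configured:

```shell
# Install GIMP from Flathub and launch it (application ID assumed: org.gimp.GIMP)
flatpak install -y flathub org.gimp.GIMP
flatpak run org.gimp.GIMP
```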

The easy way to install the new version of GIMP on Ubuntu systems is through the otto-kesselgulasch PPA. Open a terminal session and type:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo apt update
sudo apt install gimp

You can also install GIMP on Ubuntu (or any other Linux distribution) through Flatpak packaging system:

The new version of GIMP ships with far more new features, including new and improved tools, better file formats support, various usability improvements, revamped color management support, a plethora of improvements targeted at digital painters and photographers, metadata editing, and much, much more.

So, after discovering how to install GIMP on Fedora or Ubuntu, enjoy your GIMP!

The post GIMP 2.10.2 released. How to install it on Fedora or Ubuntu appeared first on cialu.net.

#FaasFriday – Building a Serverless Microblog in Ruby with OpenFaaS – Part 1

Posted by Keiran "Affix" Smith on May 18, 2018 08:00 AM

Happy #FaaSFriday! Today we are going to learn how to build our very own serverless microblogging backend using pure Ruby. Alex Ellis of OpenFaaS believes it would be useful not to rely on any gems that require native extensions (MySQL, ActiveRecord, bcrypt, etc.), so that’s what I intend to do. This tutorial will use nothing but pure Ruby to keep the containers small.

Before you continue reading, I highly recommend you check out OpenFaaS on Twitter and star the OpenFaaS repo on GitHub.


This microblogging platform, as is, should not be used in a production environment, as the password encryption is sub-optimal (i.e. it’s plain SHA2 and not salted), but it could easily be changed to something more suitable like bcrypt.

We make extensive use of environment variables; however, you should use secrets instead. Read more about that here.
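As a hedged sketch of what the secrets approach might look like with faas-cli (the secret name mysql-password is an example, and your gateway must support secrets):

```shell
# Create a secret instead of passing the password as an environment variable;
# functions can then read it from the secrets mount at runtime.
faas-cli secret create mysql-password --from-literal='<PASSWORD>'
```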

Now that’s out of the way, let’s get stuck in.


As always, in order to follow this tutorial there are pre-requisites:

  • You have a working OpenFaaS deployment
    • Mine is on Kubernetes, but you can also use Docker Swarm
  • You have an understanding of Ruby
  • You have access to a MySQL database
  • You can make calls to an OpenFaaS Function

What is OpenFaaS?

With OpenFaaS you can package anything as a serverless function – from Node.js to Golang to CSharp, even binaries like ffmpeg or ImageMagick.


Serverless Microblog Architecture

Our serverless microblogging platform consists of 5 functions.


Register

Our Register function will accept a JSON string containing a username, password, first name, last name and e-mail address. Once submitted, we will check the database for an existing username and e-mail; if none is found, we will add a new record to the database.


Login

Our Login function will accept a JSON string containing a username and password. We will then return a signed JWT containing the username, first name, last name and e-mail.

Add Post

Our Add Post function will accept a JSON string containing the token (from login) and the body of the post.

Before we add the post to the database, we will call the Validate JWT function to validate the signature of the JWT token. We will take the user ID from the token.

List Posts

When we call the List Posts function, we can pass it a JSON string to filter the posts by user and paginate. By default we will return the last 100 posts. To paginate, we will add an offset parameter to our JSON.

If we don’t send a filter, we will return the latest 100 posts in our database.
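As a hypothetical example of the request shape (the function name faas-ruby-list-posts and the field names are assumptions for this part of the series):

```shell
# Fetch posts by user "affixx", skipping the first 100 (i.e. the second page)
echo '{"username": "affixx", "offset": 100}' | faas-cli invoke faas-ruby-list-posts
```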

Registering Users

In order to add posts we need to have users, so let’s get started by registering our first user.

Run the following SQL query to create the users table.

CREATE TABLE users
(
 -- the id, username and email columns were truncated from the original listing;
 -- they are reconstructed here from the insert statement used in handler.rb
 id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
 username VARCHAR(255) NOT NULL,
 email VARCHAR(255) NOT NULL,
 password TEXT NOT NULL,
 first_name TEXT NOT NULL,
 last_name TEXT NOT NULL
);
CREATE UNIQUE INDEX users_id_uindex ON users (id);

Now that we have our table, let’s create our first function.

$ faas-cli new faas-ruby-register --lang ruby

This will download the templates and create our first function. Open the Gemfile and add the ‘ruby-mysql’ gem:

source 'https://rubygems.org'

gem 'ruby-mysql'

That’s the only gem we need for this function. ruby-mysql is a pure-Ruby implementation of a MySQL connector and suits the needs of our project. We will be using this gem extensively.

Now open up handler.rb and add the following code.

require 'mysql'
require 'json'
require 'digest/sha2'

class Handler
    def run(req)
      @my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      json = JSON.parse(req)
      if !user_exists(json["username"], json["email"])
        password = Digest::SHA2.new << json['password']
        stmt = @my.prepare('insert into users (username, password, email, first_name, last_name) values (?, ?, ?, ?, ?)')
        stmt.execute json["username"], password.to_s, json["email"], json["first_name"], json["last_name"]
        return "{'username': #{json["username"]}, 'status': 'created'}"
      else
        return "{'error': 'Username or E-Mail already in use'}"
      end
    end

    def user_exists(username, email)
      # Any matching row means the username or e-mail is already taken
      @my.query("SELECT username FROM users WHERE username = '#{Mysql.escape_string(username)}' OR email = '#{Mysql.escape_string(email)}'").each do |row|
        return true
      end
      false
    end
end
And that’s our function. Let’s look at some of it in detail. In our run method, I declared an instance variable @my holding our MySQL connection. I then parsed the JSON passed to the function. I used a user_exists method to determine if a user already exists in our database; if not, I moved on to create a new user. I hashed the password with SHA2 and used a prepared statement to insert our new user.

Open your faas-ruby-register.yml and make it match the following. Please ensure you use your own image instead of mine if you are making modifications.

provider:
  name: faas

functions:
  faas-ruby-register:
    lang: ruby
    handler: ./faas-ruby-register
    image: affixxx/faas-ruby-register
    environment:
      mysql_host: <HOST>
      mysql_user: <USERNAME>
      mysql_password: <PASSWORD>
      mysql_database: <DB>

Now let's deploy and test the function!

$ faas-cli build -f faas-ruby-register.yml # Build our function Container
$ faas-cli push -f faas-ruby-register.yml # Push our container to dockerhub
$ faas-cli deploy -f faas-ruby-register.yml # deploy the function
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'username': username, 'status': 'created'}
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'error': 'Username or E-Mail already in use'}

Awesome, registration works. Let's move on.

Logging In

Now that we have a user in our database, we need to log them in. This will require generating a JWT, so we need an RSA keypair. On a Unix-based system, run the following commands:

$ openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048
$ openssl rsa -pubout -in private_key.pem -out public_key.pem

Now that we have our keypair, we need a base64 representation of each key; use a method of your choice to get one.
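For example, Ruby's standard library can do this. In practice you would read the PEM files written by the openssl commands above; the sketch below generates a throwaway in-process key instead so it is self-contained:

```ruby
require 'openssl'
require 'base64'

# In practice, read the files created above:
#   private_b64 = Base64.strict_encode64(File.read('private_key.pem'))
# For a self-contained illustration, generate a throwaway keypair:
key = OpenSSL::PKey::RSA.new(2048)

# strict_encode64 emits a single line with no newlines, which is what
# we want for pasting into the function's YAML environment block.
private_b64 = Base64.strict_encode64(key.to_pem)
public_b64  = Base64.strict_encode64(key.public_key.to_pem)

puts private_b64
puts public_b64
```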

Let's generate our new function with faas-cli:

$ faas-cli new faas-ruby-login --lang ruby

Before we start working on the function, let's get the faas-ruby-login.yml file ready:

provider:
  name: faas

functions:
  faas-ruby-login:
    lang: ruby
    handler: ./faas-ruby-login
    image: affixxx/faas-ruby-login
    environment:
      public_key: <BASE64_RSA_PUBLIC_KEY>
      private_key: <BASE64_RSA_PRIVATE_KEY>
      mysql_host: mysql
      mysql_user: root
      mysql_database: users

Now we can write the function. This one is a little more complex than registration, so open the faas-ruby-login/handler.rb file and replace its contents with the following:

require 'mysql'
require 'json'
require 'jwt'
require 'base64'
require 'openssl'
require 'digest/sha2'

class Handler
    def run(req)
      my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      req = JSON.parse(req)
      username = Mysql.escape_string(req['username'])
      my.query("SELECT email, password, username, first_name, last_name FROM users WHERE username = '#{username}'").each do |email, password, username, first_name, last_name|
        digest = Digest::SHA2.new << req['password']
        if digest.to_s == password
          user = {
            email: email,
            first_name: first_name,
            last_name: last_name,
            username: username
          }
          token = generate_jwt(user)
          return "{'username': '#{username}', 'token': '#{token}'}"
        else
          return "{'error': 'Invalid username/password'}"
        end
      end
      return "{'error': 'Invalid username/password'}"
    end

    def generate_jwt(user)
      payload = {
        nbf: Time.now.to_i - 10,
        iat: Time.now.to_i - 10,
        exp: Time.now.to_i + ((60) * 60) * 4,
        user: user
      }

      priv_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['private_key']))

      JWT.encode(payload, priv_key, 'RS256')
    end
end

The biggest difference between this function and our register function is of course the JWT generator. JSON Web Tokens (JWT) are an open, industry-standard method (RFC 7519) for representing claims securely between two parties.

Our payload contains the user hash we fetched from the database, but there are some other required fields:

nbf: Not Before. Our token is not valid before this timestamp. We subtract 10 seconds to account for clock drift.
iat: Issued At. The time we issued the token; again set 10 seconds in the past to account for clock drift.
exp: Expiry. When our token is no longer valid; we set it to 14,400 seconds (4 hours) from now.
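The jwt gem handles all of this for us, but the mechanics are easy to see with the standard library alone. Here is a hand-rolled RS256 sketch of what JWT.encode produces, using a throwaway key and illustrative claims (use the jwt gem in real code):

```ruby
require 'openssl'
require 'base64'
require 'json'

# Base64url without padding, as the JWT spec requires.
def b64url(data)
  Base64.urlsafe_encode64(data).delete('=')
end

# Throwaway keypair for illustration; the real function loads its key
# from the private_key environment variable instead.
key = OpenSSL::PKey::RSA.new(2048)
now = Time.now.to_i

header  = b64url(JSON.generate(alg: 'RS256', typ: 'JWT'))
payload = b64url(JSON.generate(
  nbf: now - 10,              # not valid before ~now (clock drift)
  iat: now - 10,              # issued at ~now (clock drift)
  exp: now + 4 * 60 * 60,     # expires in 4 hours
  user: { username: 'affixx' }
))

signing_input = "#{header}.#{payload}"
signature = b64url(key.sign(OpenSSL::Digest::SHA256.new, signing_input))
token = "#{signing_input}.#{signature}"

# A consumer (like JWT.io) verifies with the public key only:
h, p, s = token.split('.')
sig = Base64.urlsafe_decode64(s + '=' * (-s.length % 4))
valid = key.public_key.verify(OpenSSL::Digest::SHA256.new, sig, "#{h}.#{p}")
```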

Let's test our login function!

$ echo '{"username": "affixx", "password": "TestPassword"}' | faas invoke faas-ruby-login
{'username': 'affixx', 'token': 'eyJhbGciOiJSUzI1NiJ9.eyJuYmYiOjE1MjY1OTMyMTgsImlhdCI6MTUyNjU5MzIxOCwiZXhwIjoxNTI2NjA3NjI4LCJ1c2VyIjp7ImVtYWlsIjoiY2FvbnRhY3RAa2VpcmFuLnNjb3QiLCJmaXJzdF9uYW1lIjoia2VpcmFuIiwibGFzdF9uYW1lIjoic21pdGgiLCJ1c2VybmFtZSI6ImFmZml4eCJ9fQ.qchkmOk8dsrw7SL6Rhi0nHyIlaHX4pzUNXXAQMEOb6IU0n1uT9AJEhFVptZ7tueriaTauY1zmYjKm79pd_UfekVICU4EMbGKt8bQaWrmlqpSel88PyQwolI_bYZqybW2TwWYsdwHcGgGgfb8A8ssk9y6YhktviKdofQYPUmLmaB5uljFHkMvNIg-ByJQpTYmCnMfAC-JF6mOsh65dKCP3qz78HiSX3gHODG1Gk1OJbePVpyDNmw7pGrO97c7kUgTWs5wVmD7Kgs697tAkPz65pFDavwZHSvdzpPEZ47Bh8NCGfWe73KYpceCjmOZK6tuawIx0MM4YP0XWke7kOtKkg'}

Success! Let's check the token on JWT.io.

Success we have a valid JWT.

What happens with an invalid username/password?

$ echo '{"username": "affixx", "password": "WrongPassword"}' | faas invoke faas-ruby-login
{'error': 'Invalid username/password'}

Exactly as expected!

That's all folks!

Thanks for reading part 1 of this tutorial. Check back next #FaaSFriday, when we will work on posting!

As always, the code for this tutorial is available on GitHub, where there are also extra tools available!

The post #FaasFriday – Building a Serverless Microblog in Ruby with OpenFaaS – Part 1 appeared first on Keiran.SCOT.

Atom pour remplacer Netbeans

Posted by Guillaume Kulakowski on May 17, 2018 06:52 PM

In the past, I wrote an article about my move from Eclipse to Netbeans. I have to say that over time this IDE has disappointed me: a rather slow release cycle. A Python plugin that hasn't been maintained for nearly 3 years. Heavy (who said Java?). Oracle, which through its acquisition policy […]

The post Atom pour remplacer Netbeans first appeared on Guillaume Kulakowski's blog.

Blue Sky Discussion: EPEL-next or EPIC

Posted by Stephen Smoogen on May 16, 2018 11:54 PM

EPIC Planning Document

History / Background

Since 2007, Fedora Extra Packages for Enterprise Linux (EPEL) has been rebuilding Fedora Project Linux packages for Red Hat Enterprise Linux and its clones. Originally the goal was to compile packages that RHEL did not ship but were useful in the running of Fedora Infrastructure and other sites. Packages would be forked from the nearest Fedora release (Fedora 3 for EPEL-4, Fedora 6 for EPEL-5) with little updating or moving of packages, in order to give lifetimes similar to the EL packages. Emphasis was placed on back-porting fixes rather than upgrading, and on not making large feature changes which would cause confusion. If a package could no longer be supported, it would be removed from the repository to eliminate security concerns. At the time, RHEL lifetimes were thought to be only 5-6 years, so back-porting did not look like a large problem.

As RHEL and its clones became more popular, Red Hat began to extend the lifetime of the Enterprise Linux releases from 6 years to 10 years of "active" support. This made back-porting fixes harder, and many packages in EPEL would be "aged" out and removed. This in turn caused problems for consumers who had tied kickstarts and other scripts to those packages being available. Attempts to fix this by pushing for release upgrade policies have run into resistance from packagers who find focusing on the main Fedora releases a full-time job already and only build EPEL packages as one-offs. Other attempts to update policies have required major updates and changes to build tools and scripting, but there has been no time to do so. Finally, because EPEL has not changed substantially in 10 years, conversations about change run into "well, EPEL has always done it like this" from consumers, packagers, and engineering alike.

In order to get around many of these resistance points with changing EPEL, I suggest that we frame the problems around a new project called Extra Packages for Inter Communities. The goal of this project would be to build packages from Fedora Project Linux releases for the various Enterprise Linux distributions, whether Red Hat Enterprise Linux, CentOS, Scientific Linux, or Oracle Enterprise Linux.

Problems and Proposals

Composer Limitations:

Currently EPEL uses the Fedora build system to compose a release of packages every couple of days. Because each day creates a new compose, the only channels are the various architectures and a testing channel where future packages can be tested. Updates are not in a separate repository because EPEL does not track releases.
EPEL packagers currently have to support a package for the 10 year lifetime of an RHEL release. If they have to update a release, all older versions are no longer available. If they no longer want to support a package it is completely removed. While this sounds like it increases security of consumers, Fedora does not remove old packages from older releases.
Proposed Solution
EPIC will match the Enterprise Linux major/minor numbers for releases. This means that a set of packages will be built for say EL5 sub-release 11 (aka 5.11). Those packages would populate for each supported architecture a release, updates and updates-testing directory. This will allow for a set of packages to be composed when the sub-release occurs and then stay until the release is ended.

Once a minor release is done, the old tree will be hard linked to an appropriate archive directory.


A new one will be built and placed in appropriate sub directories. Hard links to the latest will point to the new one, and after some time the old-tree will be removed from the active directory tree.

Channel Limitations:

EPEL is built against a subset of channels that Red Hat Enterprise Linux has for customers, namely the Server, High Availability, Optional, and some sort of Extras. Effort is made to make sure that EPEL does not replace with newer packages anything in those channels. However this does not extend to packages which are in the Workstation, Desktop, and similar channels. This can cause problems where EPEL’s packages replace something in those channels.
Proposed Solution
EPIC will be built against the latest released CentOS minor release using the channels which are enabled by default in CentOS-Base.repo. These packages are built from source code that Red Hat delivers via a git mechanism to the CentOS project in order to rebuild them for mass consumption. Packages will not be allowed to replace or update base packages according to the standard RPM Name-Epoch-Version-Release (NEVR) mechanism. This will allow EPIC to serve more clients.

Build System Limitations

EPEL is built against Red Hat Enterprise Linux. Because these packages are not meant for general consumption, the Fedora Build-system does not import them but builds them similarly to a hidden build-root. This causes multiple problems:
  • If EPEL has a package with the same name, it supersedes the RHEL one even if the NEVR is newer. This means old packages may get built against and constant pruning needs to be done.
  • If the EPEL package has a newer NEVR, it will replace the RHEL one which may not be what the consumer intended. This may break other software requirements.
  • Because parts of the build are hidden the package build may not be as audit-able as some consumers would like.
Proposed Solution
EPIC will import into the build system the CentOS build it is building against. With this the build is not hidden from view. It also makes it easier to put in rules that an EPIC package will never replace/remove a core build package. Audits of how a build is done can be clearly shown.

Greater Frequency Rebasing

Red Hat Enterprise Linux has been split between competing customer needs. Customers wish to have some packages stay steady for 10 years with only some updates, but they have also found that they need rapidly updated software. In order to bridge this, recent RHEL releases have rebased many software packages during a minor release. This has caused problems because EPEL packages were built against older software ABIs which no longer work with the latest RHEL. This requires the EPEL software to be rebased and rebuilt regularly. Conversely, because of how the Fedora build system sees Red Hat Enterprise Linux packages, it only knows about the latest packages. In the 2-4 weeks between various community rebuilds getting their minor release packages built, EPEL packages may be built against APIs which are not available.

Proposed Solution
The main EPIC releases will be built against specific CentOS releases rather than the Continual Release (CR) channel. When the next RHEL minor release is announced, EPIC releng will create a new git branch from the current minor version (e.g. 5.10 → 5.11). Packagers can then make major version updates or other needed changes. When the CentOS CR is populated with the new rpms, CR will be turned on in koji and packages will be built in the new tree using those packages. After 2 weeks, the EPIC minor release will be frozen and any new packages or fixes will occur in the updates tree.




This release is no longer supported by CentOS and will not be supported by EPIC.


This release is no longer supported by CentOS and will not be supported by EPIC.


This release is supported until Nov 30 2020 (2020-11-30). The base packaging rules for any package would be those used by the Fedora Project during its 12 and 13 releases. Where possible, EPIC will make macros to keep packaging more in line with current packaging rules.


This release is supported until Jun 30 2024 (2024-06-30). The base packaging rules for any package would be those used by the Fedora Project during its 18 and 19 releases. Because EL7 has seen major updates in certain core software, it is possible to follow newer packaging rules from later releases.


Red Hat has not publicly announced what its next release will be, when it will be released, or what its lifetime is. When that occurs, it will be clearer which Fedora release packaging will be based off of.

GIT structure

Currently EPEL uses only 1 branch for every major RHEL release. In order to better match how current RHEL releases contain major differences, EPIC will have a branch for every major.minor release. This is to allow for people who need older versions for their usage to better snapshot and build their own software off of it. There are several naming patterns which need to be researched:


Git module patterns will need to match what upstream delivers for any future EL.

Continuous Integration (CI) Gating


The EL-6 life-cycle is reaching its final sub-releases, with more focus and growth in EL-7 and the future. Because of this, gating will be turned off for EPIC-6. Testing of packages can be done at the packager's discretion but is not required.


The EL-7 life-cycle is midstream with 1-2 more minor releases with major API changes. Due to this, it makes sense to research if gating can be put in place for the next minor release. If the time and energy to retrofit tools to the older EL are possible then it can be turned on.


Because gating is built into current Fedora releases, there should be no problem with turning it on for a future release. Packages which do not pass testing will be blocked just as they will be in Fedora 29+ releases.



Because EL-6’s tooling is locked at this point, it does not make sense to investigate modules.


Currently EL-7 does not support Fedora modules and would require updates to yum, rpm and other tools in order to do so. If these show up in some form in a future minor release, then trees for modules can be created and builds done.


The tooling for modules can match how Fedora approaches it. This means that rules for module inclusion will be similar to package inclusion. EPIC-next modules must not replace/conflict with CentOS modules. They may use their own name-space to offer newer versions than what is offered and those modules may be removed in the next minor release if CentOS offers them then.

Build/Update Policy

Major Release

In the past, Red Hat has released a public beta before it finalizes its next major version. If possible, the rebuilders have come out with their versions of this release in order to learn what gotchas they will have when the .0 release occurs. Once the packages for the beta are built, EPIC will make a public call for packages to be released to it. Because packagers may not want to support a beta or they know that there will be other problems, these packages will NOT be auto branched from Fedora.

Minor Release

The current method CentOS uses to build a minor release is to begin rebuilding packages, patching problems, and then, when ready, put those packages in their /cr/ directory. These are then tested by people while updates are built and ISOs for the final minor release are produced. The steps for EPIC release engineering will be the following:
  1. Branch all current packages from X.Y to X.Y+1
  2. Make any Bugzilla updates needed
  3. Rebuild all branched packages against CR
  4. File FTBFS against any packages.
  5. Packagers will announce major updates to mailing list
  6. Packagers will build updates against CR.
  7. 2 weeks in, releng will cull any packages which are still FTBFS
  8. 2 weeks in, releng will compose and lock the X.Y+1 release
  9. symlinks will point to the new minor release.
  10. 4 weeks in, releng will finish archiving off the X.Y release

Between Releases

Updates and new packages between releases will be pushed to the appropriate /updates/X.Y/ tree. Packagers will be encouraged to only make minor, non-API-breaking updates during this time. Major changes are possible, but need to follow this work flow:
  1. Announce to the EPEL list that a change is required and why
  2. Open a ticket to EPIC steering committee on this change
  3. EPIC steering committee approves/disapproves change
  4. If approved change happens but packages are in updates
  5. If not approved it can be done next minor release.

Build System

Build in Fedora

Currently EPEL is built in Fedora using the Fedora build system, which integrates koji, bodhi, greenwave, and other tools. This could still be used for EPIC.

Build in CentOS

EPIC could be built in the CentOS BuildSystem (CBS) which also uses koji and has some integration to the CentOS Jenkins CI system.

Build in Cloud

Instead of using existing infrastructure, EPIC would be built with newly stood-up builders in Amazon or similar cloud environments. The reasoning behind this would be to see if other build systems could transition there eventually.


Blue Sky Project: A project with a different name to help eliminate preconceptions with the existing project.
Customer: A person who pays for a service either in money, time, or goods.
Consumer: Sometimes called a user. A person who is consuming the service without work put into it.
EPEL: Extra Packages for Enterprise Linux. A product name which was to be replaced years ago, but no one came up with a better one.
EPIC: Extra Packages Inter Community.
RHEL: Red Hat Enterprise Linux.

Last updated 2018-05-16 19:10:17 EDT. This document was imported from an adoc.

Fantastic kernel patches and where to find them

Posted by Laura Abbott on May 16, 2018 06:00 PM

I've griped before about kernel development being scattered and spread about. A quick grep of MAINTAINERS shows over 200 git trees and even more mailing lists. Today's discussion is a partial enumeration of some common mailing lists, git trees and patchwork instances. You can certainly find some of this in the MAINTAINERS file.

  • LKML. The main mailing list. This is the one everyone thinks of when they think 'kernel'. Really though, it mostly serves as an archive of everything at this point. I do not recommend e-mailing just LKML with no other lists or people. Sometimes you'll get a response but think of it more as writing to your blog that has 10 followers you've never met, 7 of which are bots. Or your twitter. There is a patchwork instance and various mail archives out there. I haven't found one I actually like as much as GMANE unfortunately. The closest corresponding git tree is the master where all releases happen.

  • The stable mailing list. This is where patches go to be picked up for stable releases. The stable releases have a set of rules for how patches are picked up. Most important is that a patch must be in Linus' tree before it will be applied to stable. Greg KH is the main stable maintainer. He does a fantastic job of taking care of the large number of patches that come in. In general, if a patch is properly tagged for stable, it will show up eventually. There is a tree for his queue of patches to be applied, along with the stable git trees.

  • Linux -next. This is the closest thing to an integration tree right now. The goal is to find merge conflicts and bugs before they hit Linus' tree. All the work of merging trees is handled manually. Typically subsystem maintainers have a branch that's designated for -next which gets pulled in on a daily basis. Running -next is not usually recommended for anything more than "does this fix your problem" unless you are willing to actively report bugs. Running -next and learning how to report bugs is a great way to get involved though. There's a tree with tags per day.

  • The -mm tree. This gets its name from memory management but really it's Andrew Morton's queue. Lots of odd fixes end up getting queued through here. Officially, this gets maintained with quilt. The tree for -next "mmotm" (mm of the moment) is available as a series. If you just want the memory management part of the tree, there's a tree available for that.

  • Networking. netdev is the primary mailing list which covers everything from core networking infrastructure to drivers. And there's even a patchwork instance too! David Miller is the top level networking maintainer and has a tree for all your networking needs. He has a separate -next tree. One thing to keep in mind is that networking patches are sent to stable in batches and not just tagged and picked up by Greg KH. This sometimes means a larger gap between when a patch lands in Linus' branch and when it gets into a stable release.

  • Fedora tree. Most of the git trees listed above are "source git/src-git" trees, meaning it's the actual source code. Fedora officially distributes everything in "pkg-git" form. If you look at the official Fedora kernel repository, you'll see it contains a bunch of patches and support files. This is similar to the -mm and -stable-queue. Josh Boyer (Fedora kernel maintainer emeritus) has some scripts to take the Fedora pkg-git and put it on kernel.org. This gets updated automatically with each build.

  • DRM. This is for anything and everything related to graphics. Most everything is hosted at freedesktop.org, including the mailing list. Recently, DRM has switched to a group maintainer model (Daniel Vetter has written about some of this philosophy before). Ultimately though, all the patches will come through the main DRM git repo. There's a DRM -tip for -next like testing of all the latest graphics work. Graphics maintainers may occasionally request you test that tree if you have graphics problems. There's also a patchwork instance.

GNOME Performance Hackfest

Posted by Alberto Ruiz on May 16, 2018 03:04 PM

We're about to finish the three-day first GNOME Performance Hackfest here in Cambridge.

We started by covering a few topics. There are three major areas we've covered, and in each of those there has been a bunch of initiatives.





GNOME Shell performance

Jonas Adahl, Marco Trevisan, Eric Anholt, Emmanuele Bassi, Carlos Garnacho and Daniel Stone have been flocking together around Shell performance. There has been some high level discussions about the pipeline, Clutter, Cogl, cairo/gtk3 and gtk4.

The main effort has been around creating probes across the stack to help Christian Hergert with sysprof (in drm, mutter, gjs…) so that we can actually measure performance bottlenecks at different levels and pinpoint culprits.

We’ve been also looking at the story behind search providers and see if we can rearchitect things a bit to have less roundtrips and avoid persistent session daemons to achieve the same results. Discussions are still ongoing on that front.

GNOME Session resource consumption

Hans de Goede put together a summary of the resource consumed in a typical GNOME session in Fedora and tweaks to avoid those, you can check the list in the agenda.

There are some issues specific to Fedora there, but the biggest improvement we can achieve is shutting down GDM's own gnome-shell instance, for which Hans already has a working patch. This should reduce resource consumption by 280 MB of RAM.

The second biggest target is GNOME Software, which we keep running primarily for the shell provider. Richard Hughes was here yesterday and is already working on a solution for this.

We are also looking into the different GNOME Settings Daemon processes and trying to figure out which ones we can shut down until needed.

Surely there’s stuff I’ve missed, and hopefully we’ll see blogposts and patches surfacing soon after we wrap up the event. Hopefully we can follow up during GUADEC and start showing the results.

On Tuesday we enjoyed some drinks out, kindly hosted by Collabora.

I'd like to thank Eben Upton and the Raspberry Pi Foundation for sponsoring the venue and sending Eric Anholt over.


New badge: SELF 2018 !

Posted by Fedora Badges on May 16, 2018 02:18 PM
You visited the Fedora table at SouthEast LinuxFest (SELF) 2018!

New badge: Let's have a party (Fedora 28) !

Posted by Fedora Badges on May 16, 2018 02:09 PM
You organized a party for the release of Fedora 28.

New badge: Fedora 28 Release Partygoer !

Posted by Fedora Badges on May 16, 2018 01:50 PM
You attended a local release party to celebrate the launch of Fedora 28!

New badge: OSCAL 2018 Attendee !

Posted by Fedora Badges on May 16, 2018 01:42 PM
You visited the Fedora booth at OSCAL 2018 in Tirana, Albania!

Cockpit 168

Posted by Cockpit Project on May 16, 2018 11:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 168.

Improve checks for root privilege availability

Many actions, like joining an IPA domain or rebooting, can only be performed by an administrator (root user).

Cockpit previously checked if the user was a member of the wheel (used in Red Hat Enterprise Linux, CentOS, and Fedora) or sudo (used in Debian and Ubuntu) groups to enable these actions. Simple group checking was insufficient, as different group names are used by other operating systems and configurations, or a system might be set up to rely on custom user-based sudo rules.

Cockpit no longer makes assumptions about special groups. Now, Cockpit simply checks if the user is capable of running commands as root.

As a result of this fix, privileged operations now work by default with FreeIPA, which uses the admins@domain.name group.

Try it out

Cockpit 168 is available now:

New badge: FLISOL 2018 Attendee !

Posted by Fedora Badges on May 16, 2018 09:17 AM
You visited the Fedora booth at FLISOL 2018!

Introducing the 1.8 freedesktop runtime in the gnome nightly builds

Posted by Alexander Larsson on May 16, 2018 08:03 AM

All the current Flatpak runtimes in wide use are based on the 1.6 Freedesktop runtime. This is a two-layered beast where the lower layer is built using Yocto and the upper layer is built using flatpak-builder.

Yocto let us get a basic system bootstrapped, but it is not really a great match for the needs of a Flatpak runtime. We’ve long wanted a base that targeted sandboxed builds which is closer to upstream and without the embedded cross-compiled legacy of Yocto. This is part of the reason why Tristan started working on BuildStream, and why the 1.8 Freedesktop runtime was created from scratch, using it.

After a herculean effort from the people at Codethink (who sponsor this effort) we now have a working Yocto-free version of the Platform and Sdk. You can download the unstable version from here and start playing with it. It is not yet frozen or API/ABI stable, but it's getting there.

The next step in this effort is to test it as widely as possible to catch any issues before the release is frozen. In order to do this I rebased the Gnome nightly runtime builds on top of the new Freedesktop version this week. This is a good match for a test release, because it has no real ABI requirements (being rebuilt with the apps daily), yet gets a fair amount of testing.

WARNING: During the initial phase it is likely that there will be problems. Please test your apps extra carefully and report all the issues you find.

In the future, the goal is to also convert the Gnome runtimes to BuildStream. Work on this has started, but for now we want to focus on getting the base runtime stable.

Fedora 28 press review

Posted by Charles-Antoine Couret on May 15, 2018 03:00 PM

Since Fedora 19, I have been publishing a press review of each new release on the Fedora-fr mailing list, recapping which sites talk about it and how. I always do it two weeks after the release (so that everyone has time to cover it). Now, on to Fedora 28!

Of course, I leave out my own blog and the fedora-fr forum.

News websites

That is 7 sites out of the 25 contacted.

Blogs, personal sites, or sites not contacted

That is 2 sites.


The number of sites covering Fedora 28 is slightly down, two blogs fewer. Many articles are based on what I wrote myself (either the short or the long version). Note that developpez.net contacted me about publishing the long version next time, which I will of course try to do.

The week of the release, we saw roughly this increase in visits compared to the week before:

  • Forums: 4% (about 250 visits)
  • Documentation: down 1% (about 40 visits)
  • The Fedora-fr site: 43% (about 500 extra visits)
  • Borsalinux-fr: 344% (about 90 extra visits)

Bear in mind the particular circumstances: a release on May 1st, which is a public holiday across most of the French-speaking world, apart from Quebec.

If you know of another link, feel free to share it! See you for Fedora 29.

Protect your Fedora system against this DHCP flaw

Posted by Fedora Magazine on May 15, 2018 02:59 PM

A critical security vulnerability was discovered and disclosed earlier today in dhcp-client. This DHCP flaw carries a high risk to your system and data, especially if you use untrusted networks such as a WiFi access point you don’t own. Read more here for how to protect your Fedora system.

Dynamic Host Configuration Protocol (DHCP) allows your system to get configuration from a network it joins. Your system will make a request for DHCP data, and typically a server such as a router answers. The server provides the necessary data for your system to configure itself. This is how, for instance, your system configures itself properly for networking when it joins a wireless network.

However, an attacker on the local network may be able to exploit this vulnerability. Using a flaw in a dhcp-client script that runs under NetworkManager, the attacker may be able to run arbitrary commands with root privileges on your system. This DHCP flaw puts your system and your data at high risk. The flaw has been assigned CVE-2018-1111 and has a Bugzilla tracking bug.

Guarding against this DHCP flaw

New dhcp packages contain fixes for Fedora 26, 27, and 28, as well as Rawhide. The maintainers have submitted these updates to the updates-testing repositories. They should show up in stable repos within a day or so of this post for most users. The desired packages are:

  • Fedora 26: dhcp-4.3.5-11.fc26
  • Fedora 27: dhcp-4.3.6-10.fc27
  • Fedora 28: dhcp-4.3.6-20.fc28
  • Rawhide: dhcp-4.3.6-21.fc29

Updating a stable Fedora system

To update immediately on a stable Fedora release, use this command with sudo. Type your password at the prompt, if necessary:

sudo dnf --refresh --enablerepo=updates-testing update dhcp-client

Later, use the standard stable repos to update. To update your Fedora system from the stable repos, use this command:

sudo dnf --refresh update dhcp-client
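Once the update lands, a quick sanity check can confirm the installed build is at least the fixed one. A rough sketch, using sort -V to approximate RPM version ordering (not a full rpmvercmp implementation; the helper name is made up, and the fixed version string is the Fedora 28 build from the list above):

```shell
# Hypothetical helper: succeeds if $1 (installed version-release) is not
# older than $2 (fixed version-release). sort -V approximates RPM ordering.
version_at_least() {
    printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1 | grep -qx "$2"
}

# Query the installed package; falls back to "0" if it is not installed.
installed=$(rpm -q --qf '%{VERSION}-%{RELEASE}' dhcp-client 2>/dev/null)
if version_at_least "${installed:-0}" "4.3.6-20.fc28"; then
    echo "dhcp-client is patched"
else
    echo "dhcp-client is still vulnerable (or not installed)"
fi
```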

Updating a Rawhide system

If your system is on Rawhide, use these commands to download and update the packages immediately:

mkdir dhcp && cd dhcp
koji download-build --arch={x86_64,noarch} dhcp-4.3.6-21.fc29
sudo dnf update ./dhcp-*.rpm

After the nightly Rawhide compose, simply run sudo dnf update to get the update.

Fedora Atomic Host

The fixes for Fedora Atomic Host are in ostree version 28.20180515.1. To get the update, run this command:

atomic host upgrade -r

This command reboots your system to apply the upgrade.

Photo by Markus Spiske on Unsplash.

Fedora BoF report from Summit 2018

Posted by Paul W. Frields on May 15, 2018 12:00 AM

The post Fedora BoF report from Summit 2018 appeared first on The Grand Fallacy.

Last week I attended the Red Hat Summit 2018. There I interacted with customers, partners, community leaders, and some friends from around Red Hat. I enjoy going every year and look forward to it, despite the exhaustion factor. This year included a fun event for Fedora — a birds of a feather (or BoF) session. Read on for my report.

FPL Matthew Miller arranged the BoF, and several people assisted there, including Brian Exelbierd and me. Several core contributors attended as well. But out of the 30-35 people who attended, the vast majority were not community members, but rather people interested in or using Fedora.

We split into groups and ran each in a Lean Coffee style. My group included about 12 or so people. I don’t recall every topic in our prioritized cards, but notable ones included:

  • Use of legally encumbered codecs
  • NVidia drivers
  • How to get started contributing
  • Whence the minimal install

Our group members voted up the codecs + NVidia topics by a factor of more than 2 over the next highest rated topic! That’s why it made me very happy to report to them that Fedora 28 includes the GNOME Software function that lets users decide whether to enable selected repositories outside Fedora.

I know a few people grumble about this function. I used to rail about the topic myself. As I’ve grown older and met more people in different walks of life, I’ve realized I no longer want to interfere with their agency to make their own choices. The BoF attendees responded well to this new feature and were overjoyed to hear about the way this is now available. I’m pretty sure we made a few instant Fedora converts on the spot.

I enjoyed the BoF overall, but this particular topic resonated with me strongly. I know how hard several of the Workstation working group members worked to make this new feature a reality in Fedora 28. This experience showed me it has an impact on people’s interest in using Fedora. If it also makes them more interested in contributing, that’s great for Fedora in the long run.

Also, I realize I haven’t written on this blog in a very long time. I’ll try not to take so long next time, but no promises. Life is very busy!

Minicom to a Juniper SRX-220

Posted by Adam Young on May 14, 2018 08:28 PM

Cluster computing requires a cluster of computers. For the past several years, I have been attempting to get work done without having a home cluster. This is no longer tenable, and I need to build my own.

One of the requirements for a home cluster is a programmable network device. I’ve purchased a second-hand Juniper SRX220.

Juniper SRX 220


Here are my configuration notes.

To start, I needed to get a console cable: RJ45 on one end, a serial port on the other… except I don’t have any serial ports on my workstations any more, so I also needed a USB-to-serial cable to connect to that.

Console Cables

Yeah, just get a USB to RJ45 Console cable. But don’t expect to be able to buy one at your local office supply store.
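Once the adapter is plugged in, it usually enumerates as a USB-serial device. A small sketch to locate it (the /dev/ttyUSB* name is an assumption; verify with dmesg after plugging it in), followed by attaching minicom at the settings the SRX console typically expects (9600 baud, 8N1):

```shell
# Locate the first USB-serial device node (assumed name; check dmesg).
dev=$(ls /dev/ttyUSB* 2>/dev/null | head -n 1)
echo "${dev:-no adapter found}"

# Then attach minicom directly, skipping the interactive setup menu:
# sudo minicom -D "$dev" -b 9600
```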

Configure Minicom

sudo minicom -s

Pick A (serial port setup).

Power on the SRX220.

Can’t login

Reset root password


Push and hold the recessed “config” button until the status LED turns amber.

hit spacebar.
hit spacebar
boot -s
configure shared
set system root-authentication plain-text-password

Got this error

root# commit
 [edit interfaces]
 HA management port cannot be configured
 error: configuration check-out failed

Delete Old Interfaces

SRX cluster


delete interfaces ge-0/0/6
delete interfaces ge-0/0/7

 commit complete

A closer look at power and PowerPole

Posted by Daniel Pocock on May 14, 2018 07:25 PM

The crowdfunding campaign has so far raised enough money to buy a small lead-acid battery but hopefully with another four days to go before OSCAL we can reach the target of an AGM battery. In the interest of transparency, I will shortly publish a summary of the donations.

The campaign has been a great opportunity to publish some information that will hopefully help other people too. In particular, a lot of what I've written about power sources isn't just applicable for ham radio, it can be used for any demo or exhibit involving electronics or electrical parts like motors.

People have also asked various questions and so I've prepared some more details about PowerPoles today to help answer them.

OSCAL organizer urgently looking for an Apple MacBook PSU

In an unfortunate twist of fate while I've been blogging about power sources, one of the OSCAL organizers has a MacBook and the Apple-patented PSU conveniently failed just a few days before OSCAL. It is the 85W MagSafe 2 PSU and it is not easily found in Albania. If anybody can get one to me while I'm in Berlin at Kamailio World then I can take it to Tirana on Wednesday night. If you live near one of the other OSCAL speakers you could also send it with them.

If only Apple used PowerPole...

Why batteries?

The first question many people asked is why use batteries and not a power supply. There are two answers for this: portability and availability. Many hams like to operate their radios away from their home sometimes. At an event, you don't always know in advance whether you will be close to a mains power socket. Taking a battery eliminates that worry. Batteries also provide better availability in times of crisis: whenever there is a natural disaster, ham radio is often the first mode of communication to be re-established. Radio hams can operate their stations independently of the power grid.

Note that while the battery looks a lot like a car battery, it is actually a deep cycle battery, sometimes referred to as a leisure battery. This type of battery is often promoted for use in caravans and boats.

Why PowerPole?

Many amateur radio groups have already standardized on the use of PowerPole in recent years. The reason for having a standard is that people can share power sources or swap equipment around easily, especially in emergencies. The same logic applies when setting up a demo at an event where multiple volunteers might mix and match equipment at a booth.

WICEN, ARES / RACES and RAYNET-UK are some of the well known groups in the world of emergency communications and they all recommend PowerPole.

Sites like eBay and Amazon have many bulk packs of PowerPoles. Some are genuine, some are copies. In the UK, I've previously purchased PowerPole packs and accessories from sites like Torberry and Sotabeams.

The pen is mightier than the sword, but what about the crimper?

The PowerPole plugs for 15A, 30A and 45A are all interchangeable and they can all be crimped with a single tool. The official tool is quite expensive but there are many after-market alternatives like this one. It takes less than a minute to insert the terminal, insert the wire, crimp and make a secure connection.

Here are some packets of PowerPoles in every size:

Example cables

It is easy to make your own cables or to take any existing cables, cut the plugs off one end and put PowerPoles on them.

Here is a cable with banana plugs on one end and PowerPole on the other end. You can buy cables like this or if you already have cables with banana plugs on both ends, you can cut them in half and put PowerPoles on them. This can be a useful patch cable for connecting a desktop power supply to a PowerPole PDU:

Here is the Yaesu E-DC-20 cable used to power many mobile radios. It is designed for about 25A. The exposed copper section simply needs to be trimmed and then inserted into a PowerPole 30:

Many small devices have these round 2.1mm coaxial power sockets. It is easy to find a packet of the pigtails on eBay and attach PowerPoles to them (tip: buy the pack that includes both male and female connections for more versatility). It is essential to check that the devices are all rated for the same voltage: if your battery is 12V and you connect a 5V device, the device will probably be destroyed.

Distributing power between multiple devices

There are a wide range of power distribution units (PDUs) for PowerPole users. Notice that PowerPoles are interchangeable and in some of these devices you can insert power through any of the inputs. Most of these devices have a fuse on every connection for extra security and isolation. Some of the more interesting devices also have a USB charging outlet. The West Mountain Radio RigRunner range includes many permutations. You can find a variety of PDUs from different vendors through an Amazon search or eBay.

In the photo from last week's blog, I have the Fuser-6 distributed by Sotabeams in the UK (below, right). I bought it pre-assembled but you can also make it yourself. I also have a Windcamp 8-port PDU purchased from Amazon (left):

Despite all those fuses on the PDU, it is also highly recommended to insert a fuse in the section of wire coming off the battery terminals or PSU. It is easy to find maxi blade fuse holders on eBay and in some electrical retailers:

Need help crimping your cables?

If you don't want to buy a crimper or you would like somebody to help you, you can bring some of your cables to a hackerspace or ask if anybody from the Debian hams team will bring one to an event to help you.

I'm bringing my own crimper and some PowerPoles to OSCAL this weekend, if you would like to help us power up the demo there please consider contributing to the crowdfunding campaign.

Moving to Fedora Atomic 28

Posted by Fabian Affolter on May 14, 2018 04:30 PM

Whenever you see the black circle aka bullet marking an old entry…

$ sudo rpm-ostree status
State: idle; auto updates disabled
Version: 28.20180425.0 (2018-04-25 19:14:57)
Commit: 94a9d06eef34aa6774c056356d3d2e024e57a0013b6f8048dbae392a84a137ca
GPGSignature: Valid signature by 128CF232A9371991C8A65695E08E7E629DB62FB1

● ostree://fedora-atomic:fedora/27/x86_64/atomic-host
Version: 27.105 (2018-03-25 21:28:49)
Commit: c4015063c00515ddbbaa4c484573d38376db270b09adb22a4859faa0a39d5d93
GPGSignature: Valid signature by 860E19B0AFA800A1751881A6F55E7430F5282EE4

…then it’s time for a reboot. The upgrade of an Atomic host is very smooth.

$ sudo systemctl reboot

GSoC 2018: Kicking off the Coding

Posted by Amitosh Swain Mahapatra on May 14, 2018 03:55 PM

It’s May 14, and this is when we officially start coding for GSoC, 2018 edition. This time, I would be working on improving the Fedora Community App with the Fedora project. This marks the beginning of a journey of 3 months of coding, patching, debugging, git (mess) and the awesome discussions with my mentors and the community.

The Fedora App is a central location for Fedora users and innovators to stay updated on the Fedora Project. News updates, social posts, Ask Fedora, and articles from Fedora Magazine are all available in the app.

For the first month of my GSoC coding, I will be working on improving the code quality by completing the TypeScript conversion I started earlier, as well as bringing offline capabilities to the Fedora Magazine reader and Fedora Social.

Why TypeScript?

TypeScript (TS) is a super-set of JavaScript (JS) that brings optional static typing to JS. TS is the language used by Ionic, the framework on which the Fedora Community App is built. I personally believe that in a medium to large scale project, a static type system is necessary: it helps to catch errors very early and allows you to perform safe refactors. But I also agree that there are cases where it's useful to fall back to dynamic typing.

TS does just this. It is implemented as a syntax extension to JavaScript that is transpiled into JS by the TS compiler. Every valid JS syntax is also valid TS syntax, so it becomes easier to port a project, even partially, to use TS features without incurring the cost of a full project rewrite.

In my earlier work updating the source code to the latest version of Ionic, I started a conversion from JS to TS. Essential parts are already in place, but a big chunk is still left to convert. In my first week of coding, I will be working on this conversion.

Offline Capabilities in Fedora App

Currently, the app lacks any offline capabilities - you always require an internet connection to perform most of the actions.

My work will be to bring offline storage and sync to the Fedora Magazine and Fedora Social sections of the app. This will improve both the app's usability and its performance. From a UX perspective, we will start syncing data rather than blocking the user from doing anything (as the app does currently). The user can continue to act on cached items while we fetch new items in the background.

Further, in the second week, I will be working on implementing a lightweight blog reader and caching complete offline copies of user-selected articles.

Kerberos authentication in a container

Posted by Tomas Tomecek on May 14, 2018 12:47 PM

This is a quick one.

We have a bot which uses Kerberos for authentication with other services. Of course we run our bot army in containers within OpenShift.

How do we do it? How can we use Kerberos inside Linux containers?

…and not get eaten by errors such as

klist: Invalid UID in persistent keyring name while resolving ccache KEYRING:persistent:1000:krb_ccache_H2AfxtO


klist: Invalid UID in persistent keyring name while resolving ccache KEYRING:persistent:1000


klist: Invalid UID in persistent keyring name while getting default ccache


The main issue is that Kerberos by default stores credentials inside the kernel keyring. The keyring is not namespaced, so this is a privileged operation.

[pid 19198] keyctl(KEYCTL_GET_PERSISTENT, 1000, KEY_SPEC_PROCESS_KEYRING) = -1 EPERM (Operation not permitted)

The solution is really easy: just change how the ticket-granting ticket (TGT) is stored. We’ll store it in a file and we’re done.

So let’s launch a container using podman, bind-mounting the Kerberos configuration from the host into the container. Notice: no --cap-add nor --privileged.

+ sudo podman run -it -v /etc/krb5.conf:/etc/krb5.conf -v /etc/krb5.conf.d/:/etc/krb5.conf.d/ registry.fedoraproject.org/fedora:28 bash
Trying to pull registry.fedoraproject.org/fedora:28...Getting image source signatures
Copying blob sha256:548d1dae8c2b61abb3d4d28a10a67e21d5278d42d1f282428c0dcbba06844c2c
 85.59 MB / 85.59 MB [====================================================] 1m6s
Copying config sha256:426866d6fa419873f97e5cbd320eeb22778244c1dfffa01c944db3114f55772e
 1.27 KB / 1.27 KB [========================================================] 0s
Writing manifest to image destination
Storing signatures

We should install Kerberos tooling now:

[root@101ff1a35d4d /]# dnf install krb5-workstation
Fedora 28 - x86_64 - Updates                                                  5.0 MB/s | 7.1 MB     00:01
Fedora 28 - x86_64                                                            4.6 MB/s |  60 MB     00:13
Last metadata expiration check: 0:00:03 ago on Mon May 14 13:11:33 2018.
Dependencies resolved.
 Package                       Arch              Version                       Repository                  Size
 krb5-workstation              x86_64            1.16.1-2.fc28                 updates                    913 k
 krb5-libs                     x86_64            1.16.1-2.fc28                 updates                    801 k
Installing dependencies:
 libkadm5                      x86_64            1.16.1-2.fc28                 updates                    176 k
 libss                         x86_64            1.43.8-2.fc28                 fedora                      51 k

Transaction Summary
Install  3 Packages
Upgrade  1 Package

Total download size: 1.9 M
Is this ok [y/N]: y
Downloading Packages:
(1/4): libss-1.43.8-2.fc28.x86_64.rpm                                           750 kB/s |  51 kB     00:00
(2/4): libkadm5-1.16.1-2.fc28.x86_64.rpm                                        1.5 MB/s | 176 kB     00:00
(3/4): krb5-workstation-1.16.1-2.fc28.x86_64.rpm                                2.6 MB/s | 913 kB     00:00
(4/4): krb5-libs-1.16.1-2.fc28.x86_64.rpm                                       988 kB/s | 801 kB     00:00
Total                                                                           726 kB/s | 1.9 MB     00:02
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
  Preparing        :                                                                                        1/1
  Upgrading        : krb5-libs-1.16.1-2.fc28.x86_64                                                         1/5
warning: /etc/krb5.conf created as /etc/krb5.conf.rpmnew
  Installing       : libkadm5-1.16.1-2.fc28.x86_64                                                          2/5
  Installing       : libss-1.43.8-2.fc28.x86_64                                                             3/5
  Running scriptlet: libss-1.43.8-2.fc28.x86_64                                                             3/5
  Installing       : krb5-workstation-1.16.1-2.fc28.x86_64                                                  4/5
  Cleanup          : krb5-libs-1.16-21.fc28.x86_64                                                          5/5
  Running scriptlet: krb5-libs-1.16-21.fc28.x86_64                                                          5/5
  Verifying        : krb5-workstation-1.16.1-2.fc28.x86_64                                                  1/5
  Verifying        : libkadm5-1.16.1-2.fc28.x86_64                                                          2/5
  Verifying        : libss-1.43.8-2.fc28.x86_64                                                             3/5
  Verifying        : krb5-libs-1.16.1-2.fc28.x86_64                                                         4/5
  Verifying        : krb5-libs-1.16-21.fc28.x86_64                                                          5/5

Installed:
  krb5-workstation.x86_64 1.16.1-2.fc28         libkadm5.x86_64 1.16.1-2.fc28
  libss.x86_64 1.43.8-2.fc28

Upgraded:
  krb5-libs.x86_64 1.16.1-2.fc28


And now we’ll do the magic trick: we’ll tell Kerberos to store the TGT inside /tmp/tgt:

[root@101ff1a35d4d /]# export KRB5CCNAME=FILE:/tmp/tgt


[root@101ff1a35d4d /]# kinit ttomecek@FEDORAPROJECT.ORG
Password for ttomecek@FEDORAPROJECT.ORG:
[root@101ff1a35d4d /]# klist
Ticket cache: FILE:/tmp/tgt
Default principal: ttomecek@FEDORAPROJECT.ORG

Valid starting     Expires            Service principal
05/14/18 13:12:57  05/15/18 13:12:51  krbtgt/FEDORAPROJECT.ORG@FEDORAPROJECT.ORG
        renew until 05/21/18 13:12:51

Obviously, this is insecure since everyone can find that file easily. Please make sure that your containers are secure and you know what you are running inside.
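As an alternative to exporting KRB5CCNAME in every shell, the same default can be set in krb5.conf via the standard default_ccache_name option. A sketch, assuming a file cache under /tmp is acceptable for your threat model (%{uid} expands to the user's numeric UID):

```ini
[libdefaults]
    default_ccache_name = FILE:/tmp/krb5cc_%{uid}
```

With this in the bind-mounted /etc/krb5.conf, kinit and klist inside the container use the file cache without any per-shell environment setup.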

Also doing the same thing with docker:

+ docker run -it -v /etc/krb5.conf:/etc/krb5.conf -v /etc/krb5.conf.d/:/etc/krb5.conf.d/ fedora-with-krb5-workstation bash
[ddbd@a9c95325be85 ~]$ export KRB5CCNAME=FILE:/tmp/tgt
[ddbd@a9c95325be85 ~]$ kinit ttomecek@FEDORAPROJECT.ORG
Password for ttomecek@FEDORAPROJECT.ORG:
[ddbd@a9c95325be85 ~]$ klist
Ticket cache: FILE:/tmp/tgt
Default principal: ttomecek@FEDORAPROJECT.ORG

Valid starting       Expires              Service principal
05/14/2018 13:08:29  05/15/2018 13:08:14  krbtgt/FEDORAPROJECT.ORG@FEDORAPROJECT.ORG
        renew until 05/21/2018 13:08:14

Fedora 28: Better smart card support in OpenSSH

Posted by Fedora Magazine on May 14, 2018 11:38 AM

Smart card support was introduced around 2010 with OpenSSH 5.4. The initial scope was restricted to RSA keys, the only key type supported in OpenSSH at that time other than legacy DSA keys. Previously, users needed to specify the PKCS#11 driver for the smart card. Additionally, the OpenSSH client had to query the server with all the keys stored on the card until an acceptable key was found. This slowed down authentication and revealed public keys to the server unnecessarily (for example, if a single card holds keys for distinct servers).

Over the years, OpenSSH gained support for additional authentication keys, such as ECDSA and later EdDSA. However, the smart card subsystem has not changed much since the early days. Cards with ECDSA keys are not yet supported, and there is no option for the user to specify the key to use when connecting to a server. Fedora 28 addresses these limitations. This article describes these improvements, the background behind them, and how they can be used.

Support for ECDSA keys

OpenSSH has supported ECDSA keys since OpenSSH 5.7 (released in 2011), and smart cards such as the Yubikey and Nitrokey support them widely. Nevertheless, ECDSA support was not reflected in the OpenSSH PKCS#11 subsystem, which is still RSA-only. A patch to support ECDSA keys in PKCS#11 was submitted to the OpenSSH project in 2015, but despite a long history of revisions it has still not been incorporated. The motivation to use ECDSA keys can be either to avoid hardware RSA key vulnerabilities (ROCA: CVE-2017-15361) or to use shorter keys for faster connection times.

In Fedora 28 we include that patch set to allow you to use ECDSA keys from your security tokens. You can list them with ssh-keygen as any other keys:

$ ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so
ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHA[...]k3J0hkYnnsM=

Or use them for authentication, if the server is configured to accept this key:

$ ssh -vvv -I /usr/lib64/pkcs11/opensc-pkcs11.so example.com
debug1: Offering public key: ECDSA SHA256:5BrE5wevULd5wipj2bXYAr4gXQIICiywfV+kF5hA9X8 ECDSA jjelen@example.com
debug1: Offering public key: ECDSA SHA256:q4zxb5Woucr1HlUSh9Fq66sKdv3r5hlxIIqQQtaQKy4 pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
debug1: Server accepts key: pkalg ecdsa-sha2-nistp256 blen 104
debug2: input_userauth_pk_ok: fp SHA256:q4zxb5Woucr1HlUSh9Fq66sKdv3r5hlxIIqQQtaQKy4
debug3: sign_and_send_pubkey: ECDSA SHA256:q4zxb5Woucr1HlUSh9Fq66sKdv3r5hlxIIqQQtaQKy4
Enter PIN for 'PIV Card Holder pin (PIV_II)': 
debug1: Authentication succeeded (publickey).

Specify the key to use

Historically, applications referenced keys in PKCS#11 modules in various ad-hoc ways, because there was no standard way to do so. Alternatively, some applications simply used everything a PKCS#11 module made available.

This works fine if the smart card is a company-issued one with a few keys. But it does not work well if you have several private keys for different services, or several security tokens aggregated in a single system-wide PKCS#11 module. In such cases, you need to map private keys to services rather than leave it to the tool to try all of them sequentially. Selecting an explicit key is always faster and prevents exceeding the server's maximum number of authentication attempts. Furthermore, it allows the use of different identities, for example on GitHub.

PKCS#11 URIs, defined in RFC 7512, provide a standard way to identify a specific object or key in a PKCS#11 module according to its attributes. When supported universally, this lets you configure the whole system and its applications with the same configuration string in the form of a URI.

Additionally, Fedora 28 provides the p11-kit proxy, which acts as a wrapper over the smart card drivers registered in the system. We took advantage of it, so you can avoid spelling out the path to the shared object altogether. The URI scheme lets you specify a PKCS#11 URI anywhere a path to a local private key file would appear, including the configuration file, ssh-add, or command-line ssh.


You can list all the keys provided by OpenSC PKCS#11 module (including their PKCS#11 URIs):

$ ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so
ssh-rsa AAAAB3NzaC1yc2E...KKZMzcQZzx pkcs11:id=%02;object=SIGN%20pubkey;token=SSH%20key;manufacturer=piv_II?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
ecdsa-sha2-nistp256 AAA...J0hkYnnsM= pkcs11:id=%01;object=PIV%20AUTH%20pubkey;token=SSH%20key;manufacturer=piv_II?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so

To connect to the server example.com using the ECDSA key from above (referenced by ID), you can use just a subset of the URI, which uniquely references our key:

$ ssh -i pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so example.com
Enter PIN for 'SSH key':
[example.com] $

You can use the same URI string in ~/.ssh/config to make the configuration permanent:

$ cat ~/.ssh/config
IdentityFile pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
$ ssh example.com
Enter PIN for 'SSH key':
[example.com] $
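The per-identity use case mentioned earlier (different keys for different services) maps naturally onto per-host blocks in the same file. A sketch with hypothetical hosts and key IDs:

```
Host github.com
    IdentityFile pkcs11:id=%01
Host internal.example.com
    IdentityFile pkcs11:id=%02?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so
```

Each connection then offers only the one key mapped to that host, rather than cycling through everything on the card.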

Since the OpenSC PKCS#11 module is registered with p11-kit by default in Fedora, and OpenSSH uses it via p11-kit-proxy, you can simplify the above commands:

$ ssh -i pkcs11:id=%01 example.com
Enter PIN for 'SSH key':
[example.com] $

The ssh-agent interface accepts the same URIs. For example, you can add only the ECDSA key to the agent and connect to example.com (this was not previously possible):

$ ssh-add pkcs11:id=%01
Enter passphrase for PKCS#11: 
Card added: pkcs11:id=%01
$ ssh example.com
[example.com] $

If you skip the ID, OpenSSH will load all the keys that are available in the proxy module, which usually matches the previous behavior, but is much less typing:

$ ssh -i pkcs11: example.com
Enter PIN for 'SSH key':
[example.com] $

Known issues

While smart cards generally work fine, there are some corner cases that are not handled ideally yet. Most of them are tracked in the upstream Bugzilla.

Among the most painful: keys stored in ssh-agent do not support reauthentication. Once you physically remove the token, you need to remove it from ssh-agent and re-add it. The agent's interface does not provide a way to do this automatically, so there is still some work to be done.

The ssh-agent also does not allow you to use keys that require PIN verification before every signature (the ALWAYS_AUTHENTICATE flag). This is very common in PIV cards to ensure non-repudiation of digital signatures. In this case, the easy workaround is to use a different key that does not enforce this policy.


The new OpenSSH comes with a few improvements for private keys stored on smart cards and security tokens. These changes make security tokens easier to use for new users and let them take advantage of secure hardware storage, in comparison to keys on disk. They also integrate OpenSSH with the rest of Fedora, which already supports the RFC 7512 identifiers for objects stored in tokens and uses p11-kit for smart card driver registration.

If you find an issue with the above functionality, or have an idea for what could be improved, feel free to comment, open a bug, or start a discussion on the Fedora or OpenSSH mailing lists.

The aforementioned features are available in Fedora, but not yet in the upstream OpenSSH releases. We include these changes in the hope that they will be useful to users and will speed their upstream adoption.

Share Certs Data into a container.

Posted by Dan Walsh on May 14, 2018 07:04 AM

Last week, on the Fedora users list, someone asked a question about getting SELinux to work with a container. The mailer said that he was sharing certs into the container but SELinux was blocking access.

Here are the AVCs that were reported.

Fri May 11 03:35:19 2018 type=AVC msg=audit(1526024119.640:1052): avc:  denied  { write } for   pid=13291 comm="touch" name="php-fpm.access" dev="dm-2" ino=20186094 scontext=system_u:system_r:container_t:s0:c581,c880 tcontext=system_u:object_r:user_home_t:s0 tclass=file permissive=0 

Looks like there is a container (container_t) that is attempting to write some content in your home directory (user_home_t).
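Denials like this are easier to read once you pull out the two contexts; a small grep sketch, using a shortened copy of the AVC text above:

```shell
# Extract the source (process) and target (file) SELinux contexts from an AVC line.
avc='avc:  denied  { write } for pid=13291 comm="touch" scontext=system_u:system_r:container_t:s0:c581,c880 tcontext=system_u:object_r:user_home_t:s0 tclass=file'
echo "$avc" | grep -o 'scontext=[^ ]*'   # who was denied (the container process)
echo "$avc" | grep -o 'tcontext=[^ ]*'   # what it tried to write (the home dir label)
```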

I surmised that the mailer must have been volume mounting a directory from his homedir into the container.

I responded to him with:

Private to container:

If these certs are only going to be used within one container you should add a :Z to the volume mount. 

podman run -d -v ~/container-Cert-Dir:/PATHINCONTAINER:Z fedora-app

Or if you are still using Docker.

docker run -d -v ~/container-Cert-Dir:/PATHINCONTAINER:Z fedora-app

This causes the container runtime to relabel the volume with an SELinux label private to the container.

Shared with other Containers

If you want the container-Cert-Dir to be shared between multiple containers, and it can be shared read-only, I would add the :z,ro flags:

podman run -d -v ~/container-Cert-Dir:/PATHINCONTAINER:z,ro fedora-app

Using Docker.

docker run -d -v ~/container-Cert-Dir:/PATHINCONTAINER:z,ro fedora-app

This causes the container runtime(s) to relabel the volume with an SELinux label that can be shared between containers. Of course, if the containers need to write to the directory, you would remove the ,ro.

Shared with other confined domains on host

If you want to share the cert directory with other confined apps on your system, then you will need to disable SELinux separation in the container.

podman run -d --security-opt label=disable -v ~/container-Cert-Dir:/PATHINCONTAINER fedora-app

Using Docker.

docker run -d --security-opt label=disable -v ~/container-Cert-Dir:/PATHINCONTAINER fedora-app

This causes the container runtime(s) to launch the container with an unconfined type, allowing the container to read and write the volume without relabeling.

Episode 96 - Are legal backdoors a good idea?

Posted by Open Source Security Podcast on May 14, 2018 06:43 AM
Josh and Kurt talk about backdoors that have been put into code and products on purpose. We talk about unlocking phones, and about encryption backdoors with a focus on why they won't work.


Show Notes

X server pointer acceleration analysis - part 4

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In the first three parts, I covered the X server and synaptics pointer acceleration curves and how libinput compares to the X server pointer acceleration curve. In this post, I will compare libinput to the synaptics acceleration curve.

Comparison of synaptics and libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the one used for touchpads. So let's compare the synaptics curve with the libinput curve at the default configurations:

But this one doesn't tell the whole story, because the touchpad accel for libinput actually changes once we get faster. So here are the same two curves, but this time with the range up to 1000mm/s. These two graphs show that libinput is both very different from and similar to synaptics. Both curves have an acceleration factor of less than 1 for the majority of speeds; they both decelerate the touchpad more than they accelerate it. synaptics has two factors it sticks to and a short curve; libinput has a short deceleration curve and its plateau is the same or lower than synaptics' for the most part. Once the threshold is hit at around 250 mm/s, libinput's acceleration keeps increasing until it hits a maximum much later.

So, anything under ~20mm/s, libinput should be the same as synaptics (ignoring the <7mm/s deceleration). For anything less than 250mm/s, libinput should be slower. I say "should be" because that is not actually the case, synaptics is slower so I suspect the server scaling slows down synaptics even further. Hacking around in the libinput code, I found that moving libinput's baseline to 0.2 matches the synaptics cursor's speed. However, AFAIK that scaling depends on the screen size, so your mileage may vary.

Comparing configuration settings

Let's overlay the libinput speed toggles. In Part 2 we've seen the synaptics toggles and they're open-ended, so it's a bit hard to pick a specific set to go with to compare. I'll be using the same combined configuration options from the diagram there.

And we need the diagram from 0-1000mm/s as well. There isn't much I can talk about here in direct comparison, the curves are quite different and the synaptics curves vary greatly with the configuration options (even though the shape remains the same).


It's fairly obvious that the acceleration profiles are very different once we depart from the default settings. Most notably, only libinput's slowest speed setting matches the 0.2 speed that is the synaptics default setting. In other words, if your touchpad is too fast compared to synaptics, it may not be possible to slow it down sufficiently. Likewise, even at the fastest speed, the baseline is well below the synaptics baseline for e.g. 0.6 [1], so if your touchpad is too slow, you may not be able to speed it up sufficiently (at least for low speeds). That problem won't exist for the maximum acceleration factor, the main question here is simply whether they are too high. Answer: I don't know.

So the base speed of the touchpad in libinput needs a higher range, that's IMO a definitive bug that I need to work on. The rest... I don't know. Let's see how we go.

[1] A configuration I found suggested in some forum when googling for MinSpeed, so let's assume there's at least one person out there using it.

X server pointer acceleration analysis - part 3

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In Part 1 and Part 2 I showed the X server acceleration code as used by the evdev and synaptics drivers. In this part, I'll show how it compares against libinput.

Comparison to libinput

libinput has multiple different pointer acceleration curves, depending on the device. In this post, I will only consider the default one used for mice. A discussion of the touchpad acceleration curve comes later. So, back to the graph of the simple profile. Let's overlay this with the libinput pointer acceleration curve:

It turns out that libinput's pointer acceleration curve, which was mostly modeled after the xserver behaviour, does roughly match the xserver behaviour. Note that libinput normalizes to 1000dpi (provided MOUSE_DPI is set correctly) and thus the curves only match this way for 1000dpi devices.

libinput's deceleration is slightly different but I doubt it is really noticeable. The plateau of no acceleration is virtually identical, i.e. at slow speeds libinput moves like the xserver's pointer does. Likewise for speeds above ~33mm/s, libinput and the server accelerate by the same amount. The actual curve is slightly different. It is a linear curve (I doubt that's noticeable) and it doesn't have that jump in it. The xserver acceleration maxes out at roughly 20mm/s. The only difference in acceleration is for the range of 10mm/s to 33mm/s.

30mm/s is still a relatively slow movement (just move your mouse by 30mm within a second, it doesn't feel fast). This means that for all but slow movements, the current server and libinput acceleration provides but a flat acceleration at whatever the maximum acceleration is set to.

Comparison of configuration options

The biggest difference libinput has to the X server is that it exposes a single knob of normalised continuous configuration (-1.0 == slowest, 1.0 == fastest). It relies on settings like MOUSE_DPI to provide enough information to map a device into that normalised range.

Let's look at the libinput speed settings and their effect on the acceleration profile (libinput 1.10.x).

libinput's speed setting is a combination of changing thresholds and accel at the same time. The faster you go, the sooner acceleration applies and the higher the maximum acceleration is. For very slow speeds, libinput provides deceleration. Noticeable here though is that the baseline speed is the same until we get to speed settings of less than -0.5 (where we have an effectively flat profile anyway). So up to the (speed-dependent) threshold, the mouse speed is always the same.

Let's look at the comparison of libinput's speed setting to the accel setting in the simple profile:

Clearly obvious: libinput's range is a lot smaller than what the accel setting allows (that one is effectively unbounded). This obviously applies to the deceleration as well. I'm not posting the threshold comparison, as Part 1 shows it does not affect the maximum acceleration factor anyway.


So, where does this leave us? I honestly don't know. The curves are different but the only paper I could find on comparing acceleration curves is Casiez and Roussel's 2011 UIST paper. It provides a comparison of the X server acceleration with the Windows and OS X acceleration curves [1]. It shows quite a difference between the three systems but the authors note that no specific acceleration curve is definitely superior. However, the most interesting bit here is that both the Windows and the OS X curve seem to be constant acceleration (with very minor changes) rather than changing the curve shape.

Either way, there is one possible solution for libinput to implement: to change the base plateau with the speed. Otherwise libinput's acceleration curve is well defined for the configurable range. And a maximum acceleration factor of 3.5 is plenty for a properly configured mouse (generally anything above 3 is tricky to control). AFAICT, the main issues with pointer acceleration come from mice that either don't have MOUSE_DPI set or trackpoints which are, unfortunately, a completely different problem.

I'll probably also give the windows/OS X approaches a try (i.e. same curve, different constant deceleration) and see how that goes. If it works well, that may be a solution because it's easier to scale into a large range. Otherwise, *shrug*, someone will have to come up with a better solution.

[1] I've never been able to reproduce the same gain (== factor) but at least the shape and x axis seems to match.

X server pointer acceleration analysis - part 2

Posted by Peter Hutterer on May 14, 2018 05:08 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

In Part 1 I showed the X server acceleration code as used by the evdev driver (which leaves all acceleration up to the server). In this part, I'll show the acceleration code as used by the synaptics touchpad driver. This driver installs a device-specific acceleration profile but beyond that the acceleration is... difficult. The profile itself is not necessarily indicative of the real movement, the coordinates are scaled between device-relative, device-absolute, screen-relative, etc. so often that it's hard to keep track of what the real delta is. So let's look at the profile only.

Diagram generation

Diagrams were generated by gnuplot, parsing .dat files generated by the ptrveloc tool in the git repo. Helper scripts to regenerate all data are in the repo too. Default values unless otherwise specified:

  • MinSpeed: 0.4
  • MaxSpeed: 0.7
  • AccelFactor: 0.04
  • dpi: 1000 (used for converting units to mm)
All diagrams are limited to 100 mm/s and a factor of 5 so they are directly comparable. From earlier testing I found movements above 300 mm/s are rare; once you hit 500 mm/s the acceleration doesn't really matter that much anymore, you're going to hit the screen edge anyway.

The choice of 1000 dpi is a difficult one. It makes the diagrams directly comparable to those in Part 1, but touchpads have a great variety in their resolution. For example, an ALPS DualPoint touchpad may have resolutions of 25-32 units/mm. A Lenovo T440s has a resolution of 42 units/mm over PS/2 but 20 units/mm over the newer SMBus/RMI4 protocol. This is the same touchpad. Overall it doesn't actually matter that much though, see below.

The acceleration profile

This driver has a custom acceleration profile, configured by the MinSpeed, MaxSpeed and AccelFactor options. The former two put a cap on the factor but MinSpeed also adjusts (overwrites) ConstantDeceleration. The AccelFactor defaults to a device-specific size based on the device diagonal.

Let's look at the defaults of 0.4/0.7 for min/max and 0.04 (default on my touchpad) for the accel factor:

The simple profile from part 1 is shown in this graph for comparison. The synaptics profile is printed as two curves, one for the profile output value and one for the real value used on the delta. Unlike the simple profile you cannot configure ConstantDeceleration separately, it depends on MinSpeed. Thus the real acceleration factor is always less than 1, so the synaptics driver doesn't accelerate as such, it controls how much the deltas are decelerated.

The actual acceleration curve is just a plain old linear interpolation between the min and max acceleration values. If you look at the curves closer you'll find that there is no acceleration up to 20mm/s and flat acceleration from 25mm/s onwards. Only in this small speed range does the driver adjust its acceleration based on input speed. Whether this is intentional or just happened, I don't know.
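As a minimal sketch of that clamped interpolation, treating the default MinSpeed/MaxSpeed values as the factor endpoints and the 20mm/s and 25mm/s points observed above as the flat regions (the function name and parameterisation are mine, not the driver's):

```python
def synaptics_factor(speed_mm_s, min_accel=0.4, max_accel=0.7,
                     lower=20.0, upper=25.0):
    # Flat at the MinSpeed factor below ~20mm/s, flat at the MaxSpeed
    # factor above ~25mm/s, plain linear interpolation in between.
    if speed_mm_s <= lower:
        return min_accel
    if speed_mm_s >= upper:
        return max_accel
    t = (speed_mm_s - lower) / (upper - lower)
    return min_accel + t * (max_accel - min_accel)

print(synaptics_factor(10.0))  # flat MinSpeed region: 0.4
print(synaptics_factor(40.0))  # flat MaxSpeed region: 0.7
```

Only inputs between 20 and 25 mm/s actually exercise the interpolation; everything else sits on one of the two plateaus.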

The accel factor depends on the touchpad x/y axis. On my T440s using PS/2, the factor defaults to 0.04. If I get it to use SMBus/RMI4 instead of PS/2, that same device has an accel factor of 0.09. An ALPS touchpad may have a factor of 0.13, based on the min/max values for the x/y axes. These devices all have different resolutions though, so here are the comparison graphs taking the axis range and the resolution into account:

The diagonal affects the accel factor, so these three touchpads (two curves are the same physical touchpad, just using a different bus) get slightly different acceleration curves. They're more similar than I expected though, and for the rest of this post we can get away with just looking at the 0.04 default value from my touchpad.

Note that due to how synaptics is handled in the server, this isn't the whole story, there is more coordinate scaling etc. happening after the acceleration code. The synaptics acceleration profile also does not compensate for uneven x/y resolutions; this is handled in the server afterwards. On touchpads with uneven resolutions the velocity thus depends on the vector, moving along the x axis provides differently sized deltas than moving along the y axis. However, anything applied later isn't speed dependent but merely a constant scale, so these curves are still a good representation of what happens.

The effect of configurations

What does the acceleration factor do? It changes when acceleration kicks in and how steep the acceleration is.

And how do the min/max values play together? Let's adjust MinSpeed but leave MaxSpeed at 0.7.

MinSpeed lifts the baseline (i.e. the minimum acceleration factor), somewhat expected from a parameter named this way. But it looks again like we have a bug here. When MinSpeed and MaxSpeed are close together, our acceleration actually decreases once we're past the threshold. So counterintuitively, a higher MinSpeed can result in a slower cursor once you move faster.

MaxSpeed is not too different here:

The same bug is present, if the MaxSpeed is smaller or close to MinSpeed, our acceleration actually goes down. A quick check of the sources didn't indicate anything enforcing MinSpeed < MaxSpeed either. But otherwise MaxSpeed lifts the maximum acceleration factor.

These graphs look at the options in separation, in reality users would likely configure both MinSpeed and MaxSpeed at the same time. Since both have an immediate effect on pointer movement, trial and error configuration is simple and straightforward. Below is a graph of all three adjusted semi-randomly:

No surprises in there, the baseline (and thus slowest speed) changes, the maximum acceleration changes and how long it takes to get there changes. The curves vary quite a bit though, so without knowing the configuration options, it's impossible to predict how a specific touchpad behaves.


The graphs above show the effect of configuration options in the synaptics driver. I purposely didn't put any specific analysis in and/or compare it to libinput. That comes in a future post.

X server pointer acceleration analysis - part 1

Posted by Peter Hutterer on May 14, 2018 05:07 AM

This post is part of a four part series: Part 1, Part 2, Part 3, Part 4.

Over the last few days, I once again tried to tackle pointer acceleration. After all, I still get plenty of complaints about how terrible libinput is and how the world was so much better without it. So I once more tried to understand the X server's pointer acceleration code. Note: the evdev driver doesn't do any acceleration, it's all handled in the server. Synaptics will come in part two, so this here focuses mostly on pointer acceleration for mice/trackpoints.

After a few failed attempts at live analysis [1], I finally succeeded in extracting the pointer acceleration code into something that could be visualised. That helped me a great deal in going back and understanding the various bits and how they fit together.

The approach was: copy the ptrveloc.(c|h) files into a new project, set up a meson.build file, #define all the bits that are assumed to be there and voila, here's your library. Now we can build basic analysis tools provided we initialise all the structs the pointer accel code needs correctly. I think I succeeded. The git repo is here if anyone wants to check the data. All scripts to generate the data files are in the repository.

A note on language: the terms "speed" and "velocity" are subtly different but for this post the difference doesn't matter. The code uses "velocity" but "speed" is more natural to talk about, so just assume equivalence.

The X server acceleration code

There are 15 configuration options for pointer acceleration (ConstantDeceleration, AdaptiveDeceleration, AccelerationProfile, ExpectedRate, VelocityTrackerCount, Softening, VelocityScale, VelocityReset, VelocityInitialRange, VelocityRelDiff, VelocityAbsDiff, AccelerationProfileAveraging, AccelerationNumerator, AccelerationDenominator, AccelerationThreshold). Basically, every number is exposed as a configurable knob. The acceleration code is a product of a time when we were handing out configuration options like participation medals at a children's footy tournament. Assume that for the rest of this blog post, every behavioural description ends with "unless specific configuration combinations apply". In reality, I think only four options are commonly used: AccelerationNumerator, AccelerationDenominator, AccelerationThreshold, and ConstantDeceleration. These four have immediate effects on the pointer movement and thus it's easy to do trial-and-error configuration.

The server has different acceleration profiles (called the 'pointer transfer function' in the literature). Each profile is a function that converts speed into a factor. That factor is then combined with other things like constant deceleration, but eventually our output delta forms as:

deltaout(x, y) = deltain(x, y) * factor * deceleration
The output delta is passed back to the server and the pointer saunters over by a few pixels, happily bumping into any screen edge on the way.

The input for the acceleration profile is a speed in mickeys, a threshold (in mickeys) and a max accel factor (unitless). Mickeys are a bit tricky. This means the acceleration is device-specific: the deltas for a mouse at 1000 dpi are 25% larger than the deltas for a mouse at 800 dpi (assuming the same physical distance and speed). The "Resolution" option in evdev can work around this, but by default this means that the acceleration factor is (on average) higher for high-resolution mice for the same physical movement. It also means that that xorg.conf snippet you found on stackoverflow probably does not do the same on your device.
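The dpi proportionality can be checked in a couple of lines (the helper name and constant are mine, purely for illustration):

```python
MM_PER_INCH = 25.4

def delta_for(distance_mm, dpi):
    # The same physical movement produces deltas proportional to dpi.
    return distance_mm / MM_PER_INCH * dpi

ratio = delta_for(10, 1000) / delta_for(10, 800)
print(round(ratio, 2))  # 1.25
```

So the same 10mm swipe yields 25% more mickeys at 1000 dpi than at 800 dpi, and the profile sees a correspondingly higher "speed".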

The second problem with mickeys is that they require a frequency to map to a physical speed. If a device sends events every N ms, delta/N gives us a speed in units/ms. But we need mickeys for the profiles. Devices generally have a fixed reporting rate and the speed of each mickey is the same as (units/ms * reporting rate). This rate defaults to 10 in the server (the VelocityScale default value) and thus matches a device reporting at 100Hz (a discussion of this comes later). All graphs below were generated with this default value.
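As a sketch, that conversion looks like this (the function name and example numbers are mine; the default scale of 10 is the server's):

```python
def speed_in_mickeys(delta_units, interval_ms, velocity_scale=10):
    # delta/interval gives units per millisecond; the server then
    # multiplies by its velocity scale (default 10, i.e. it assumes
    # a device reporting at 100Hz).
    return (delta_units / interval_ms) * velocity_scale

# A device reporting every 8ms (125Hz) that moved 4 units:
print(speed_in_mickeys(4, 8))  # 5.0
```

Note how a 125Hz device is already mis-scaled by the 100Hz assumption baked into the default value.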

Back to the profile function and how it works: the threshold (usually) defines the minimum speed at which acceleration kicks in. The max accel factor (usually) limits the acceleration. So the simplest algorithm is

if (velocity < threshold)
    return base_velocity;
factor = calculate_factor(velocity);
if (factor > max_accel)
    return max_accel;
return factor;

In reality, things are somewhere between this simple and "whoops, what have we done".
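A runnable version of that simple clamping logic, using the defaults from the diagram section (threshold 4, max accel 2); the linear calculate_factor here is a made-up stand-in for the server's real curve:

```python
def calculate_factor(velocity):
    # Made-up linear ramp standing in for the server's actual curve.
    return 1.0 + 0.1 * velocity

def simple_profile(velocity, threshold=4.0, max_accel=2.0, base_velocity=1.0):
    # No acceleration below the threshold.
    if velocity < threshold:
        return base_velocity
    factor = calculate_factor(velocity)
    # Cap at the configured maximum acceleration factor.
    return min(factor, max_accel)

print(simple_profile(2.0))    # below threshold: 1.0
print(simple_profile(100.0))  # capped at max_accel: 2.0
```

Everything interesting happens between the threshold and the point where the cap kicks in; outside that window the profile is flat.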

Diagram generation

Diagrams were generated by gnuplot, parsing .dat files generated by the ptrveloc tool in the git repo. Helper scripts to regenerate all data are in the repo too. Default values unless otherwise specified:

  • threshold: 4
  • accel: 2
  • dpi: 1000 (used for converting units to mm)
  • constant deceleration: 1
  • profile: classic
All diagrams are limited to 100 mm/s and a factor of 5 so they are directly comparable. From earlier testing I found movements above 300 mm/s are rare; once you hit 500 mm/s the acceleration doesn't really matter that much anymore, you're going to hit the screen edge anyway.

Acceleration profiles

The server provides a number of profiles, but I have seen very little evidence that people use anything but the default "Classic" profile. Synaptics installs a device-specific profile. Below is a comparison of the profiles just so you get a rough idea what each profile does. For this post, I'll focus on the default Classic only.

First thing to point out here: if you want your pointer to travel to Mars, the linear profile is what you should choose. This profile is unusable without further configuration to bring the incline to a more sensible level. Only the simple and limited profiles have a maximum factor, all others increase acceleration indefinitely. The faster you go, the more it accelerates the movement. I find them completely unusable at anything but low speeds.

The classic profile transparently maps to the simple profile, so the curves are identical.

Anyway, as said above, profile changes are rare. The one we care about is the default profile: the classic profile which transparently maps to the simple profile (SimpleSmoothProfile() in the source).

Looks like there's a bug in the profile formula. At the threshold value it jumps from 1 to 1.5 before the curve kicks in. This code was added in ~2008, apparently no-one noticed this in a decade.

The profile has deceleration (accel factor < 1 and thus decreasing the deltas) at slow speeds. This provides extra precision at slow speeds without compromising pointer speed at higher physical speeds.

The effect of config options

Ok, now let's look at the classic profile and the configuration options. What happens when we change the threshold?

First thing that sticks out: one of these is not like the others. The classic profile changes to the polynomial profile at thresholds less than 1.0. *shrug* I think there's some historical reason, I didn't chase it up.

Otherwise, the threshold not only defines when acceleration starts kicking in but also affects the steepness of the curve. So a higher threshold also means acceleration kicks in more slowly as the speed increases. It has no effect on the low-speed deceleration.

What happens when we change the max accel factor? This factor is actually set via the AccelerationNumerator and AccelerationDenominator options (because floats used to be more expensive than buying a house). At runtime, the Xlib function of your choice is XChangePointerControl(). That's what all the traditional config tools use (xset, your desktop environment pre-libinput, etc.).

First thing that sticks out: one is not like the others. When max acceleration is 0, the factor is always zero for speeds exceeding the threshold. No user impact though, the server discards factors of 0.0 and leaves the input delta as-is.

Otherwise it's relatively unexciting, it changes the maximum acceleration without changing the incline of the function. And it has no effect on deceleration. Because the curves aren't linear ones, they don't overlap 100% but meh, whatever. The higher values are cut off in this view, but they just look like a larger version of the visible 2 and 4 curves.

Next config option: ConstantDeceleration. This one is handled outside of the profile but the code is easy enough to follow; it's a basic multiplier applied together with the factor. (I cheated and just did this in gnuplot directly.)

Easy to see what happens with the curve here, it simply stretches vertically without changing the properties of the curve itself. If the deceleration is greater than 1, we get constant acceleration instead.

All this means that, with the default profile, we have 3 ways of adjusting it. What we can't directly change is the incline, i.e. the actual process of acceleration remains the same.

Velocity calculation

As mentioned above, the profile applies to a velocity so obviously we need to calculate that first. This is done by storing each delta and looking at their direction and individual velocity. As long as the direction remains roughly the same and the velocity between deltas doesn't change too much, the velocity is averaged across multiple deltas - up to 16 in the default config. Of course you can change whether this averaging applies, the max time deltas or velocity deltas, etc. I'm honestly not sure anyone ever used any of these options intentionally or with any real success.

Velocity scaling was explained above (units/ms * reporting rate). The default value for the reporting rate is 10, equivalent to 100Hz. Of the 155 frequencies currently defined in 70-mouse.hwdb, only one is 100 Hz. The most common one here is 125Hz, followed by 1000Hz, followed by 166Hz and 142Hz. Now, the vast majority of devices don't have an entry in the hwdb file, so this data does not represent a significant sample set. But for modern mice, the default velocity scale of 10 is probably off by between 25% and a factor of 10. While this doesn't mean much for the local example (users generally just move the numbers around until they're happy enough), it means that the actual values are largely meaningless for anyone but those with the same hardware.

Of note: the synaptics driver automatically sets VelocityScale to 80Hz. This is correct for the vast majority of touchpads.


The graphs above show the X server's pointer acceleration for mice, trackballs and other devices and the effects of the configuration toggles. I purposely did not put any specific analysis in and/or comparison to libinput. That will come in a future post.

[1] I still have a branch somewhere where the server prints yaml to the log file which can then be extracted by shell scripts, passed on to python for processing and ++++ out of cheese error. redo from start ++++

PHP version 7.1.18RC1 and 7.2.6RC1

Posted by Remi Collet on May 13, 2018 07:20 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.2.6RC1 are available as SCL in remi-test repository and as base packages in the remi-php72-test repository for Fedora 25-27 and Enterprise Linux.

RPM of PHP version 7.1.18RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26-27 or remi-php71-test repository for Fedora 25 and Enterprise Linux.

PHP version 7.0 is now in security mode only, so no more RC will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.2.2RC1 is also available in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.5.

The RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php71, php72)

Base packages (php)

My home automation setup

Posted by Guillaume Kulakowski on May 13, 2018 06:50 PM

I started getting interested in home automation a little over 2 years ago, while building my house. However, I only took the plunge when a deal came up on Fibaro FGSD-002 smoke detectors. I then recycled a Raspberry Pi 2B into a home automation box by fitting it with a […]

The article My home automation setup first appeared on Guillaume Kulakowski's blog.

Xilinx Virtex 7 FPGA bitstream reverse engineered

Posted by Richard W.M. Jones on May 13, 2018 02:29 PM

While my article on HN is getting no traction I might as well post on here some fantastic news: The Xilinx Virtex 7 FPGA bitstream has been reverse engineered by Clifford Wolf.

For some context this is a very popular and cheap series of FPGA devices. For example you can buy the Arty board which has one of these FPGAs for $99, or the slightly more advanced Nexys 4 DDR for $265.

Currently you must use the Xilinx Vivado tool which is a 40 GB download [no, that isn’t a typo], requires a paid license to unlock the full features, and is generally awful to use.

This work should eventually lead to a complete open source toolchain to program these devices, just like Project IceStorm for the Lattice devices.

Mastering Bash: Working with text

Posted by Alvaro Castillo on May 13, 2018 10:35 AM

Working with text

In this installment we are going to see how to work with text in Bash. We have a series of commands for working with it that will help us as we advance along our path through this beloved world that is Free Software.

The less(1) command

This command is widely used to view text from the beginning. We can forget about running cat(1) and scrolling back up through the wall of text, either with the scroll wheel in a graphical terminal or by using Shift + Page Up/P...

InvoicePrinter 1.2

Posted by Josef Strzibny on May 13, 2018 10:11 AM

A new version of my Ruby gem for generating PDF invoices, InvoicePrinter, is out! This time it brings a bundled server that can be handy for applications not running on Ruby.

Not every app out there is a Ruby application, and I wanted people on different stacks to be able to benefit from the super simple PDF invoicing that InvoicePrinter enables. This is the reason why I implemented JSON support and a command line in version 1.1 and why I am adding the server in 1.2. You can run it as a standalone server or mount it in any Rack application and use its JSON API to generate the documents.

Here is the documentation.

Read more about how to use this feature in my previous posts:

And as always I would love to hear your feedback and ideas.

Flisol 2018 Slides: Fedora loves Python (spanish)

Posted by Alberto Rodriguez (A.K.A bt0) on May 13, 2018 02:06 AM

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="420" scrolling="no" src="http://slides.com/albertorodriguezsanchez/prueba/embed" width="576"></iframe>

github Repo:


Flisol 2018 Slides: Fedora for Developers (spanish)

Posted by Alberto Rodriguez (A.K.A bt0) on May 13, 2018 02:01 AM

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="420" scrolling="no" src="http://slides.com/albertorodriguezsanchez/prueba-1/embed" width="576"></iframe>

FLISoL 2018 Mexico report

Posted by Alberto Rodriguez (A.K.A bt0) on May 13, 2018 01:49 AM

In Mexico the Fedora Project had at least 2 events, listed as follows:

El Rule

Located in the heart of downtown Mexico City, "El Rule" is a cultural center with a vast number of activities; this year it was one of the most publicized FLISoL venues in the city.

The Fedora Project was present with one of the most prominent and famous Fedora Ambassadors in México: Efren Robledo (A.K.A Sr. Kraken). During the event, he promoted Fedora Linux installations and built strong ties of friendship with other communities like Mozilla Mexico, Red Hat Mexico, openSUSE and more.

At the same time, but far far away:

Toluca Institute of Technology

At the Toluca Institute of Technology (Spanish), Alberto Rodriguez (A.K.A bt0dotNinja) gave a talk on how to use Fedora for development and a workshop introducing Python and explaining why Fedora is a Python-centric project.

The git repository with the talks and workshops is here: https://github.com/bt0DotNinja/FLISoL2018_Toluca

Cheers 🙂

Testing out Sumatra: a tool for managing iterations of simulations/analyses

Posted by Ankur Sinha "FranciscoD" on May 12, 2018 07:53 PM

In the ~4 years that I've spent on my PhD now, I've run hundreds, nay, thousands of simulations. Research work is incredibly iterative. I (and I assume others too) make small changes to my methods and then study how these changes produce different results, and this cycle continues until a proposed hypothesis has either been accepted or refuted (or a completely new insight gained, which happens quite often too!).

Folders, and Dropbox? Please, no.

Keeping track of all these iterations is quite a task. I've seen multiple methods that people use to do this. A popular method is to make a different folder for each different version of code, and then use something like Dropbox to store them all.

Since I come from a computing background, I firmly believe that this is not a good way of going about it. It may work for some---people I know and work with use this method---but there are much better options. This PhDComic does a rather good job of showing an example situation. Sure, it is about a document, but when source code is kept in different folders, a similar situation arises. You get the idea.


Version control, YES!

If there weren't tools designed to track and manage such projects, one could still argue for using such methods, but the truth is that there is a plethora of version control tools available under Free/Open Source licenses. Not only do these tools manage projects, they also make collaborating over source code simple.

All my simulation code, for example, lives in a Git repository (which will be made available under a Free/Open source license as soon as my paper goes out to ensure that others can read, verify, and build on it). The support scripts that I use to set up simulations and then analyse the data they produce already live here on GitHub, for example. Please go ahead and use them if they fit your purpose.

I have different Git branches for different features that I add to the simulations---the different hypotheses that I'm testing out. I also keep a rather meticulous record of everything I do in a research journal in LaTeX that also lives in a Git repository, and uses Calliope (a simple helper script to manage various journaling tasks). Everything goes in here---graphs, images, sometimes patches and even source code, and the deductions and other comments/thoughts too.

My rather simple system is as follows:

  • Each new feature/hypothesis gets its own Git branch.
  • Each version of its implementation, therefore, gets its own unique commit (a snapshot of code that Git saves for the user with a unique identifier and a complete record of the changes that were made to the project, when they were made and so on.)
  • For each run of a snapshot, the generated data is stored in a folder that is named YYYYMMDDHHMM (Year, month, day, time), which, unless you figure out how to go back in time, is also unique.
  • The commit hash + YYYYMMDDHHMM therefore becomes a unique identifier for each code snapshot and the results it generated.
  • A new chapter in my research journal holds a summary of the simulation, and all the analysis that I do. I even name the chapter "git-hash/YYYYMMDDHHMM".
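The steps above can be sketched in the shell. The repository, file, and branch names here are made up for illustration; only the naming scheme (commit hash + timestamped data folder) is from the post:

```shell
# Sketch of the branch-per-hypothesis workflow: one branch per feature,
# one commit per implementation version, one timestamped folder per run.
set -e
mkdir -p demo-project && cd demo-project
git init -q .
git config user.email "you@example.com"
git config user.name "You"

# Base version of the simulation
echo "print('simulation v1')" > sim.py
git add sim.py
git commit -q -m "Initial simulation"

# New hypothesis -> new branch
git checkout -q -b feature-new-hypothesis

# Each run stores its data under a YYYYMMDDHHMM folder; paired with the
# commit hash, that gives a unique identifier for run + code snapshot.
stamp=$(date +%Y%m%d%H%M)
hash=$(git rev-parse --short HEAD)
mkdir -p "data/${stamp}"
echo "results" > "data/${stamp}/output.txt"
echo "Run identifier: ${hash}/${stamp}"
```

That final identifier is exactly the "git-hash/YYYYMMDDHHMM" string used as a chapter name in the research journal.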
XKCD on Git.

I know that learning a version control system has a steep initial learning curve, but I really do think that this is one tool that is well worth the time.

Using a version control system has many advantages, some of which are:

  • It lets you keep the full history of your source code, and go back to any previous version.
  • You know exactly what you changed between two snapshots.
  • If multiple people work on the code, everyone knows exactly who authored what.
  • These tools make changing code and trying things out very easy. Try something out in a different branch; if it worked, yay, keep the branch, maybe even merge it into the main branch. If it didn't, make a note, delete the branch, and move on!
  • With services like GitHub, BitBucket, and GitLab, collaboration becomes really easy.
  • Ah, and note, that every collaborator has a copy of the source code, so it has been backed up too! Even if you work alone, there's always another copy on GitHub (or whatever service you use).
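As a tiny illustration of the first two points (the file name, contents, and commit messages here are made up), Git keeps the full history and can show exactly what changed between any two snapshots:

```shell
# Two snapshots of a parameter file, then inspect and roll back.
set -e
mkdir -p vc-demo && cd vc-demo
git init -q .
git config user.email "you@example.com"
git config user.name "You"

echo "rate = 0.1" > params.cfg
git add params.cfg
git commit -q -m "Initial parameters"

echo "rate = 0.2" > params.cfg
git commit -q -am "Increase rate"

git log --oneline                      # full history of snapshots
git diff HEAD~1 HEAD                   # exactly what changed between them
git checkout -q HEAD~1 -- params.cfg   # go back to the previous version
cat params.cfg
```

The `checkout` at the end restores the earlier version of the file without losing the newer commit: both snapshots stay in the history.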

Here's a quick beginner's guide to using Git and GitHub: http://journals.plos.org/ploscompbiol/article?id=10.1371/journal.pcbi.1004668 There are many more all over the WWW, of course. Duckduckgo is your friend. (Why Duckduckgo and not Google?)

What's Sumatra about, then?

Sumatra: a tool to manage and track simulation runs.

I've been meaning to try Sumatra out for a while now. What Sumatra does is bring the functions of all my scripts together into one well-designed tool. Sumatra can do the running bit, then save the generated data in a unique location, and it even lets users add comments about the simulation. Sumatra also has a web-based front end for those who would prefer a graphical interface to the command line. Lastly, Sumatra is written in Python, so it works on pretty much all systems. Note that Sumatra requires the use of a version control system (from what I've seen so far).

A quick walk-through

The documentation contains all of this already, but I'll show the steps here too. I used a dummy repository to test it out.

Installing Sumatra is as easy as a pip command. I would suggest setting up a virtual environment, though:

python3 -m venv --system-site-packages sumatra-virtual

We then activate the virtual-environment, and install Sumatra:

source sumatra-virtual/bin/activate
pip install sumatra

Once it finishes installing, simply mark a version-controlled source repository as managed by Sumatra:

cd my-awesome-project
smt init my-awesome-project

Then, one can see the information that Sumatra has on the project, for example:

smt info
Project name        : test-repo
Default executable  : Python (version: 3.6.5) at /home/asinha/dump/sumatra-virt/bin/python3
Default repository  : GitRepository at /home/asinha/Documents/02_Code/00_repos/00_mine/sumatra-nest-cluster-test (upstream: git@github.com:sanjayankur31/sumatra-nest-cluster-test.git)
Default main file   : test.py
Default launch mode : serial
Data store (output) : /home/asinha/Documents/02_Code/00_repos/00_mine/sumatra-nest-cluster-test/Data
.          (input)  : /
Record store        : Django (/home/asinha/Documents/02_Code/00_repos/00_mine/sumatra-nest-cluster-test/.smt/records)
Code change policy  : error
Append label to     : None
Label generator     : timestamp
Timestamp format    : %Y%m%d-%H%M%S
Plug-ins            : []
Sumatra version     : 0.7.4

My test script only prints a short message. Here's how one would run it using Sumatra:

# so that we don't have to specify this for each run
smt configure --executable=python3 --main=test.py

smt run
Hello Sumatra World!
Record label for this run: '20180512-200859'
No data produced.

One can now see all the runs of this simulation that have been made!

smt list --long
Label            : 20180512-200859
Timestamp        : 2018-05-12 20:08:59.761849
Reason           :
Outcome          :
Duration         : 0.050611019134521484
Repository       : GitRepository at /home/asinha/Documents/02_Code/00_repos/00_mine/sumatra-nest-
                 : cluster-test (upstream: git@github.com:sanjayankur31/sumatra-nest-cluster-
                 : test.git)
Main_File        : test.py
Version          : 6f4e1bf05f223a0100ca6f843c11ef4fd70490f3
Script_Arguments :
Executable       : Python (version: 3.6.5) at /home/asinha/dump/sumatra-virt/bin/python3
Parameters       :
Input_Data       : []
Launch_Mode      : serial
Output_Data      : []
User             : Ankur Sinha (Ankur Sinha Gmail) <sanjay.ankur@gmail.com>
Tags             :
Repeats          : None
Label            : 20180512-181422
Timestamp        : 2018-05-12 18:14:22.668655
Reason           :
Outcome          : Well that worked
Duration         : 0.05211901664733887
Repository       : GitRepository at /home/asinha/Documents/02_Code/00_repos/00_mine/sumatra-nest-
                 : cluster-test (upstream: git@github.com:sanjayankur31/sumatra-nest-cluster-
                 : test.git)
Main_File        : test.py
Version          : 4f151a368b1fee1fa8f21026c3b6d2c6b2531da8
Script_Arguments :
Executable       : Python (version: 3.6.5) at /home/asinha/dump/sumatra-virt/bin/python3
Parameters       :
Input_Data       : []
Launch_Mode      : serial
Output_Data      : []
User             : Ankur Sinha (Ankur Sinha Gmail) <sanjay.ankur@gmail.com>
Tags             :
Repeats          : None

There's a lot more that can be done, of course. I'll quickly show the GUI version here.

One can run the web version using:

smtweb -p 8001 #whatever port number one wants to use

Then, it'll open up in your default web browser.

Sumatra initial interface.

For each project, one can see the various runs, with all the associated information too.

Records for a project in Sumatra

One can then add more information about a run. Sumatra already stores lots of important information as the image shows:

More information on each record in Sumatra

Pretty neat, huh?

I run my simulations on a cluster, and so have my own system to submit jobs to the queue system. Sumatra can run jobs in parallel on a cluster, but I've still got to check if it also integrates with the queue system that our cluster runs. Luckily, Sumatra also provides an API, so I should be able to write a few Python scripts to handle that bit too. It's on my TODO list now.
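Since that integration is still unexplored, here is only a hypothetical sketch of the idea: a Slurm-style job script (the script name, paths, and time limit are all made up) that wraps `smt run`, so the Sumatra record is still created when the job executes on the cluster:

```shell
# Write a hypothetical Slurm job script that wraps a Sumatra-managed run.
cat > submit-smt.sh <<'EOF'
#!/bin/bash
#SBATCH --job-name=sumatra-run
#SBATCH --time=01:00:00

# Activate the virtual environment that has Sumatra installed,
# then let Sumatra record the run as usual.
source "$HOME/sumatra-virtual/bin/activate"
cd "$HOME/my-awesome-project"
smt run --reason "cluster run submitted via Slurm"
EOF
chmod +x submit-smt.sh

# One would then submit it with: sbatch submit-smt.sh
bash -n submit-smt.sh && echo "script parses"
```

Whether this plays nicely with Sumatra's own launch modes and with a given cluster's queue system is exactly the open question; the API mentioned above would be the place to hook in anything smarter.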

Please use version control and a Sumatra style record keeper

I haven't found another tool that does what Sumatra does yet. Maybe Jupyter notebooks would come close, but one would have to add some sort of wrapper around them to keep proper records. It'll probably be similar to my current system.

In summary, please use version control, and use a record keeper to manage and track simulations. Not only does it make it easier for you, the researcher, it also makes it easier for others to replicate the simulation since the record keeper provides all the information required to re-run the simulation.

Free/Open source software promotes Open Science


User liberation video at the Free Software Foundation.


(The original video is at the Free Software Foundation's website.)

As a concluding plea, I request everyone to please use Free/Open source software for all research. Not only are these available free of cost, they provide everyone with the right to read, validate, study, copy, share, and modify the software. One can learn so much from reading how research tools are built. One can be absolutely sure of their results if they can see the code that carries out the analysis. One can build on others' work if the source is available for all to use and change. How easy does replication become when the source and all related resources are given out for all to use?

Do not use Microsoft Word, for example. Not everyone, even today, has access to Microsoft software. Should researchers be required to buy a Microsoft license to be able to collaborate with us? The tools are here to enable science, not hamper it. Proprietary software and formats do not enable science, they restrict it to those that can pay for such software. This is not a restriction we should endorse in any way.

Yes, I know that sometimes there aren't Free/Open source software alternatives that carry the same set of features, but a little bit of extra work, for me, is an investment towards Open Science. Instead of Word, as an example, use LibreOffice, or LaTeX. Use open formats. There will be bugs, but until we report them, they will not be fixed. Until these Free/Open source tools replace restricted software as the standard for science, they will only have small communities around them that build and maintain them.

Open Science is a necessity. Researchers from the neuroscience community recently signed this letter committing to the use of Free/Open source software for their research. There are similar initiatives in other fields too, and of course, one must be aware of the Open Access movement etc.

I've made this plea in the context of science, but the video should also show you how in everyday life, it is important to use Free/Open source resources. Please use Free/Open source resources, as much as possible.

Listening to music with MPD and ncmpcpp

Posted by Alvaro Castillo on May 12, 2018 02:05 PM

Life without music?

How many of us have asked ourselves this question? What would life be without music? Without lyrics that denounce, that cherish, that share pain and love, that radiate anger or revolution? Or ones that just carry a beat for working out? When we are alone and downhearted, or when we want to share moments of celebration? It may seem hard to believe, but music wakes us up and lifts our spirits. And it lifts us even more when we listen...

Mastering Bash: grep and its friends

Posted by Alvaro Castillo on May 12, 2018 12:00 PM

Back at it

As we mentioned in the previous post in this series, we wanted to dedicate a post specifically to the grep(1) and cut(1) commands, and here is the first of them. So far in this Mastering Bash series we have covered:

  • What a shell is and what Bash is
  • What inputs and outputs are in a shell
  • What command options (modifiers) are
  • Working with directories and files
    • Showing hidden files
    • What absolute and relative paths are
    • ...
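As a tiny preview of the two commands the post is about (the data file and its contents here are made up), grep filters lines by pattern and cut extracts fields:

```shell
# grep(1) selects matching lines; cut(1) extracts delimited fields.
printf 'alice:x:1000\nbob:x:1001\n' > users.txt
grep 'alice' users.txt    # only the lines containing "alice"
cut -d: -f1 users.txt     # first colon-separated field of every line
```

Piped together (`grep ... | cut ...`), they make a surprisingly capable text-extraction toolkit, which is presumably where the series goes next.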