Fedora People

openSUSE Conference 2018

Posted by Fabiano Fidêncio on May 26, 2018 08:36 PM
This year's openSUSE Conference was held in Prague and, thanks to both my employer and the openSUSE Conference organizers, I was able to spend almost a full day there.

I headed to Prague with a Fleet Commander talk accepted and, as openSUSE Leap 15.0 was released yesterday, also with the idea of showing an unattended ("express") installation of an as-fresh-as-possible Leap 15.0 happening in GNOME Boxes.

The conference was not so big, which made it easy to spot some old friends (Fridrich Strba, seriously? Meeting you after almost 7 years ... I have no words to describe my happiness on seeing you there!), some known faces (like Scott, whom I only meet at conferences :-)), and also to meet some people who either helped me a lot in the past (here I can mention the whole autoyast team, who gave me big support when I was writing the autoinst.xml for libosinfo, which provides the support to do openSUSE's express installations via GNOME Boxes) or who have some interest in the work I've been doing (like Richard Brown, who is a well-known figure around the SUSE/openSUSE community, a GNOME Boxes user, and an enthusiastic supporter of our work on libosinfo/osinfo-db).

About the talks ...

I reshaped the very same Fleet Commander talk presented at FOSDEM'18 and also prepared a demo to show the audience the magic happening on a CentOS 7.5 environment. We had around 20 people in the audience, and the talk went considerably well considering that the demo just exploded. After leaving the conference room I took some time to debug what happened, and it seems that my master machine just hung at some point, so the client machine wasn't able to download the desktop-profiles data from it (and it hung for so long that the DataProvider was marked as offline). As I didn't have time to do a live-debug session, I ended up proceeding with the rest of the talk (curiously, while writing this blog post I logged in to the client machine in order to debug the issue, and the first thing I saw was the "pink background"!!!). We even got a few questions! :-)
Sincerely, thanks to everyone who attended the talk!
My action item from this is to write a blog post on how to debug these issues end-to-end since, due to the number of components involved, something can go wrong in different parts, different projects, and so on.

At the end of my Fleet Commander talk, I took 5 minutes to say that I'm also a libosinfo maintainer (with a strong interest in the "tooling" part of the virtualization world :-)) and to mention that during the trip from Brno to Prague I crafted some patches adding support for openSUSE Leap 15.0, which was released just yesterday, and that I'd like to show them an express installation performed via GNOME Boxes. To do so, I booted the ISO, set up my username and password, clicked on "Create", and left my laptop on the presentation desk till the end of the next presenter's talk (Carlos Soriano, presenting a nice "DevOps for GNOME with Flatpak" talk). At the end of Carlos' talk, I just got the mic and the screen back and showed people that the installation had just worked. :-) The patches enabling this were submitted and hopefully we'll have them in both Fedora and openSUSE packages by Wednesday! :-)

So, summing up ... half of the demos worked, I left both demos with action items (write a troubleshooting page and upstream the patches, which is already done), and I met some really nice people in an equally nice environment!

Looking forward to attending the next openSUSE Conference, and thanks a lot for having me there!

Fedora 28 : Can you fix your Fedora !?

Posted by mythcat on May 26, 2018 08:54 AM
Fedora packages have a distinct installation and development process thanks to the testing and development teams.
Note: The common user may run into networking problems, and these can lead to further errors.
A common error that should be better documented and handled is:
Error: Failed to synchronize cache for repo 'updates'.

If you did not have intruders on your computer or provider changes modifying this process, then here are some elements to help you understand how it works.
The cleanup of temporary files kept for repositories, and the update/upgrade process, are still evolving through testing and development.
This includes any such data left behind by disabled or removed repositories, as well as by different distribution release versions.
  • dnf clean dbcache : this removes cache files generated from the repository metadata and forces DNF to regenerate the cache files the next time it is run.
  • dnf clean expire-cache : this marks the repository metadata expired and will re-validate the cache for each repo the next time it is used.
  • dnf clean metadata : removes the repository metadata files which DNF uses to determine the remote availability of packages, forcing DNF to download all the metadata the next time it is run.
  • dnf clean packages :  removes any cached packages from the system.
  • dnf clean all : does all of the above.
You can combine these commands to get the desired effect with the shell tool dnf, as in this example:
dnf update --refresh
This performs an update, with one extra feature: --refresh marks the metadata as expired before the command runs.

Fedora AMIs get ENA support

Posted by Sayan Chowdhury on May 26, 2018 05:52 AM

It’s been a while since Amazon introduced Elastic Network Adapter (ENA) support in their cloud. Amazon EC2 provides enhanced networking capabilities to C5, C5 with instance storage, F1, G3, H1, I3, m4.16xlarge, M5, P2, P3, R4, and X1 instances through the Elastic Network Adapter (ENA).

With the Fedora 28 release, the Fedora AMIs come with ENA support.

To check whether an AMI has enhanced networking with ENA enabled, you can query the AWS EC2 API using the AWS Command Line Interface. You need to check whether the enaSupport attribute is set. If it is, the response is true.

➜  ~ aws ec2 describe-images --region us-east-1 --image-id ami-cceb7ab3 --query 'Images[].EnaSupport'
|  True        |

Also with the Fedora 28 release, Fedora deprecates paravirtual (PV) images.

Tracking Quota

Posted by Adam Young on May 26, 2018 05:46 AM

This OpenStack summit marks the third that I have attended where we’ve discussed the algorithms to try and record quota in Keystone but not update it on each resource allocation and free.

We were stumped, again. The process we had planned on using was game-able and thus broken. I was kinda bummed.

Fortunately, I had a long car ride from Vancouver to Seattle and talked it over with Morgan Fainberg.

We also discussed the Pig War. Great piece of history from the region.

By the time we got to the airport the next day, I think we had it solved. Morgan came to the solution first, and I followed, slowly.  Here’s what we think will work.

First, let's set up a three-level-deep project hierarchy to use for our discussion.

The rule is simple:  even if a quota is subdivided, you still need to check the overall quota of the project  and all the parent projects.

In the example structure above, let's assume that project A gets a quota of 100 units of some resource: VMs, GB of memory, network ports, hot-dogs, very small rocks, whatever. I'll use VMs in this example.

There are a couple of ways this could be further managed. The simplest is that any resource allocated anywhere in this tree is counted against this quota. There are 9 total projects in the tree. If each allocates 11 VMs, there will be 99 created and counted against the quota. The next VM created uses up the quota. The request after that will fail due to lack of available quota.

Let's say, however, that the users in project C33 are greedy and allocate all 100 VMs. The people in C11 are filled with righteous indignation. They need VMs too.

The admins wipe everything out and we start all over. They set up a system to fragment the quota by allowing a project to split its quota assignment up and allocate some of it to subordinate projects.

Project A says “I’m going to keep 50 VMs for myself, and allocate 25 to B1 and B2.”

Project B1 says "I am going to keep 10 for me and I'm going to allocate 5 each to C11, C12, and C13." And the B1 tree is happy.

B2 is a manipulative schemer and decides to play around. B2 allocates his entire quota of 25 to C21. C21 creates 25 VMs.

B2 now withdraws his quota from C21.  There is no communication with Nova.  The VMs keep running.  He then allocates his entire quota of 25 VMs to C22, and C22 creates 25 VMs.

Nova says “What project is this?  C22?   What is its quota?  25?  All good.”

But in reality, B2 has doubled his quota. His subordinates have allocated 50 VMs total. He does this again with project C23, gets up to 75 VMs, and contemplates creating yet another project C24 just to keep up the pattern. This would allocate more VMs than project A was originally allocated.

The admins notice this and get mad, wipe everything out, and start over again. This time they've made a change. Whenever they check quota on a project, they will also check quota on the parent projects, counting all VMs underneath each parent. Essentially, they will record that a VM created in project C11 also reduces the original quota on B1 and on A. In essence, they record a table. If the user creates a VM in project C11, the following will be recorded and checked for quota.

VM Project
VM1 B1
VM1 C11


When a User then creates a VM in C21 the table will extend to this:

VM Project
VM1 B1
VM1 C11
VM2 B2
VM2 C21

In addition, when creating VM2, Nova will check quota and see that, after creation:

  • C21 now has 1 out of 25 allocated
  • B2 now has 1 out of 25 allocated
  • A now has 2 out of 100 allocated

(quota is allocated prior to the creation of the resource to prevent a race condition)

Note that the quota is checked against the original amount, and not the amount reduced by sub-allocating the quota. If project C21 allocates 24 more VMs, the quota check will show:

  • C21 now has 25 out of 25 allocated
  • B2 now has 25 out of 25 allocated
  • A now has 26 out of 100 allocated

If B2 tries to play games and removes the quota from C21 and gives it to C22, project C21 will be over quota, but Nova will have no way to trap this. However, the only people this affects are other people within projects B2, C21, C22, and C23. If C22 attempts to allocate a virtual machine, the quota check will show that B2 has allocated its full quota and cannot create any more. The quota check will fail.

You might have noticed that the higher-level projects can rob quota from the child projects in this scheme. For example, if project A allocates 74 more VMs now, project B1 and its children will still have allocated quota, but their quota checks will fail because A is full. This could be mitigated by having 2 checks for project A: total quota (max 100) and directly allocated quota (max 50).
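To make the bookkeeping concrete, here is a minimal sketch of the ancestor-walking check in Python. The class and function names are my own invention for illustration, not OpenStack code, and a real Keystone/Nova would do this with database transactions rather than in-memory objects:

```python
class Project:
    """A node in the project tree, holding its originally assigned quota."""
    def __init__(self, name, quota, parent=None):
        self.name = name
        self.quota = quota    # checked against the original amount,
        self.parent = parent  # never reduced by sub-allocation
        self.used = 0         # resources counted against this project

    def chain(self):
        """Yield this project and every ancestor up to the root."""
        node = self
        while node is not None:
            yield node
            node = node.parent


def try_create(project, amount=1):
    """Reserve quota in the project and all its parents, or fail entirely."""
    nodes = list(project.chain())
    if any(n.used + amount > n.quota for n in nodes):
        return False          # some level is out of quota
    for n in nodes:           # record the resource at every level
        n.used += amount
    return True


# Reproduce B2's trick: A has 100, B2 sub-allocates its 25 to C21, then to C22.
a = Project("A", 100)
b2 = Project("B2", 25, parent=a)
c21 = Project("C21", 25, parent=b2)
c22 = Project("C22", 25, parent=b2)

for _ in range(25):
    assert try_create(c21)    # C21 fills B2's entire quota
assert not try_create(c22)    # shifting quota to C22 cannot mint new capacity
```

Because every creation is also counted against B2 and against A, withdrawing quota from C21 and re-granting it to C22 no longer helps: the check on B2 fails first.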

This scheme removes the quota violation from gaming the system. I promised to write it up so we could continue to try and poke holes in it.

Bodhi 3.8.0 released

Posted by Bodhi on May 25, 2018 07:25 PM


Features

  • Container releases may now have a trailing "C" in their name (#2250).
  • The number of days an update has been in its current state is now displayed by the CLI
    (#2176 and #2269).
  • Composes are no longer batched by category (security vs. non-security, updates vs. testing)
    as this was not found to be beneficial and did slow the compose process down (68c7936).
  • A fedmsg is now transmitted when an update's time in testing is met (99923f1).
  • New states for updates that are related to side tags have been documented (d7b5432).


Bug fixes

  • Bodhi no longer considers HTTP codes > 200 and < 300 to be errors (#2361).
  • Do not apply null Koji tags to ejected updates during compose (#2368).

Development improvements

  • The container composer has been refactored to use a cleaner helper function (#2259).
  • Bodhi's models now support side tags, a planned feature for an upcoming Bodhi release.
  • Compose.from_updates() returns a list in Python 3 (#2291).
  • Some silliness was removed from the universe, as bodhi.server.models.BodhiBase.get() no
    longer requires a database session to be passed to it (#2298).
  • The in-memory dogpile cache backend is used for development by default (#2300).
  • The CI container no longer installs Pungi, which speeds the CI testing time up (#2306).
  • Dropped support for str arguments from util.cmd() (#2332).
  • Python 3 line test coverage has increased to 85%.

Server upgrade instructions

This update contains a migration to add two new update states for side tags. After installing the
new server packages, you need to run the migrations:

    $ sudo -u apache /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head


The following developers contributed to Bodhi 3.8.0:

  • Mattia Verga
  • Eli Young
  • Lumir Balhar
  • Patrick Uiterwijk
  • Ralph Bean
  • Paul W. Frields
  • Randy Barlow

F28-20180524 Updated iso released

Posted by Ben Williams on May 25, 2018 06:56 PM

The Fedora Respins SIG is pleased to announce the latest release of the updated F28-20180524 Live ISOs, carrying the 4.16.11-300 kernel.

The 4.16.11 kernel was a security release, with several mitigations against Spectre Variant 4 (Speculative Store Bypass).

This set of updated ISOs will save about 706 MB of updates after a new install.

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=28&build=FedoraRespin-28-updates-20180524.0&groupid=1

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle and Southern_Gentlem.

As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

PHP version 7.1.18 and 7.2.6

Posted by Remi Collet on May 25, 2018 01:23 PM

RPMs of PHP version 7.2.6 are available in the remi repository for Fedora 28 and in the remi-php72 repository for Fedora 25-27 and Enterprise Linux ≥ 6 (RHEL, CentOS).

RPM of PHP version 7.1.18 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Fedora 25 and Enterprise Linux (RHEL, CentOS).

No security fix this month, so no update for versions 5.6.36 and 7.0.30.

PHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL-7.5
  • EL6 RPMs are built using RHEL-6.9
  • a lot of new extensions are also available; see the PECL extension RPM status page

For more information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

Getting started with the Python debugger

Posted by Fedora Magazine on May 25, 2018 10:26 AM

The Python ecosystem is rich with many tools and libraries that improve developers’ lives. For example, the Magazine has previously covered how to enhance your Python with an interactive shell. This article focuses on another tool that saves you time and improves your Python skills: the Python debugger.

Python Debugger

The Python standard library provides a debugger called pdb. This debugger provides most features needed for debugging such as breakpoints, single line stepping, inspection of stack frames, and so on.

A basic knowledge of pdb is useful since it’s part of the standard library. You can use it in environments where you can’t install another enhanced debugger.

Running pdb

The easiest way to run pdb is from the command line, passing the program to debug as an argument. Consider the following script:

# pdb_test.py

from time import sleep

def countdown(number):
    for i in range(number, 0, -1):
        print(i)
        sleep(1)

if __name__ == "__main__":
    seconds = 10
    countdown(seconds)

You can run pdb from the command line like this:

$ python3 -m pdb pdb_test.py
> /tmp/pdb_test.py(1)<module>()
-> from time import sleep

Another way to use pdb is to set a breakpoint in the program. To do this, import the pdb module and use the set_trace function:

 1 # pdb_test.py
 2 #!/usr/bin/python3
 4 from time import sleep
 7 def countdown(number):
 8     for i in range(number, 0, -1):
 9         import pdb; pdb.set_trace()
10         print(i)
11         sleep(1)
14 if __name__ == "__main__":
15     seconds = 10
16     countdown(seconds)
$ python3 pdb_test.py
> /tmp/pdb_test.py(10)countdown()
-> print(i)

The script stops at the breakpoint, and pdb displays the next line in the script. You can also execute the debugger after a failure. This is known as postmortem debugging.
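As a sketch of postmortem debugging (the buggy function and the piped commands below are invented for illustration): since pdb.Pdb builds on cmd.Cmd, it accepts stdin/stdout file objects, so a postmortem session can even be driven non-interactively:

```python
import io
import pdb
import sys

def buggy(x):
    return x / 0          # always raises ZeroDivisionError

try:
    buggy(3)
except ZeroDivisionError:
    tb = sys.exc_info()[2]
    # Feed the session two commands: print x in the failing frame, then quit.
    out = io.StringIO()
    debugger = pdb.Pdb(stdin=io.StringIO("p x\nq\n"), stdout=out)
    debugger.reset()
    debugger.interaction(None, tb)   # what pdb.post_mortem() does internally
    print(out.getvalue())            # transcript shows the crash site and x
```

Interactively, you would instead call pdb.post_mortem() after catching the exception, or run the whole script under python3 -m pdb, which drops into postmortem mode automatically on an uncaught exception.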

Navigate the execution stack

A common use case in debugging is to navigate the execution stack. Once the Python debugger is running, the following commands are useful:

  • w(here) : Shows the line currently being executed and where you are in the execution stack.
$ python3 test_pdb.py 
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) w
  /tmp/test_pdb.py(16)<module>()
-> countdown(seconds)
> /tmp/test_pdb.py(10)countdown()
-> print(i)
  • l(ist) : Shows more context (code) around the current location.
$ python3 test_pdb.py 
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) l
  7     def countdown(number):
  8         for i in range(number, 0, -1):
  9             import pdb; pdb.set_trace()
 10  ->         print(i)
 11             sleep(1)
 14     if __name__ == "__main__":
 15         seconds = 10
  • u(p)/d(own) : Navigate the call stack up or down.
$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) up
> /tmp/test_pdb.py(16)<module>()
-> countdown(seconds)
(Pdb) down
> /tmp/test_pdb.py(10)countdown()
-> print(i)

Stepping through a program

pdb provides the following commands to execute and step through code:

  • n(ext): Continue execution until the next line in the current function is reached, or it returns.
  • s(tep): Execute the current line and stop at the first possible occasion (either in a function that is called or in the current function).
  • c(ontinue): Continue execution, only stopping at a breakpoint.
$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) n
> /tmp/test_pdb.py(11)countdown()
-> sleep(1)
(Pdb) n
> /tmp/test_pdb.py(8)countdown()
-> for i in range(number, 0, -1):
(Pdb) n
> /tmp/test_pdb.py(9)countdown()
-> import pdb; pdb.set_trace()
(Pdb) s
> /usr/lib64/python3.6/pdb.py(1584)set_trace()
-> def set_trace():
(Pdb) c
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) c
> /tmp/test_pdb.py(9)countdown()
-> import pdb; pdb.set_trace()

The example shows the difference between next and step. Indeed, when using step the debugger stepped into the pdb module source code, whereas next would have just executed the set_trace function.

Examine variables' content

Where pdb is really useful is in examining the content of variables stored in the execution stack. For example, the a(rgs) command prints the variables of the current function, as shown below:

$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) where
  /tmp/test_pdb.py(16)<module>()
-> countdown(seconds)
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) args
number = 10

pdb prints the value of the variable number, in this case 10.

Another command that can be used to print variables' values is p(rint).

$ python3 test_pdb.py
> /tmp/test_pdb.py(10)countdown()
-> print(i)
(Pdb) list
  7     def countdown(number):
  8         for i in range(number, 0, -1):
  9             import pdb; pdb.set_trace()
 10  ->         print(i)
 11             sleep(1)
 14     if __name__ == "__main__":
 15         seconds = 10
(Pdb) print(seconds)
10
(Pdb) p i
10
(Pdb) p number - i
0

As shown in the example’s last command, print can evaluate an expression before displaying the result.

The Python documentation contains the reference and examples for each of the pdb commands. This is a useful read for someone starting with the Python debugger.

Enhanced debugger

Some enhanced debuggers provide a better user experience. Most add useful extra features to pdb, such as syntax highlighting, better tracebacks, and introspection. Popular choices of enhanced debuggers include IPython’s ipdb and pdb++.

These examples show you how to install these two debuggers in a virtual environment. The examples use a new virtual environment, but when debugging an application, you should use the application's virtual environment.

Install IPython’s ipdb

To install the IPython ipdb, use pip in the virtual environment:

$ python3 -m venv .test_pdb
$ source .test_pdb/bin/activate
(test_pdb)$ pip install ipdb

To call ipdb inside a script, use the following statement. Note that the module is called ipdb instead of pdb:

import ipdb; ipdb.set_trace()

IPython’s ipdb is also available in Fedora packages, so you can install it using Fedora’s package manager dnf:

$ sudo dnf install python3-ipdb

Install pdb++

You can install pdb++ similarly:

$ python3 -m venv .test_pdb
$ source .test_pdb/bin/activate
(test_pdb)$ pip install pdbpp

pdb++ overrides the pdb module, and therefore you can use the same syntax to add a breakpoint inside a program:

import pdb; pdb.set_trace()


Learning how to use the Python debugger saves you time when investigating problems with an application. It can also be useful to understand how a complex part of an application or some libraries work, and thereby improve your Python developer skills.

Fedora 28 : Video about development and fix packages.

Posted by mythcat on May 25, 2018 09:49 AM
The Fedora 28 distribution is an advanced Linux distribution that includes many tools and is supported by the Fedora community.
Learning is the clear and essential basis for proper development.
Here's a great video tutorial about developing Fedora packages; it's very useful.

I do not know much about this group named the Factory 2.0 devel group, but the information is very useful.

Dockerizing the cobbler service

Posted by Didier Fabert (tartare) on May 24, 2018 04:49 PM

Study of an existing cobbler system

A cobbler system comprises several services:

  • the cobbler service itself
  • a web service (apache) to distribute the RPMs
  • a basic file transfer service (tftp) to distribute the bootable kernel that displays the menu, and everything that goes with it.

Analysis of the existing system shows us:

  1. We do not need to manage DHCP and DNS; these are already managed on our network, and only the next_server address will be updated in the DHCP configuration.
  2. We use several additional snippets to configure the target in the post-installation phase (notably updating the distribution, installing the epel repository, etc.).
  3. Only a few directories contain custom data:
    • /var/lib/cobbler: This directory contains two types of data:
      • Dynamic data specific to each installation, which must be persistent. It will therefore go into one or more data volumes: /var/lib/cobbler/config (the dynamic configuration) and /var/lib/cobbler/backup (the backup directory)
      • Semi-static data. It will be baked directly into the docker image: the kickstart templates (installation definitions), the snippets (reusable bits of code), etc.
    • /var/www/cobbler: contains the RPM package repository of the imported distributions. These files are served by the web server, and this directory will go into a data volume.
    • /var/lib/tftp: contains the bootable kernel (pxelinux.0), the installation menu, etc. These files are served by the tftpd server, and this directory will go into a data volume.
    • We will need a fifth mount point (/mnt in the container) where the distribution ISOs will be mounted by the host server.
      mkdir /mnt/centos
      mount -o ro /path/to/iso/CentOS-7-x86_64-DVD-1804.iso /mnt/centos
    • And even a sixth mount point, if the container host runs a kernel older than 4.7, because of the bug with unix sockets on overlayfs (link): /var/run/supervisor

Of course, to add a kickstart template or a snippet, the docker image has to be rebuilt. But in practice, we add or modify snippet or kickstart files less often than updates of the cobbler package become available.


The image can be found on docker.io:

docker pull tartarefr/docker-cobbler

Environment variables

The docker service can be customized via the following environment variables:

  • HOST_IP_ADDR: The only mandatory variable, since it enables the connection to the cobbler API and cannot have a default value. It must be set to the host's IP address: hostname --ip-address | awk '{print $1}'
  • HOST_HTTP_PORT: Host port bound to the container's port 80. Default is 80 (-p 80:80)
  • DEFAULT_ROOT_PASSWD: Clear-text password of the root superuser account that will be configured on the targets (default: cobbler)
  • COBBLER_WEB_USER: cobbler web login (default: cobbler)
  • COBBLER_WEB_PASSWD: cobbler web password (default: cobbler)
  • COBBLER_WEB_REALM: cobbler web realm, since it uses digest authentication (default: cobbler)
  • COBBLER_LANG: The language to configure when installing targets (default: fr_FR)
  • COBBLER_KEYBOARD: The keyboard layout to configure when installing targets (default: fr-latin9)
  • COBBLER_TZ: The timezone to configure when installing targets (default: Europe/Paris)

Setup before the first start

  1. Start by downloading the centos 7 DVD iso
  2. We need to create 5 docker volumes for one container (the same ratio as for pastis)
    docker volume create cobbler_www
    docker volume create cobbler_tftp
    docker volume create cobbler_config
    docker volume create cobbler_backup
    docker volume create cobbler_run
  3. Create the mount point for our iso
    sudo mkdir /mnt/centos
  4. Mount our iso on /mnt/centos
    sudo mount -o ro /path/to/iso/CentOS-7-x86_64-DVD-1804.iso /mnt/centos


I strongly advise changing the value of DEFAULT_ROOT_PASSWD. It is one of the very first root passwords tried by attackers.

Unlike the default value, our web server (api and cobbler_web) will be reachable from the host on port 60080.

docker run -d --privileged \
           -v cobbler_www:/var/www/cobbler:z \
           -v cobbler_tftp:/var/lib/tftp:z \
           -v cobbler_config:/var/lib/cobbler/config:z \
           -v cobbler_backup:/var/lib/cobbler/backup:z \
           -v cobbler_run:/var/run/supervisor:z \
           -v /mnt/centos:/mnt:z \
           -e DEFAULT_ROOT_PASSWD=cobbler \
           -e HOST_IP_ADDR=$(hostname --ip-address | awk '{print $1}') \
           -e HOST_HTTP_PORT=60080 \
           -e COBBLER_WEB_USER=cobbler \
           -e COBBLER_WEB_PASSWD=cobbler \
           -e COBBLER_WEB_REALM=Cobbler \
           -e COBBLER_LANG=fr_FR \
           -e COBBLER_KEYBOARD=fr-latin9 \
           -e COBBLER_TZ=Europe/Paris \
           -p 69:69/udp \
           -p 60080:80 \
           -p 60443:443 \
           -p 25151:25151 \
           --name cobbler \
           tartarefr/docker-cobbler


Once inside the container, we will add the memtest target to our installation menu, then import our centos 7 distribution, add an extra profile that installs centos 7 with a graphical environment, and synchronize everything.

  1. Step into our container
    docker exec -ti cobbler /bin/bash
  2. Add the memtest target. First check that the file /boot/memtest86+-5.01 actually exists (the version may differ, in which case the following line must be adapted accordingly)
    cobbler image add --name=memtest86+ --file=/boot/memtest86+-5.01 --image-type=direct
  3. Import the centos 7 distribution
    cobbler import --path=/mnt --name=CentOS-7-x86_64
  4. Add the profile for a centos desktop installation
    cobbler profile add --name=CentOS-7-x86_64-Desktop \
        --distro=CentOS-7-x86_64 \
        --kickstart=/var/lib/cobbler/kickstarts/sample_end.ks \
        --virt-file-size=12
    cobbler profile edit --name CentOS-7-x86_64-Desktop --ksmeta="type=desktop"
  5. Synchronize
    cobbler sync

Unfortunately, changing the iso file on the host (unmounting it and mounting another iso on the same mount point) does not update the contents of the /mnt directory inside the container.
To add another distribution, you will have to:

  1. stop the container
  2. destroy the container
  3. unmount the iso on the host
  4. mount the new iso
  5. start a new container

Modifying the image

To add a kickstart template or a snippet, just put the file in the appropriate directory and modify the Dockerfile:

  1. Add a docker instruction to copy the file into the container image
  2. If it is a snippet that runs in the post-installation phase, add its name to the list of post-installation snippets in the activation instruction (Activate personal snippets)
  3. Rebuild the image
  4. Deploy the new image

A few words about the docker image

I preferred to build an image from scratch, because the available ones are either no longer updated or use systemd. I prefer to keep systemd for special cases; no need to roll out a tank to squash a mosquito.

Supervisord is a much better candidate for the role of orchestrator inside a container (the right tool for the right job) and allows restarting one or more services.

supervisorctl restart cobblerd

The whole cobbler project is versioned on gitlab, with a copy on github (for the automatic image builds on docker.io).

The Dockerfile
FROM centos:7

MAINTAINER Didier FABERT (tartare) <didier@tartarefr.eu>

RUN yum install -y \
    epel-release \
    && yum clean all \
    && rm -rf /var/cache/yum

RUN yum update -y \
    && yum clean all \
    && rm -rf /var/cache/yum

RUN yum install -y \
  cobbler \
  cobbler-web \
  pykickstart \
  debmirror \
  curl wget \
  rsync \
  supervisor \
  net-tools \
  memtest86+ \
  && yum clean all \
  &&  rm -rf /var/cache/yum

# Copy supervisor conf
COPY supervisord/supervisord.conf /etc/supervisord.conf
COPY supervisord/cobblerd.ini /etc/supervisord.d/cobblerd.ini
COPY supervisord/tftpd.ini /etc/supervisord.d/tftpd.ini
COPY supervisord/httpd.ini /etc/supervisord.d/httpd.ini

# Copy personal snippets
COPY snippets/partition_config /var/lib/cobbler/snippets/partition_config
COPY snippets/configure_X /var/lib/cobbler/snippets/configure_X
COPY snippets/add_repos /var/lib/cobbler/snippets/add_repos
COPY snippets/disable_prelink /var/lib/cobbler/snippets/disable_prelink
COPY snippets/systemd_persistant_journal /var/lib/cobbler/snippets/systemd_persistant_journal
COPY snippets/rkhunter /var/lib/cobbler/snippets/rkhunter
COPY snippets/enable_X /var/lib/cobbler/snippets/enable_X
COPY snippets/yum_update /var/lib/cobbler/snippets/yum_update

# Copy personal kickstarts

# Activate personal snippets
RUN for kickstart in sample sample_end legacy ; \
    do \
        additional_post_snippets="" ; \
        for snippet in \
                        add_repos \
                        disable_prelink \
                        systemd_persistant_journal \
                        rkhunter \
                        enable_X \
                        yum_update ; \
        do \
          additional_post_snippets="${additional_post_snippets}\n\$SNIPPET('${snippet}')" ; \
        done ; \
        sed -i \
           -e "/post_anamon/ s/$/${additional_post_snippets}/" \
           -e "/^autopart/ s/^.*$/\$SNIPPET('partition_config')/" \
           -e "/^skipx/ s/^.*$/\$SNIPPET('configure_X')/" \
       /var/lib/cobbler/kickstarts/${kickstart}.ks ; \
    done

# Install vim-enhanced by default, and desktop packages if the profile has el_type set to desktop (ksmeta)
RUN echo -e "@core\n\nvim-enhanced\n#set \$el_type = \$getVar('type', 'minimal')\n#if \$el_type == 'desktop'\n@base\n@network-tools\n@x11\n@graphical-admin-tools\n#set \$el_version = \$getVar('os_version', None)\n#if \$el_version == 'rhel6'\n@desktop-platform\n@basic-desktop\n#else if \$el_version == 'rhel7'\n@gnome-desktop\n#end if\n#end if\nkernel" >> /var/lib/cobbler/snippets/func_install_if_enabled

COPY first-sync.sh /usr/local/bin/first-sync.sh
COPY entrypoint.sh /entrypoint.sh
RUN chmod 755 /entrypoint.sh /usr/local/bin/first-sync.sh

EXPOSE 69 80 443 25151

VOLUME [ "/var/www/cobbler", "/var/lib/tftp", "/var/lib/cobbler/config", "/var/lib/cobbler/backup", "/var/run/supervisor", "/mnt" ]

ENTRYPOINT /entrypoint.sh
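For readability, the dense echo -e near the end of the Dockerfile appends the following Cheetah template to the func_install_if_enabled snippet (this is just the same string decoded, nothing new):

```
@core

vim-enhanced
#set $el_type = $getVar('type', 'minimal')
#if $el_type == 'desktop'
@base
@network-tools
@x11
@graphical-admin-tools
#set $el_version = $getVar('os_version', None)
#if $el_version == 'rhel6'
@desktop-platform
@basic-desktop
#else if $el_version == 'rhel7'
@gnome-desktop
#end if
#end if
kernel
```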

Starting the build

docker build -t local/cobbler .

If you run into problems, don't hesitate to look at the file /var/log/cobbler/cobbler.log
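The snippet-activation sed from the Dockerfile can also be exercised outside the image build, to check what it does to a kickstart. In this sketch, sample.ks and the single yum_update snippet are stand-ins for the real kickstart files and snippet list:

```shell
# Build a tiny sample kickstart with the three lines the sed targets
printf 'autopart\nskipx\npost_anamon\n' > sample.ks

# Same construction as in the Dockerfile, with a single snippet for brevity
additional_post_snippets="\n\$SNIPPET('yum_update')"
sed -i \
    -e "/post_anamon/ s/$/${additional_post_snippets}/" \
    -e "/^autopart/ s/^.*$/\$SNIPPET('partition_config')/" \
    -e "/^skipx/ s/^.*$/\$SNIPPET('configure_X')/" \
    sample.ks

# autopart and skipx are replaced by $SNIPPET calls; the snippet list
# is appended on a new line after post_anamon
cat sample.ks
```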

Tilix on Fedora 28

Posted by Bernardo C. Hermitaño Atencio on May 24, 2018 06:18 AM

If you are a user who enjoys the terminal, Tilix is ideal for getting the most out of your PC. The most attractive thing about this application is that it can split your window into as many panes as you want, so you can enter commands and keep track of them at the same time.

Here are some screenshots of the installation and a test drive of this great application; you will surely fall in love with it for administering your computer.


1. Search for the application in the Fedora repositories.


2. Installation; this application is available in the Fedora repositories.


3. Open the Tilix application.


4. You can add as many splits as you want, to enter and keep track of commands more effectively.


5. In the configuration options, go to Appearance and switch to the light or dark theme, as shown in the image.


6. Under the Default option, on the Colors tab, configure the colors you want for the background and the font. You can also adjust the appearance slider to your preference.


7. Finally, enter whatever commands you need.


All systems go

Posted by Fedora Infrastructure Status on May 24, 2018 12:26 AM
Service 'Fedora Infrastructure Cloud' now has status: good: Everything seems to be working.

Major service disruption

Posted by Fedora Infrastructure Status on May 24, 2018 12:11 AM
Service 'Fedora Infrastructure Cloud' now has status: major: Cloud is down for emergency disk maintenance

RPKG guide from Tito user

Posted by Jakub Kadlčík on May 24, 2018 12:00 AM

Since the beginning of the rpkg project, it was known as a client tool for DistGit. Times have changed and a new era for rpkg is here. It has been enhanced with project management features, so we can safely label it as a tito alternative.

A feature review, pros and cons, and a user guide are a theme for a whole new article. In this short post, I, as a long-time tito user, want to show the rpkg alternatives for the tito commands that I frequently use.

For more information about the rpkg, please read the documentation.

Cheat sheet

Tito command                                  rpkg alternative
tito build --srpm --test                      rpkg srpm
tito build --rpm --test                       rpkg local
tito build --tgz --test                       rpkg spec --sources
tito tag                                      rpkg tag
Undo a tito tag                               rpkg tag -d <tagname>
Push a tito tag                               rpkg push
tito release <copr-releaser>                  rpkg build
tito build ... --install                      Not implemented yet
tito build ... --rpmbuild-options=--nocheck   Not implemented yet

Working with last tag

You may notice that all tito build commands in the cheat sheet table have the --test parameter. That's because rpkg always works with the latest commit. So how do we build a package from the last tag? We need to check it out first.

git checkout <tag>
rpkg local

Set up zsh on your Fedora system

Posted by Fedora Magazine on May 23, 2018 08:00 AM

For some people, the terminal can be scary. But a terminal is more than just a black screen to type in. It usually runs a shell, so called because it wraps around the kernel. The shell is a text-based interface that lets you run commands on the system. It’s also sometimes called a command line interpreter or CLI. Fedora, like most Linux distributions, comes with bash as the default shell.  However, it isn’t the only shell available; several other shells can be installed. This article focuses on the Z Shell, or zsh.

Bash is a rewrite of the old Bourne shell (sh) that shipped in UNIX. Zsh is intended to be friendlier than bash, through better interaction. Some of its useful features are:

  • Programmable command line completion
  • Shared command history between running shell sessions
  • Spelling correction
  • Loadable modules
  • Interactive selection of files and folders
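As a taste, a minimal ~/.zshrc enabling some of the features above could look like the sketch below. The option names are standard zsh (SHARE_HISTORY, CORRECT, compinit), but the history sizes are arbitrary choices; tune to taste:

```shell
# Minimal ~/.zshrc sketch
autoload -Uz compinit && compinit   # programmable command line completion
setopt SHARE_HISTORY                # share command history between running sessions
setopt CORRECT                      # spelling correction for commands
HISTFILE=~/.zsh_history
HISTSIZE=5000
SAVEHIST=5000
```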

Zsh is available in the Fedora repositories. To install, run this command:

$ sudo dnf install zsh

Using zsh

To start using it, just type zsh and the new shell prompts you with a first run wizard. This wizard helps you configure initial features, like history behavior and auto-completion. Or you can opt to keep the rc file empty:

zsh First Run Wizard

First-run wizard

If you type 1 the configuration wizard starts. The other options launch the shell immediately.

Note that the user prompt is % and not $ as with bash. A significant feature here is the auto-completion that allows you to move among files and directories with the Tab key, much like a menu:

zsh cd Feature

Using the auto-completion feature with the cd command

Another interesting feature is spelling correction, which helps when writing filenames with mixed cases:

zsh Auto Completion

Auto completion performing spelling correction

Making zsh your default shell

Zsh offers a lot of plugins, like zsh-syntax-highlighting, and the famous “Oh my zsh” (check out its page here). You might want to make it the default, so it runs whenever you start a session or open a terminal. To do this, use the chsh (“change shell”) command:

$ chsh -s $(which zsh)

This command tells your system that you want to set (-s) your default shell to the location reported by which zsh.
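To confirm the change was recorded (it takes effect at your next login), you can read your entry back from the account database with the standard getent and cut tools:

```shell
# Print the login shell recorded for the current user in the account database
getent passwd "$USER" | cut -d: -f7
```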

Photo by Kate Ter Haar from Flickr (CC BY-SA).

Testing the “wide walls” design principle in the wild

Posted by Sayamindu Dasgupta on May 23, 2018 04:00 AM

Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have “wide walls” in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In a new paper, Benjamin Mako Hill and I attempted to provide the first empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.

Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the Scratch programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, I was guided by the “wide walls” principle when I designed and implemented the Scratch cloud variables system in 2011-2012.

While designing the system, I hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding persistence and shared-ness. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, I saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.

<figure style="width:40%;"> cloud variable script <figcaption>Example of Scratch code that uses a cloud variable to keep track of high-scores among all players of a game.</figcaption> </figure>

Although these examples reflected powerful anecdotal evidence, I was also interested in using quantitative data to measure the causal effect of the system. Understanding the causal effect of a new design in real world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.

Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.

We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not only an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used it—were more likely to use “plain-old” data-structures in their projects as well.

The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect 33% of projects by a prototypical “average” Scratch user to use data structures if the user in question had never used cloud variables, but that we would expect 60% of projects by a similar user to do so if they had used the system.

<figure style="width:70%;"> probability graph <figcaption>Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)</figcaption> </figure>

It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (this is a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.

Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context, and that it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background, in detail in our paper, which is now available for download as an open-access piece from the ACM digital library.

This blog post, and the open access paper that it describes, is a collaborative project with Benjamin Mako Hill. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.

container_t versus svirt_lxc_net_t

Posted by Dan Walsh on May 22, 2018 06:12 PM

For some reason recently I have been asked via email and twitter what the difference is between the container_t type and the svirt_lxc_net_t type, or similarly between container_file_t and svirt_sandbox_file_t. Bottom line: NOTHING. They are aliases of each other.

The SELinux policy language has a typealias command.

typealias container_t alias svirt_lxc_net_t;

typealias container_file_t alias svirt_sandbox_file_t;

When I first started working on containers and SELinux, prior to Docker, we were writing a tool called virt-sandbox that used libvirt to launch containers; specifically it used libvirt-lxc. We had labeled all of the VMs launched by libvirt svirt_t, which stood for secure virt. When I decided to write policy for the libvirt_lxc containers, I created a type called svirt_lxc_t. This type was not allowed network access, so I added another type called svirt_lxc_net_t that had full network access. The type for content that the svirt_lxc types could manage was svirt_sandbox_file_t (svirt_file_t was already used for virtual machine images). Why I did not call it svirt_lxc_file_t, I don't know.

When Docker exploded on the scene we quickly adapted the SELinux policy we had written for virt-sandbox to work with Docker, and that is how these types got associated with containers. After a few years of using this and constantly telling people that svirt_lxc_net_t was the type of a container process, I figured it was time to change the type name. I created container_t and container_file_t and then just aliased the old names to the new ones.

One problem was that RHEL policy updated much more slowly, and we were not able to get these changes in until RHEL 7.5 (or maybe RHEL 7.4, I don't remember). But from now on we will use the new type names.

Google Memory

One issue you have with technology is that old howtos and information out on the internet never go away. If someone googles how to label a volume so that it works with a container and SELinux, they are likely to be told to label the content svirt_sandbox_file_t.

This is not an issue with the type aliases. If you have scripts or custom policy modules that use the old names, you are fine, since the old names will still work.

But I would prefer that everyone just use the newer easily understandable types.

GSoC 2018: Week 1

Posted by Amitosh Swain Mahapatra on May 22, 2018 05:24 PM

This time, I am working on improving the Fedora Community App with the Fedora project. It’s been a week since we started our coding on May 14.

The Fedora App is a central location for Fedora users and innovators to stay updated on The Fedora Project. News updates, social posts, Ask Fedora, as well as articles from Fedora Magazine are all available in this app.

Progress Report

Here is the summary of my work in the first week:

  1. We now have a system for loading configuration such as API keys depending on the environment we are running in. It also allows us to load them from external sources (#55). This removes the need to store API keys under version control (#52).
  2. The JS-to-TS conversion I started earlier (#4, #6) is finally complete. All of our code is now fully checked by the TypeScript (TS) compiler for type safety (#54, #56). Except for JSON responses, where typing things would be a waste of time as TS does not provide run-time type safety, all other functions and services are now checked using TS interfaces (#57).
  3. Our code now follows Angular patterns even more closely. I have standardized the providers that used to return a callback or a Promise; they now return Observables. We now load network data a bit faster due to improved concurrency in the code.
  4. The documentation coverage of our code has increased. As part of the conversion, I have added TS doc comments describing the usage of various providers, services and components, what they expect and what they return.
  5. The annoying white screen on launch (#16) on certain devices is now gone! (#47)
  6. After the restructuring, we no longer have any in-memory caching. I will be working on an offline storage and caching implementation this week.

What’s next?

I am working to bring offline storage and sync to the Fedora Magazine and Fedora Social sections of the app. This will improve both the usability and the performance of the app. From a UX perspective, we will start syncing data rather than blocking the user from doing anything.

Java exceptions from container

Posted by ABRT team on May 22, 2018 02:15 PM

As you know from a previous blog post, lately we have been dealing with catching unhandled exceptions from inside of containers.

In this blog post, we would like to introduce a new language we are capable of doing that for: Java.

We’ve taken ABRT’s tool abrt-java-connector, which catches Java exceptions, and we’ve added a new option, cel, to it. The cel option turns on writing exceptions to container-exception-logger. For more information about abrt-java-connector options, see the abrt-java-connector readme. Also, a new package, abrt-java-connector-container, was released. It contains the minimal set of files needed for container exception logging.

How to make it work in a container

If you use executables which load the /usr/share/java-utils/java-functions file (for instance will_java_throw, shipped by the package will-crash), you only need to have the package abrt-java-connector-container installed. This is the default in Fedora.

Content of will_java_throw executable:

$ cat /usr/bin/will_java_throw
 export MAIN_CLASS
 . /usr/share/java-utils/java-functions
 set_classpath "willcrash/willuncaught.jar"

In other cases you have to load the shared native library (.so) /usr/lib/abrt-java-connector/libabrt-java-connector.so into the VM using either the -agentlib or the -agentpath command line parameter. The former requires the library’s file name and works only if the library is placed in one of the directories searched by the dynamic linker or in a directory included in the LD_LIBRARY_PATH environment variable. The latter works with any valid absolute path pointing to the library.

$ java -agentlib:abrt-java-connector=cel=on $MyClass -platform.jvmtiSupported true

For more information see abrt-java-connector readme.

Example (exception in container -> host’s systemd journal log)

Inside of container:
[root@049e3e4b7233 tmp]# will_java_throw
Can't open '/usr/share/abrt/conf.d/plugins/java.conf': No such file or directory
Can't open '/etc/abrt/plugins/java.conf': No such file or directory
Exception in thread "main" java.lang.NullPointerException
	at WontCatchNullPointerException.die_hard(WontCatchNullPointerException.java:30)
	at WontCatchNullPointerException.die_hard(WontCatchNullPointerException.java:33)
	at WontCatchNullPointerException.die_hard(WontCatchNullPointerException.java:33)
	at WontCatchNullPointerException.die_hard(WontCatchNullPointerException.java:33)
	at WontCatchNullPointerException.die(WontCatchNullPointerException.java:21)
	at WontCatchNullPointerException.main(WontCatchNullPointerException.java:38)
Host’s systemd journal:
# journalctl -f
May 22 14:40:19 localhost.localdomain dockerd-current[972]:
container-exception-logger - {"type": "Java", "executable":
"/usr/share/java/willcrash/willuncaught.jar", "reason": "Uncaught exception
java.lang.NullPointerException in method
WontCatchNullPointerException.die_hard()", "backtrace": "Exception in thread
"main" java.lang.NullPointerException\n        at
at WontCatchNullPointerException.die(WontCatchNullPointerException.java:21)
at WontCatchNullPointerException.main(WontCatchNullPointerException.java:38)
"uid": "0", "abrt-java-connector": "1.1.1-3ec4e6c"}

Heroes of Fedora (HoF) – F28 Final

Posted by Fedora Community Blog on May 22, 2018 10:35 AM

Let’s look at some testing stats for Fedora 28 Final!

Hello testers, hope you’re ready for some stats concerning the release of Fedora 28 Final! The purpose of Heroes of Fedora is to provide a summation of testing activity on each milestone release of Fedora. So, without further ado, let’s get started!

Updates Testing

Test period: Fedora 28 Final (2018-04-17 – 2018-05-01)
Testers: 117
Comments1: 785

Name Updates commented
Dmitri Smirnov (cserpentis) 124
Björn Esser (besser82) 118
Pete Walter (pwalter) 81
Filipe Rosset (filiperosset) 71
Peter Smith (smithp) 40
Charles-Antoine Couret (renault) 20
Nie Lili (lnie) 19
Adam Williamson (adamwill) 18
Martti Kuosmanen (kuosmanen) 15
bojan 15
Daniel Lara Souza (danniel) 13
anonymous 12
Eugene Mah (imabug) 11
Lukas Brabec (lbrabec) 9
Luis Roca (roca) 9
mastaiza 9
Ankur Sinha (ankursinha) 7
Paul Whalen (pwhalen) 7
mzink 7
lruzicka 7
Hans Müller (cairo) 6
Luis Enrique Bazán De León (lbazan) 6
František Zatloukal (frantisekz) 6
Peter Robinson (pbrobinson) 6
Jared Smith (jsmith) 5
Wolfgang Ulbrich (raveit65) 4
Rory Gallagher (earthwalker) 4
sassam 4
Mirek Svoboda (goodmirek) 4
leigh123linux 4
Héctor H. Louzao P. (hhlp) 4
Stephen Gallagher (sgallagh) 4
Alessio Ciregia (alciregi) 4
Randy Barlow (bowlofeggs) 3
Major Hayden (mhayden) 3
jonathan haas (jonha) 3
Juan Orti (jorti) 3
Sumantro Mukherjee (sumantrom) 3
Dusty Mabe (dustymabe) 3
Colin J Thomson (g6avk) 3
Masami Ichikawa (masami) 3
Kay Sievers (kay) 3
…and also 75 other reporters who created less than 3 reports each, but 85 reports combined!

1 If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora 28 Final (2018-04-17 – 2018-05-01)
Testers: 17
Reports: 305
Unique referenced bugs: 2

Name Reports submitted Referenced bugs1
pwhalen 92
frantisekz 27
kparal 27 1571786 (1)
sumantrom 20
pschindl 19
lnie 15
lbrabec 15
tenk 15 1562024 (1)
sinnykumari 15
alciregi 12
sgallagh 12
lruzicka 11
satellit 8
kevin 6
adamwill 5
wadadli 4
cmurf 2

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora 28 Final (2018-04-17 – 2018-05-01)
Reporters: 226
New reports: 388

Name Reports submitted1 Excess reports2 Accepted blockers3
ricky.tigg at gmail.com 17 3 (17%) 0
Florian Weimer 10 0 (0%) 0
Nerijus Baliūnas 9 0 (0%) 0
roel.lerma at gmail.com 9 0 (0%) 0
Adam Williamson 7 0 (0%) 0
David Dreggors 7 0 (0%) 0
Alessio 6 1 (16%) 0
Ankur Sinha (FranciscoD) 6 0 (0%) 0
Chris Murphy 6 0 (0%) 0
Hayden 6 0 (0%) 0
Juan Orti 6 1 (16%) 0
mastaiza 6 0 (0%) 0
Gwendal 5 1 (20%) 0
Kamil Páral 5 2 (40%) 0
Hedayat Vatankhah 4 0 (0%) 0
Kyriakos Fytrakis 4 0 (0%) 0
Paul Whalen 4 0 (0%) 0
Robbie Harwood 4 0 (0%) 0
Akira TAGOH 3 0 (0%) 0
Christian Heimes 3 0 (0%) 0
James Ettle 3 0 (0%) 0
Luis 3 0 (0%) 0
Lukas Ruzicka 3 0 (0%) 0
mithrial at gmail.com 3 0 (0%) 0
mvharlan 3 1 (33%) 0
Paul Montalvan 3 0 (0%) 0
Pavel Roskin 3 0 (0%) 0
PHolder+RedHatBugzilla at gmail.com 3 0 (0%) 0
Sam Varshavchik 3 0 (0%) 0
Severin Gehwolf 3 0 (0%) 0
Vladimir Benes 3 0 (0%) 0
…and also 195 other reporters who created less than 3 reports each, but 228 reports combined!

1 The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
2 Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
3 This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.

The post Heroes of Fedora (HoF) – F28 Final appeared first on Fedora Community Blog.

Pinning Deployments in OSTree Based Systems

Posted by Dusty Mabe on May 22, 2018 12:00 AM
Introduction RPM-OSTree/OSTree conveniently allows you to roll back if you upgrade and don’t like the upgraded software. This is done by keeping around the old deployment; the old software you booted into. After a single upgrade you’ll have a booted deployment and the rollback deployment. On the next upgrade the current rollback deployment will be discarded and the current booted deployment will become the new rollback deployment. Typically these two deployments are all that is kept around.

OSCAL'18 Debian, Ham, SDR and GSoC activities

Posted by Daniel Pocock on May 21, 2018 08:44 PM

Over the weekend I've been in Tirana, Albania for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00, there were Paypal fees of GBP 6.48 and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

All systems go

Posted by Fedora Infrastructure Status on May 21, 2018 08:22 PM
Service 'Pagure' now has status: good: Everything seems to be working.

Heroes of Fedora (HoF) – F28 Beta

Posted by Fedora Community Blog on May 21, 2018 10:35 AM

It’s time for some stats concerning Fedora 28 Beta!

Welcome back to another installment of Heroes of Fedora, where we’ll look at the stats concerning the testing of Fedora 28 Beta. The purpose of Heroes of Fedora is to provide a summation of testing activity on each milestone release of Fedora. So, without further ado, let’s get started!


Updates Testing

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Testers: 166
Comments1: 1467

Name Updates commented
Pete Walter (pwalter) 219
Dmitri Smirnov (cserpentis) 148
Filipe Rosset (filiperosset) 111
Martti Kuosmanen (kuosmanen) 90
Björn Esser (besser82) 88
Alexander Kurtakov (akurtakov) 76
Hans Müller (cairo) 71
Peter Smith (smithp) 70
Charles-Antoine Couret (renault) 55
sassam 54
Nie Lili (lnie) 41
Piotr Drąg (piotrdrag) 37
Daniel Lara Souza (danniel) 32
Adam Williamson (adamwill) 24
anonymous 22
lruzicka 15
Randy Barlow (bowlofeggs) 12
Eugene Mah (imabug) 12
Parag Nemade (pnemade) 10
Peter Robinson (pbrobinson) 9
František Zatloukal (frantisekz) 9
Matthias Runge (mrunge) 9
Paul Whalen (pwhalen) 8
Tom Sweeney (tomsweeneyredhat) 7
Alessio Ciregia (alciregi) 7
mastaiza 6
Zbigniew Jędrzejewski-Szmek (zbyszek) 5
bluepencil 5
Sérgio Monteiro Basto (sergiomb) 5
sobek 5
Héctor H. Louzao P. (hhlp) 5
zdenek 4
Itamar Reis Peixoto (itamarjp) 4
Neal Gompa (ngompa) 4
leigh123linux 4
Nathan (nathan95) 4
Wolfgang Ulbrich (raveit65) 3
Luis Roca (roca) 3
Rory Gallagher (earthwalker) 3
Jiri Eischmann (eischmann) 3
Dusty Mabe (dustymabe) 3
Miro Hrončok (churchyard) 3
Christian Kellner (gicmo) 3
Kevin Fenzi (kevin) 3
Colin Walters (walters) 3
…and also 121 other reporters who created less than 3 reports each, but 153 reports combined!

1 If a person provides multiple comments to a single update, it is considered as a single comment. Karma value is not taken into account.

Validation Testing

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Testers: 23
Reports: 991
Unique referenced bugs: 35

Name Reports submitted Referenced bugs1
pwhalen 367 1520580 1553488 1565217 1566593 (4)
lruzicka 170 1558027 1566566 1569411 on bare metal (4)
alciregi 108 1520580 1536356 1541868 1552130 1553935 1554072 1554075 1556951 1557472 1561072 1564784 1568119 (12)
tablepc 56 1560314 (1)
lnie 51 1557655 1557659 (2)
coremodule 34
satellit 23 1519042 1533310 1554996 1558671 1561304 (5)
lbrabec 22
tenk 20 1539499 1561304 1562024 (3)
sumantrom 20
kparal 18 1555752 1557472 1560738 1562087 (4)
frantisekz 17 1561115 (1)
sgallagh 15
table1pc 15
adamwill 12 1560738 1561768 (2)
sinnykumari 10
fab 7
siddharthvipul1 6
pschindl 6
kevin 5
jdoss 4
mohanboddu 4
hobbes1069 1 1561284 (1)

1 This is a list of bug reports referenced in test results. The bug itself may not be created by the same person.

Bug Reports

Test period: Fedora 28 Beta (2018-03-06 – 2018-04-17)
Reporters: 320
New reports: 1918

Name Reports submitted1 Excess reports2 Accepted blockers3
Fedora Release Engineering 1104 47 (4%) 0
Lukas Ruzicka 31 6 (19%) 2
lnie 30 9 (30%) 0
Adam Williamson 25 0 (0%) 7
Alessio 22 8 (36%) 0
Chris Murphy 18 1 (5%) 0
Heiko Adams 16 2 (12%) 0
Florian Weimer 12 0 (0%) 0
mastaiza 12 0 (0%) 0
ricky.tigg at gmail.com 11 3 (27%) 0
Juan Orti 10 0 (0%) 0
Menanteau Guy 10 6 (60%) 0
Christian Heimes 9 1 (11%) 0
Daniel Mach 9 0 (0%) 0
Stephen Gallagher 8 0 (0%) 1
Hayden 8 0 (0%) 0
Jared Smith 8 1 (12%) 0
Joseph 8 0 (0%) 0
pmkellly at frontier.com 8 1 (12%) 0
René Genz 8 1 (12%) 0
Paul Whalen 7 0 (0%) 1
Anass Ahmed 7 0 (0%) 0
František Zatloukal 7 0 (0%) 0
Hedayat Vatankhah 7 0 (0%) 0
Leslie Satenstein 7 0 (0%) 0
Miro Hrončok 7 0 (0%) 0
Randy Barlow 7 0 (0%) 0
shirokuro005 at gmail.com 7 0 (0%) 0
Andrey Motoshkov 6 0 (0%) 0
Ankur Sinha (FranciscoD) 6 0 (0%) 0
J. Haas 6 0 (0%) 0
Jens Petersen 6 0 (0%) 0
John Reiser 6 0 (0%) 0
Langdon White 6 1 (16%) 0
Martin Pitt 6 0 (0%) 0
Merlin Mathesius 6 1 (16%) 0
Mikhail 6 0 (0%) 0
Stephen 6 0 (0%) 0
Zbigniew Jędrzejewski-Szmek 6 0 (0%) 0
Kamil Páral 5 1 (20%) 1
Ali Akcaagac 5 0 (0%) 0
Parag Nemade 5 0 (0%) 0
Peter Robinson 5 2 (40%) 0
sumantro 5 0 (0%) 0
Christian Kellner 4 0 (0%) 0
dac.override at gmail.com 4 0 (0%) 0
Dawid Zamirski 4 0 (0%) 0
Jiri Eischmann 4 0 (0%) 0
John Dennis 4 1 (25%) 0
Kalev Lember 4 0 (0%) 0
Milan Zink 4 0 (0%) 0
Miroslav Suchý 4 1 (25%) 0
Petr Pisar 4 0 (0%) 0
satellitgo at gmail.com 4 1 (25%) 0
sedrubal 4 0 (0%) 0
Steven Haigh 4 0 (0%) 0
Dusty Mabe 3 0 (0%) 1
Bhushan Barve 3 0 (0%) 0
Daniel Rindt 3 0 (0%) 0
deadrat 3 0 (0%) 0
Gwendal 3 0 (0%) 0
Henrique Montemor Junqueira 3 0 (0%) 0
Joachim Frieben 3 0 (0%) 0
luke_l at o2.pl 3 0 (0%) 0
Matías Zúñiga 3 0 (0%) 0
Michel Normand 3 0 (0%) 0
Mike FABIAN 3 0 (0%) 0
Mirek Svoboda 3 0 (0%) 0
Nathanael Noblet 3 1 (33%) 0
Quentin Tayssier 3 0 (0%) 0
sebby2k 3 0 (0%) 0
Vadim 3 0 (0%) 0
Vadim Raskhozhev 3 0 (0%) 0
Vladimir Benes 3 0 (0%) 0
Yaakov Selkowitz 3 0 (0%) 0
yucef sourani 3 0 (0%) 0
Zdenek Chmelar 3 1 (33%) 0
…and also 243 other reporters who created less than 3 reports each, but 293 reports combined!

1 The total number of new reports (including “excess reports”). Reopened reports or reports with a changed version are not included, because it was not technically easy to retrieve those. This is one of the reasons why you shouldn’t take the numbers too seriously, but just as interesting and fun data.
2 Excess reports are those that were closed as NOTABUG, WONTFIX, WORKSFORME, CANTFIX or INSUFFICIENT_DATA. Excess reports are not necessarily a bad thing, but they make for interesting statistics. Close manual inspection is required to separate valuable excess reports from those which are less valuable.
3 This only includes reports that were created by that particular user and accepted as blockers afterwards. The user might have proposed other people’s reports as blockers, but this is not reflected in this number.


The post Heroes of Fedora (HoF) – F28 Beta appeared first on Fedora Community Blog.

My talk from the RISC-V workshop in Barcelona

Posted by Richard W.M. Jones on May 21, 2018 09:39 AM

<iframe allowfullscreen="true" class="youtube-player" height="312" src="https://www.youtube.com/embed/HxbpJzU2gkw?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" type="text/html" width="500"></iframe>

Audacity quick tip: quickly remove background noise

Posted by Fedora Magazine on May 21, 2018 08:00 AM

When recording sounds on a laptop — say for a simple first screencast — many users typically use the built-in microphone. However, these small microphones also capture a lot of background noise. In this quick tip, learn how to use Audacity in Fedora to quickly remove the background noise from audio files.

Installing Audacity

Audacity is an application in Fedora for mixing, cutting, and editing audio files. It supports a wide range of formats out of the box on Fedora — including MP3 and OGG. Install Audacity from the Software application.

If the terminal is more your speed, use the command:

sudo dnf install audacity

Import your Audio, sample background noise

After installing Audacity, open the application, and import your sound using the File > Import menu item. This example uses a sound bite from freesound.org to which noise was added:

<audio class="wp-audio-shortcode" controls="controls" id="audio-20599-1" preload="none" style="width: 100%;"><source src="https://ryanlerch.fedorapeople.org/noise.ogg?_=1" type="audio/ogg">https://ryanlerch.fedorapeople.org/noise.ogg</audio>

Next, take a sample of the background noise to be filtered out. With the tracks imported, select an area of the track that contains only the background noise. Then choose Effect > Noise Reduction from the menu, and press the Get Noise Profile button.

Filter the Noise

Next, select the area of the track you want to filter the noise from. Do this either by selecting with the mouse, or Ctrl + a to select the entire track. Finally, open the Effect > Noise Reduction dialog again, and click OK to apply the filter.

Additionally, play around with the settings until your tracks sound better. Here is the original file again, followed by the noise reduced track for comparison (using the default settings):

<audio class="wp-audio-shortcode" controls="controls" id="audio-20599-2" preload="none" style="width: 100%;"><source src="https://ryanlerch.fedorapeople.org/sidebyside.ogg?_=2" type="audio/ogg">https://ryanlerch.fedorapeople.org/sidebyside.ogg</audio>

Episode 97 - Automation: Humans are slow and dumb

Posted by Open Source Security Podcast on May 20, 2018 11:18 PM
Josh and Kurt talk about the security of automation as well as automating security. The only way automation will really work long term is full automation. Humans can't be trusted enough to rely on them to do things right.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/6584004/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

A new sync primitive in golang

Posted by James Just James on May 20, 2018 10:00 PM
I’ve been working on lots of new stuff in mgmt and I had a synchronization problem that needed solving… Long story short, I built it into a piece of re-usable functionality, exactly like you might find in the sync package. For details and examples, please continue reading… The Problem: I want to multicast a signal to an arbitrary number of goroutines. As you might already know, this can already be done with a chan struct{}.

All Systems Go! 2018 CfP Open

Posted by Lennart Poettering on May 20, 2018 10:00 PM

<large>The All Systems Go! 2018 Call for Participation is Now Open!</large>

The Call for Participation (CFP) for All Systems Go! 2018 is now open. We’d like to invite you to submit your proposals for consideration to the CFP submission site.

ASG image

The CFP will close on July 30th. Notification of acceptance and non-acceptance will go out within 7 days of the closing of the CFP.

All topics relevant to foundational open-source Linux technologies are welcome. In particular, however, we are looking for proposals including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • BPF and eBPF filtering
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome, as long as they have a clear and direct relevance for user-space.

For more information please visit our conference website!

Mastering en Bash ~ Symbolic and hard links, and aliases

Posted by Alvaro Castillo on May 20, 2018 04:00 PM

Shorten and you shall conquer

At Echemos un bitstazo we know that's not really how the saying goes, but it is true that if we shorten things a lot, it becomes much easier to remember where everything is, using links for files and directories, and aliases for very long commands.


Aliases let us reduce the length of a statement we want to run on our system, shortening it to a single word. For example, if we want to access a frequently visited directory from the termin...

F28-20180515 updated Lives released

Posted by Ben Williams on May 20, 2018 12:21 PM

The Fedora Respins SIG is pleased to announce the latest release of updated F28 Live ISOs, carrying the 4.16.8-300 kernel.

This set of updated ISOs will save about 620 MB of updates for new installs.

Build Directions: https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

We would also like to thank Fedora QA for running the following tests on our ISOs: https://openqa.fedoraproject.org/tests/overview?distri=fedora&version=28&build=FedoraRespin-28-updates-20180515.0&groupid=1

With this release we will not be producing F27 updated isos. (except by special request).

These can be found at http://tinyurl.com/live-respins. We would also like to thank the following IRC nicks for helping test these ISOs: dowdle and Southern_Gentlem.

As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

Mastering en Bash - Searching for files and directories

Posted by Alvaro Castillo on May 19, 2018 08:11 PM

In search of Wally

Who wouldn't be interested in finding this much-sought whale in the middle of such a big ocean? Well, not us, to be honest; we prefer to search for other things, such as files and directories on our system. For that we will use the commands find(1), locate(1), whois(1), whereis(1).


find(1), unlike locate(1), is a command that searches in real time and has a great many extra features, such as filtering by name, executable type, date...

Get started with Apache Cassandra on Fedora

Posted by Fedora Magazine on May 18, 2018 12:26 PM

NoSQL databases are every bit as popular today as more conventional, relational ones. One of the most popular NoSQL systems is Apache Cassandra. It’s designed to deal with big data, and can be scaled across large numbers of servers. This makes it resilient and highly available.

This package is relatively new on Fedora, since it was introduced on Fedora 26. The following article is a short tutorial to set up Cassandra on Fedora for a development environment. Production deployments should use a different set-up to harden the service.

Install and configure Cassandra

The set of database packages in Fedora’s stable repositories are the client tools in the cassandra package. The common library is in the cassandra-java-libs package (required by both client and server). The most important part of the database, the daemon, is available in the cassandra-server package. Some more supporting packages may be listed by running the following command in a terminal.

dnf list cassandra\*

First, install and start the service:

$ sudo dnf install cassandra cassandra-server
$ sudo systemctl start cassandra

To enable the service to automatically start at boot time, run:

$ sudo systemctl enable cassandra

Finally, test the server initialization using the client:

$ cqlsh
Connected to Test Cluster at
[cqlsh 5.0.1 | Cassandra 3.11.1 | CQL spec 3.4.4 | Native protocol v4]
Use HELP for help.
cqlsh> CREATE KEYSPACE k1 WITH REPLICATION = { 'class' : 'SimpleStrategy', 'replication_factor' : 1 };
cqlsh> USE k1;
cqlsh:k1> CREATE TABLE users (user_name varchar, password varchar, gender varchar, PRIMARY KEY (user_name));
cqlsh:k1> INSERT INTO users (user_name, password, gender) VALUES ('John', 'test123', 'male');
cqlsh:k1> SELECT * from users;

 user_name | gender | password
-----------+--------+----------
      John |   male |  test123

(1 rows)

To configure the server, edit the file /etc/cassandra/cassandra.yaml. For more information about how to change the configuration, see the upstream documentation.

Controlling access with users and passwords

By default, authentication is disabled. To enable it, follow these steps:

  1. By default, the authenticator option is set to AllowAllAuthenticator. Change the authenticator option in the cassandra.yaml file to PasswordAuthenticator:
authenticator: PasswordAuthenticator
  2. Restart the service.
$ sudo systemctl restart cassandra
  3. Start cqlsh using the default superuser name and password:
$ cqlsh -u cassandra -p cassandra
  4. Create a new superuser:
cqlsh> CREATE ROLE <new_super_user> WITH PASSWORD = '<some_secure_password>' 
    AND SUPERUSER = true 
    AND LOGIN = true;
  5. Log in as the newly created superuser:
$ cqlsh -u <new_super_user> -p <some_secure_password>
  6. The superuser cannot be deleted. To neutralize the account, change the password to something long and incomprehensible, and alter the user’s status to NOSUPERUSER:
cqlsh> ALTER ROLE cassandra WITH PASSWORD='SomeNonsenseThatNoOneWillThinkOf'
    AND SUPERUSER=false;

Enabling remote access to the server

Edit the /etc/cassandra/cassandra.yaml file, and change the following parameters:

listen_address: external_ip
rpc_address: external_ip
seed_provider/seeds: "<external_ip>"

Then restart the service:

$ sudo systemctl restart cassandra

Other common configuration

There are quite a few more common configuration parameters. For instance, to set the cluster name, which must be consistent for all nodes in the cluster:

cluster_name: 'Test Cluster'

The data_file_directories option sets the directory where the service will write data. Below is the default that is used if unset. If possible, set this to a disk used only for storing Cassandra data.

data_file_directories:
    - /var/lib/cassandra/data

To set the type of disk used to store data (SSD or spinning):

disk_optimization_strategy: ssd|spinning

Running a Cassandra cluster

One of the main features of Cassandra is the ability to run in a multi-node setup. A cluster setup brings the following benefits:

  • Fault tolerance: Automatically replicates data to multiple nodes for fault-tolerance. Also, it supports replication across multiple data centers. You can replace failed nodes with no downtime.
  • Decentralization: There are no single points of failure, no network bottlenecks, and every node in the cluster is identical.
  • Scalability & elasticity: Can run thousands of nodes with petabytes of data. Read and write throughput both increase linearly as new machines are added, with no downtime or interruption to applications.

The following sections describe how to set up a simple two-node cluster.

Clearing existing data

First, if the server is running now or has ever run before, you must delete all the existing data (make a backup first). This is because all nodes must have the same cluster name and it’s better to choose a different one from the default Test cluster name.

Run the following commands on each node:

$ sudo systemctl stop cassandra
$ sudo rm -rf /var/lib/cassandra/data/system/*

If you deploy a large cluster, you can do this via automation using Ansible.

Configuring the cluster

To set up the cluster, edit the main configuration file /etc/cassandra/cassandra.yaml. Modify these parameters:

  • cluster_name: Name of your cluster.
  • num_tokens: Number of virtual nodes within a Cassandra instance. This option partitions the data and spreads the data throughout the cluster. The recommended value is 256.
  • seeds: Comma-delimited list of the IP address of each node in the cluster.
  • listen_address: The IP address or hostname the service binds to for connecting to other nodes. It defaults to localhost and needs to be changed to the IP address of the node.
  • rpc_address: The listen address for client connections (CQL protocol).
  • endpoint_snitch: Set to a class that implements the IEndpointSnitch. Cassandra uses snitches to locate nodes and route requests. The default is SimpleSnitch, but for this exercise, change it to GossipingPropertyFileSnitch which is more suitable for production environments:
    • SimpleSnitch: Used for single-datacenter deployments or single-zone in public clouds. Does not recognize datacenter or rack information. It treats strategy order as proximity, which can improve cache locality when disabling read repair.
    • GossipingPropertyFileSnitch: Recommended for production. The rack and datacenter for the local node are defined in the cassandra-rackdc.properties file and propagate to other nodes via gossip.
  • auto_bootstrap: This parameter is not present in the configuration file, so add it and set to false. It makes new (non-seed) nodes automatically migrate the right data to themselves.

Configuration files for a two-node cluster follow.

Node 1:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<node1_ip>,<node2_ip>"
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Node 2:

cluster_name: 'My Cluster'
num_tokens: 256
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "<node1_ip>,<node2_ip>"
endpoint_snitch: GossipingPropertyFileSnitch
auto_bootstrap: false

Starting the cluster

The final step is to start each instance of the cluster. Start the seed instances first, then the remaining nodes.

$ sudo systemctl start cassandra

Checking the cluster status

Finally, you can check the cluster status with the nodetool utility:

$ sudo nodetool status

Datacenter: datacenter1
|/ State=Normal/Leaving/Joining/Moving
--  Address      Load       Tokens       Owns    Host ID                               Rack
UN  147.48 KB  256          ?       f50799ee-8589-4eb8-a0c8-241cd254e424  rack1
UN  139.04 KB  256          ?       54b16af1-ad0a-4288-b34e-cacab39caeec  rack1
Note: Non-system keyspaces don't have the same replication settings, effective ownership information is meaningless

Cassandra in a container

Linux containers are becoming more popular. You can find a Cassandra container image, centos/cassandra-3-centos7, on Docker Hub.


It’s easy to start a container for this purpose without touching the rest of the system. First, install and run the Docker daemon:

$ sudo dnf install docker
$ sudo systemctl start docker

Next, pull the image:

$ sudo docker pull centos/cassandra-3-centos7

Now prepare a directory for data:

$ sudo mkdir data
$ sudo chown 143:143 data

Finally, start the container with a few arguments. The container stores its data in the prepared directory, and creates a user and database.

$ docker run --name cassandra -d -e CASSANDRA_ADMIN_PASSWORD=secret -p 9042:9042 -v `pwd`/data:/var/opt/rh/sclo-cassandra3/lib/cassandra:Z centos/cassandra-3-centos7

Now you have the service running in a container while storing data into the data directory in the current working directory. If the cqlsh client is not installed on your host system, run the one provided by the image with the following command:

$ docker exec -it cassandra 'bash' -c 'cqlsh '`docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' cassandra`' -u admin -p secret'


The Cassandra maintainers in Fedora seek co-maintainers to help keep the package fresh on Fedora. If you’d like to help, simply send them email.

Photo by Glen Jackson on Unsplash.

GIMP 2.10.2 released. How to install it on Fedora or Ubuntu

Posted by Luca Ciavatta on May 18, 2018 10:00 AM

The long-awaited GIMP 2.10.2 is finally here! Discover how to install it on the Fedora or Ubuntu Linux distributions. GIMP 2.10.0 was a huge release containing the result of 6 long years of work (GIMP 2.8 was released almost exactly 6 years ago!) by a small but dedicated core of contributors. GIMP 2.10.2 is out with 44 bugfixes and some new features such as HEIF file format support. A Windows installer and a flatpak for Linux are available, and the Mac OS X version is coming in the near future.

So, let’s see how to install GIMP on Fedora or Ubuntu.


<figure class="wp-caption alignright" id="attachment_717" style="width: 950px">GIMP 2.10.0 brings updated user interface and initial HiDPI support<figcaption class="wp-caption-text">GIMP 2.10.0 brings updated user interface and initial HiDPI support</figcaption></figure>

GIMP 2.10.0 released. Really a big list of notable changes

GIMP, the GNU Image Manipulation Program, is a cross-platform image editor available for GNU/Linux, OS X, Windows and more operating systems. It is free software: you can change its source code and distribute your changes.

Whether you are a graphic designer, photographer, illustrator, or scientist, GIMP provides you with sophisticated tools to get your job done. You can further enhance your productivity with GIMP thanks to many customization options and 3rd party plugins.

GIMP provides the tools needed for high-quality image manipulation. From retouching to restoring to creative composites, the only limit is your imagination. Here, you can find a glance at the most notable features from the official site:

  • Image processing nearly fully ported to GEGL, allowing high bit depth processing, multi-threaded and hardware accelerated pixel processing and more.
  • Color management is a core feature now, most widgets and preview areas are color-managed.
  • Many improved tools, and several new and exciting tools, such as the Warp transform, the Unified transform, and the Handle transform tools.
  • On-canvas preview for all filters ported to GEGL.
  • Improved digital painting with canvas rotation and flipping, symmetry painting, MyPaint brush support…
  • Support for several new image formats added (OpenEXR, RGBE, WebP, HGT), as well as improved support for many existing formats (in particular more robust PSD importing).
  • Metadata viewing and editing for Exif, XMP, IPTC, and DICOM.
  • Basic HiDPI support: automatic or user-selected icon size.


GIMP 2.10.2 released. With 44 bugfixes and some new features

It’s barely been a month since the dev team released GIMP 2.10.0, and the first bugfix version, 2.10.2, is already here. Its main purpose is fixing the various bugs and issues which were to be expected after the 2.10.0 release.

For a complete list of changes please see Overview of Changes from GIMP 2.10.0 to GIMP 2.10.2.

Notable changes are the added support for the HEIF image format and new filters. This release brings HEIF image support, both for loading and export. Two new filters have been added, based on GEGL operations: the Spherize filter to wrap an image around a spherical cap, based on the gegl:spherize operation, and the Recursive Transform filter to create a Droste effect, based on the gegl:recursive-transform operation.


How to install GIMP 2.10.2 on Fedora or Ubuntu distributions

GIMP 2.10.2 is available for all Linux distributions in various packages, through Flatpak packaging system, for Windows, and for Mac OS X. We’re going to look on how to install GIMP on Fedora or Ubuntu.

The suggested way to install the new version of GIMP on Fedora systems is with Flatpak, using the GIMP package published on Flathub.

The easy way to install the new version of GIMP on Ubuntu systems is through the otto-kesselgulasch PPA:

  • so open a terminal session and type:

sudo add-apt-repository ppa:otto-kesselgulasch/gimp
sudo apt update
sudo apt install gimp

You can also install GIMP on Ubuntu (or any other Linux distribution) through the Flatpak packaging system, in the same way as on Fedora.

The new version of GIMP ships with far more new features, including new and improved tools, better file formats support, various usability improvements, revamped color management support, a plethora of improvements targeted at digital painters and photographers, metadata editing, and much, much more.

So, after discovering how to install GIMP on Fedora or Ubuntu, enjoy your GIMP!

The post GIMP 2.10.2 released. How to install it on Fedora or Ubuntu appeared first on cialu.net.

#FaasFriday – Building a Serverless Microblog in Ruby with OpenFaaS – Part 1

Posted by Keiran "Affix" Smith on May 18, 2018 08:00 AM

Happy #FaaSFriday! Today we are going to learn how to build our very own serverless microblogging backend using pure Ruby. Alex Ellis of OpenFaaS believes it would be useful not to rely on any gems that require native extensions (MySQL, ActiveRecord, bcrypt, etc.), so that's what I intend to do. This tutorial will use nothing but pure Ruby to keep the containers small.

Before you continue reading, I highly recommend you check out OpenFaaS on Twitter and star the OpenFaaS repo on GitHub.


This microblogging platform, as is, should not be used in a production environment, as the password hashing is sub-optimal (i.e. it's plain SHA2, unsalted), but it could easily be changed to something more suitable like bcrypt.
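For comparison, here is a minimal sketch of what a salted scheme could look like while still staying gem-free: PBKDF2-HMAC from Ruby's bundled openssl library. The hash_password/verify_password helpers are hypothetical and not part of the tutorial's code.

```ruby
require 'openssl'
require 'securerandom'
require 'base64'

# Hypothetical helpers: salted PBKDF2 instead of bare, unsalted SHA2.
# OpenSSL::PKCS5.pbkdf2_hmac ships with Ruby, so no native gem is needed.
def hash_password(password, salt = SecureRandom.hex(16))
  digest = OpenSSL::PKCS5.pbkdf2_hmac(password, salt, 20_000, 32,
                                      OpenSSL::Digest.new('SHA256'))
  "#{salt}$#{Base64.strict_encode64(digest)}"
end

def verify_password(password, stored)
  salt = stored.split('$', 2).first
  # Recomputing with the stored salt must reproduce the stored string.
  hash_password(password, salt) == stored
end

stored = hash_password('TestPassword')
puts verify_password('TestPassword', stored)  # true
puts verify_password('WrongPassword', stored) # false
```

The salt makes identical passwords hash differently, which plain SHA2 cannot do.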

We make extensive use of environment variables here; however, you should use secrets instead. Read more about that here.

Now that's out of the way, let's get stuck in.


As always, in order to follow this tutorial there are some prerequisites:

  • You have a working OpenFaaS deployment
    • Mine is on Kubernetes, but you can also use Docker Swarm
  • You have an understanding of Ruby
  • You have access to a MySQL database
  • You can make calls to an OpenFaaS Function

What is OpenFaaS?

With OpenFaaS you can package anything as a serverless function – from Node.js to Golang to CSharp, even binaries like ffmpeg or ImageMagick.


Serverless Microblog Architecture

Our serverless microblogging platform consists of 5 functions:


Register

Our Register function will accept a JSON string containing a username, password, first name, last name and e-mail address. Once submitted, we will check the database for an existing username and e-mail; if none is found, we will add a new record to the database.
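That flow can be sketched in plain Ruby; here an in-memory array stands in for the MySQL users table, and all the names are illustrative only:

```ruby
require 'json'

# In-memory stand-in for the users table (illustrative only).
USERS = [{ 'username' => 'affixx', 'email' => 'existing@example.com' }]

def user_exists?(username, email)
  USERS.any? { |u| u['username'] == username || u['email'] == email }
end

def register(req)
  json = JSON.parse(req)
  if user_exists?(json['username'], json['email'])
    { 'error' => 'Username or E-Mail already in use' }
  else
    USERS << { 'username' => json['username'], 'email' => json['email'] }
    { 'username' => json['username'], 'status' => 'created' }
  end
end

puts register('{"username":"newuser","email":"new@example.com"}') # created
puts register('{"username":"affixx","email":"new@example.com"}')  # duplicate, error
```

The real function does the same existence check, only against MySQL with escaped input.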


Login

Our Login function will accept a JSON string containing a username and password. We will then return a signed JWT containing the username, first name, last name and e-mail.

Add Post

Our Add Post function will accept a JSON string containing the token (from login) and the body of the post.

Before we add the post to the database, we will call the Validate JWT function to validate the signature of the JWT. We will take the user ID from the token.
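Under the hood, validating an RS256 signature is just RSA-SHA256 verification with the public key; here is a minimal sketch with Ruby's bundled openssl library (a throwaway keypair stands in for the one generated later with openssl(1)):

```ruby
require 'openssl'

# Minimal sketch: RS256 validation is RSA-SHA256 signature verification.
key = OpenSSL::PKey::RSA.new(2048)

signing_input = 'header.payload' # in a real JWT: base64url(header).base64url(payload)
signature = key.sign(OpenSSL::Digest.new('SHA256'), signing_input)

# Validation only needs the public half of the keypair.
puts key.public_key.verify(OpenSSL::Digest.new('SHA256'), signature, signing_input) # true

# Any tampering with the payload makes verification fail.
puts key.public_key.verify(OpenSSL::Digest.new('SHA256'), signature, 'header.tampered') # false
```

This is why only functions holding the private key can mint tokens, while any function holding the public key can validate them.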

List Posts

When we call the list posts function, we can pass it a JSON string to filter the posts by user and to paginate. By default we will return the last 100 posts. To paginate, we will add an offset parameter to our JSON.

If we don’t send a filter, we will return the latest 100 posts in our database.
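The pagination rule can be sketched like this; an array of post ids stands in for database rows (in SQL this maps to ORDER BY ... DESC LIMIT 100 OFFSET n), and the helper name is illustrative:

```ruby
PAGE_SIZE = 100 # default page size described above

# Newest-first pagination; a higher id means a newer post in this sketch.
def list_posts(posts, offset = 0)
  posts.sort.reverse.drop(offset).take(PAGE_SIZE)
end

posts = (1..250).to_a # pretend ids for 250 posts

puts list_posts(posts).first      # 250, the latest post
puts list_posts(posts).length     # 100
puts list_posts(posts, 100).first # 150, start of the second page
```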

Registering Users

In order to add posts we need to have users, so let's get started by registering our first user.

Run the following SQL query to create the users table.

CREATE TABLE users (
  id INT AUTO_INCREMENT PRIMARY KEY,
  username TEXT NOT NULL,
  email TEXT NOT NULL,
  password TEXT NOT NULL,
  first_name TEXT NOT NULL,
  last_name TEXT NOT NULL
);
CREATE UNIQUE INDEX users_id_uindex ON users (id);

Now that we have our table, let's create our first function.

$ faas-cli new faas-ruby-register --lang ruby

This will download the templates and create our first function. Open the Gemfile and add the ‘ruby-mysql’ gem:

source 'https://rubygems.org'

gem 'ruby-mysql'

That's the only gem we need for this function. ruby-mysql is a pure-Ruby implementation of a MySQL connector and suits the needs of our project. We will be using this gem extensively.

Now open up handler.rb and add the following code.

require 'mysql'
require 'json'
require 'digest/sha2'

class Handler
    def run(req)
      @my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      json = JSON.parse(req)
      if !user_exists(json["username"], json["email"])
        password = Digest::SHA2::new << json['password']
        stmt = @my.prepare('insert into users (username, password, email, first_name, last_name) values (?, ?, ?, ?, ?)')
        stmt.execute json["username"], password.to_s, json["email"], json["first_name"], json["last_name"]
        return "{'username': #{json["username"]}, 'status': 'created'}"
      else
        return "{'error': 'Username or E-Mail already in use'}"
      end
    end

    def user_exists(username, email)
      @my.query("SELECT username FROM users WHERE username = '#{Mysql.escape_string(username)}' OR email = '#{Mysql.escape_string(email)}'").each do |username|
        return true
      end
      return false
    end
end

And that's our function. Let's have a look at it in detail. In the run method I declared a class variable @my holding our MySQL connection. I then parsed the JSON passed to the function. I used a user_exists method to determine if a user already exists in our database; if not, I moved on to create the new user. I hashed the password with SHA2 and used a prepared statement to insert the new user.

Open your faas-ruby-register.yml and make it match the following. Please ensure you use your own image instead of mine if you are making modifications.

provider:
  name: faas

functions:
  faas-ruby-register:
    lang: ruby
    handler: ./faas-ruby-register
    image: affixxx/faas-ruby-register
    environment:
      mysql_host: <HOST>
      mysql_user: <USERNAME>
      mysql_password: <PASSWORD>
      mysql_database: <DB>

Now let's deploy and test the function!

$ faas-cli build -f faas-ruby-register.yml # Build our function Container
$ faas-cli push -f faas-ruby-register.yml # Push our container to dockerhub
$ faas-cli deploy -f faas-ruby-register.yml # deploy the function
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'username': username, 'status': 'created'}
$ echo '{"username": "affixx", "password": "TestPassword", "email":"caontact@keiran.scot", "first_name": "keiran", "last_name":"smith"}' | faas-cli invoke faas-ruby-register
{'error': 'Username or E-Mail already in use'}

Awesome, register works! Let's move on.

Logging In

Now that we have a user in our database, we need to log them in. This will require generating a JWT, so we need an RSA keypair. On a Unix-based system, run the following commands.

$ openssl genpkey -algorithm RSA -out private_key.pem -pkeyopt rsa_keygen_bits:2048
$ openssl rsa -pubout -in private_key.pem -out public_key.pem

Now that we have our keypair, we need a base64 representation of each key; using a method of your choice, generate the base64 representation.
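One way to do it from Ruby itself, sketched with a throwaway in-memory key (the real keys live in the PEM files created above):

```ruby
require 'openssl'
require 'base64'

# Throwaway keypair standing in for private_key.pem / public_key.pem.
key = OpenSSL::PKey::RSA.new(2048)

# strict_encode64 produces a single line with no embedded newlines,
# which is what we want for an environment variable.
private_b64 = Base64.strict_encode64(key.to_pem)
public_b64  = Base64.strict_encode64(key.public_key.to_pem)

# The login function will reverse this with Base64.decode64:
restored = OpenSSL::PKey::RSA.new(Base64.decode64(private_b64))
puts restored.to_pem == key.to_pem # true
```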

Let's generate our new function with faas-cli:

$ faas-cli new faas-ruby-login --lang ruby

Before we start working on the function, let's get the faas-ruby-login.yml file ready:

provider:
  name: faas

functions:
  faas-ruby-login:
    lang: ruby
    handler: ./faas-ruby-login
    image: affixxx/faas-ruby-login
    environment:
      public_key: <BASE64_RSA_PUBLIC_KEY>
      private_key: <BASE64_RSA_PRIVATE_KEY>
      mysql_host: mysql
      mysql_user: root
      mysql_database: users

Now we can write the function. This one is a little more complex than registration, so open the faas-ruby-login/handler.rb file and replace its contents with the following.

require 'mysql'
require 'json'
require 'jwt'
require 'base64'
require 'digest/sha2'

class Handler
    def run(req)
      my = Mysql.connect(ENV['mysql_host'], ENV['mysql_user'], ENV['mysql_password'], ENV['mysql_database'])
      token = nil
      req = JSON.parse(req)
      username = Mysql.escape_string(req['username'])
      my.query("SELECT email, password, username, first_name, last_name FROM users WHERE username = '#{username}'").each do |email, password, username, first_name, last_name|
        digest = Digest::SHA2.new << req['password']
        if digest.to_s == password
          user = {
            email: email,
            first_name: first_name,
            last_name: last_name,
            username: username
          }
          token = generate_jwt(user)
          return "{'username': '#{username}', 'token': '#{token}'}"
        else
          return "{'error': 'Invalid username/password'}"
        end
      end
      return "{'error': 'Invalid username/password'}"
    end

    def generate_jwt(user)
      payload = {
        nbf: Time.now.to_i - 10,
        iat: Time.now.to_i - 10,
        exp: Time.now.to_i + ((60) * 60) * 4,
        user: user
      }

      priv_key = OpenSSL::PKey::RSA.new(Base64.decode64(ENV['private_key']))

      JWT.encode(payload, priv_key, 'RS256')
    end
end

The biggest difference between this function and our register function is, of course, the JWT generator. JSON Web Tokens (JWT) are an open, industry-standard method (RFC 7519) for representing claims securely between two parties.

Our payload contains the user hash we fetched from the database; however, some other fields are required as well:

nbf: Not Before. The token is not valid before this timestamp. We subtract 10 seconds to account for clock drift.
iat: Issued At. The time we issued the token; again set 10 seconds in the past to account for clock drift.
exp: Expiry. When the token is no longer valid; we set it to 14400 seconds (4 hours) in the future.
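The claim arithmetic above can be sketched with plain Ruby (standard library only, independent of the jwt gem):

```ruby
# Build the same claim timestamps that generate_jwt uses
now = Time.now.to_i
claims = {
  nbf: now - 10,             # not valid before: 10 s in the past for clock drift
  iat: now - 10,             # issued at: also skewed back 10 s
  exp: now + (60 * 60) * 4   # expires 14400 s (4 hours) from now
}

# Total validity window from nbf to exp is 14410 seconds
puts claims[:exp] - claims[:nbf]  # => 14410
```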

Let's test our login function!

$ echo '{"username": "affixx", "password": "TestPassword"}' | faas invoke faas-ruby-login
{'username': 'affixx', 'token': 'eyJhbGciOiJSUzI1NiJ9.eyJuYmYiOjE1MjY1OTMyMTgsImlhdCI6MTUyNjU5MzIxOCwiZXhwIjoxNTI2NjA3NjI4LCJ1c2VyIjp7ImVtYWlsIjoiY2FvbnRhY3RAa2VpcmFuLnNjb3QiLCJmaXJzdF9uYW1lIjoia2VpcmFuIiwibGFzdF9uYW1lIjoic21pdGgiLCJ1c2VybmFtZSI6ImFmZml4eCJ9fQ.qchkmOk8dsrw7SL6Rhi0nHyIlaHX4pzUNXXAQMEOb6IU0n1uT9AJEhFVptZ7tueriaTauY1zmYjKm79pd_UfekVICU4EMbGKt8bQaWrmlqpSel88PyQwolI_bYZqybW2TwWYsdwHcGgGgfb8A8ssk9y6YhktviKdofQYPUmLmaB5uljFHkMvNIg-ByJQpTYmCnMfAC-JF6mOsh65dKCP3qz78HiSX3gHODG1Gk1OJbePVpyDNmw7pGrO97c7kUgTWs5wVmD7Kgs697tAkPz65pFDavwZHSvdzpPEZ47Bh8NCGfWe73KYpceCjmOZK6tuawIx0MM4YP0XWke7kOtKkg'}

Success! Let's check the token on JWT.io.

Success: we have a valid JWT.

What happens with an invalid username/password?

$ echo '{"username": "affixx", "password": "WrongPassword"}' | faas invoke faas-ruby-login
{'error': 'Invalid username/password'}

Exactly as expected!

That's all folks!

Thanks for reading part 1 of this tutorial. Check back next #FaaSFriday, when we will work on posting!

As always, the code for this tutorial is available on GitHub; there are also extra tools available!

The post #FaasFriday – Building a Serverless Microblog in Ruby with OpenFaaS – Part 1 appeared first on Keiran.SCOT.

Atom to replace Netbeans

Posted by Guillaume Kulakowski on May 17, 2018 06:52 PM

In the past, I wrote an article about my move from Eclipse to Netbeans. I must say that over time this IDE has disappointed me: a rather long release cycle; a Python plugin that has not been maintained for almost 3 years; heavy (who said Java?); Oracle, which through its acquisition policy […]

The post Atom to replace Netbeans appeared first on Guillaume Kulakowski's blog.

Blue Sky Discussion: EPEL-next or EPIC

Posted by Stephen Smoogen on May 16, 2018 11:54 PM

EPIC Planning Document

History / Background

Since 2007, Fedora Extra Packages for Enterprise Linux (EPEL) has been rebuilding Fedora Project Linux packages for Red Hat Enterprise Linux and its clones. Originally the goal was to compile packages that RHEL did not ship but which were useful in running Fedora Infrastructure and other sites. Packages would be forked from the nearest Fedora release (Fedora 3 for EPEL-4, Fedora 6 for EPEL-5) with little updating or moving of packages, in order to give lifetimes similar to the EL packages. Emphasis was placed on back-porting fixes rather than upgrading, and on avoiding large feature changes that would cause confusion. If a package could no longer be supported, it was removed from the repository to eliminate security concerns. At the time, RHEL lifetimes were thought to be only 5-6 years, so back-porting did not look like a large problem.

As RHEL and its clones became more popular, Red Hat began to extend the lifetime of Enterprise Linux releases from 6 years to 10 years of "active" support. This made back-porting fixes harder, and many packages in EPEL would be "aged out" and removed. This in turn caused problems for consumers who had tied kickstarts and other scripts to having access to those packages. Attempts to fix this by pushing for release-upgrade policies have run into resistance from packagers who already find focusing on the main Fedora releases a full-time job and only build EPEL packages as one-offs. Other attempts to update policies have run into the need for major updates and changes to build tools and scripting, with no time to do so. Finally, because EPEL has not changed substantially in 10 years, conversations about changing it run into "well, EPEL has always done it like this" from consumers, packagers, and engineering alike.

In order to get around many of these points of resistance to changing EPEL, I suggest that we frame the problems around a new project called Extra Packages for Inter Communities. The goal of this project would be to build packages from Fedora Project Linux releases for the various Enterprise Linux distributions, whether Red Hat Enterprise Linux, CentOS, Scientific Linux, or Oracle Enterprise Linux.

Problems and Proposals

Composer Limitations:

Currently EPEL uses the Fedora build system to compose a release of packages every couple of days. Because each day creates a new compose, the only channels are the various architectures and a testing channel where future packages can be tested. Updates are not kept in a separate tree because EPEL does not track releases.
EPEL packagers currently have to support a package for the 10-year lifetime of a RHEL release. If they update a package, all older versions are no longer available. If they no longer want to support a package, it is completely removed. While this sounds like it increases security for consumers, note that Fedora does not remove old packages from older releases.
Proposed Solution
EPIC will match the Enterprise Linux major/minor numbers for releases. This means that a set of packages will be built for, say, EL5 sub-release 11 (aka 5.11). For each supported architecture, those packages will populate release, updates, and updates-testing directories. This will allow a set of packages to be composed when the sub-release occurs and then stay until that release is ended.

Once a minor release is done, the old tree will be hard linked to an appropriate archive directory.


A new tree will then be built and placed in the appropriate subdirectories. Hard links for "latest" will point to the new tree, and after some time the old tree will be removed from the active directory tree.

Channel Limitations:

EPEL is built against a subset of the channels that Red Hat Enterprise Linux offers customers, namely Server, High Availability, Optional, and some form of Extras. Effort is made to ensure that EPEL does not replace anything in those channels with newer packages. However, this does not extend to packages in the Workstation, Desktop, and similar channels, which can cause problems where EPEL's packages replace something in those channels.
Proposed Solution
EPIC will be built against the latest released CentOS minor release using the channels which are enabled by default in CentOS-Base.repo. These packages are built from source code that Red Hat delivers via a git mechanism to the CentOS project, which rebuilds them for mass consumption. Packages will not be allowed to replace or update packages according to the standard RPM Name-Epoch-Version-Release (NEVR) mechanism. This will allow EPIC to actually serve more clients.
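As a rough illustration of the version ordering involved, here is a simplified sketch of RPM's version-segment comparison in Ruby. The real rpmvercmp also handles tildes, carets, and epoch promotion, so treat this only as an approximation of the idea, not as the actual algorithm:

```ruby
# Simplified sketch of rpmvercmp: split versions into numeric and
# alphabetic segments and compare them pairwise.
def vercmp(a, b)
  sa = a.scan(/\d+|[a-zA-Z]+/)
  sb = b.scan(/\d+|[a-zA-Z]+/)
  sa.zip(sb).each do |x, y|
    return 1 if y.nil?              # a has extra segments: a sorts newer
    if x =~ /^\d/ && y =~ /^\d/
      c = x.to_i <=> y.to_i         # numeric segments compare as integers
    elsif x =~ /^\d/
      return 1                      # numeric segments sort above alphabetic
    elsif y =~ /^\d/
      return -1
    else
      c = x <=> y                   # alphabetic segments compare lexically
    end
    return c unless c.zero?
  end
  sa.length <=> sb.length
end

# 1.10 is newer than 1.9 because segments compare numerically, not lexically
puts vercmp('1.10', '1.9')  # => 1
```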

Build System Limitations

EPEL is built against Red Hat Enterprise Linux. Because these packages are not meant for general consumption, the Fedora build system does not import them but instead builds against them in something like a hidden build-root. This causes multiple problems:
  • If EPEL has a package with the same name, it supersedes the RHEL one even if the RHEL NEVR is newer. This means packages may get built against old versions, and constant pruning needs to be done.
  • If the EPEL package has a newer NEVR, it will replace the RHEL one, which may not be what the consumer intended. This may break other software requirements.
  • Because parts of the build are hidden, the package build may not be as auditable as some consumers would like.
Proposed Solution
EPIC will import into the build system the CentOS release it is building against. With this, the build is not hidden from view. It also makes it easier to put in rules so that an EPIC package will never replace or remove a core build package. Audits of how a build is done can be clearly shown.

Greater Frequency Rebasing

Red Hat Enterprise Linux has been split between competing customer needs. Customers wish to have some packages stay steady for 10 years with only some updates, but they have also found that they need rapidly updated software. To bridge this, recent RHEL releases have rebased many software packages during a minor release. This has caused problems because EPEL packages were built against older software ABIs which no longer work with the latest RHEL, requiring the EPEL software to be rebased and rebuilt regularly. Conversely, because of how the Fedora build system sees Red Hat Enterprise Linux packages, it only knows about the latest packages. In the 2-4 weeks between the various community rebuilds getting their minor-release packages built, EPEL packages may be built against APIs which are not yet available.

Proposed Solution
The main EPIC releases will be built against specific CentOS releases rather than the Continuous Release (CR) channel. When the next RHEL minor release is announced, EPIC releng will create a new git branch from the current minor version (e.g. 5.10 → 5.11). Packagers can then make major version updates or other needed changes. When the CentOS CR repository is populated with the new rpms, CR will be turned on in koji and packages will be built in the new tree using those packages. After 2 weeks, the EPIC minor release will be frozen and any new packages or fixes will land in the updates tree.




Supported Releases

EL-4: This release is no longer supported by CentOS and will not be supported by EPIC.

EL-5: This release is no longer supported by CentOS and will not be supported by EPIC.

EL-6: This release is supported until Nov 30 2020 (2020-11-30). The base packaging rules for any package would be those used by the Fedora Project during its 12 and 13 releases. Where possible, EPIC will provide macros to keep packaging more in line with current packaging rules.

EL-7: This release is supported until Jun 30 2024 (2024-06-30). The base packaging rules for any package would be those used by the Fedora Project during its 18 and 19 releases. Because EL7 has seen major updates to certain core software, it may be possible to follow newer packaging rules from more recent releases.

EL-next: Red Hat has not publicly announced what its next release will be, when it will be released, or what its lifetime will be. When that occurs, it will become clearer which Fedora release its packaging will be based on.

GIT structure

Currently EPEL uses only one branch for every major RHEL release. In order to better match how current RHEL releases contain major differences between minor versions, EPIC will have a branch for every major.minor release. This will allow people who need older versions to snapshot and build their own software from them. There are several naming patterns which will need to be researched.


Git module patterns will need to match what upstream delivers for any future EL.

Continuous Integration (CI) Gating


EPIC-6: The EL-6 life-cycle is reaching its final sub-releases, with more focus and growth going to EL-7 and the future. Because of this, gating will be turned off for EPIC-6. Testing of packages can be done at the packager's discretion but is not required.


EPIC-7: The EL-7 life-cycle is midstream, with 1-2 more minor releases containing major API changes. It therefore makes sense to research whether gating can be put in place for the next minor release. If retrofitting the tools onto the older EL proves feasible, then it can be turned on.


EPIC-next: Because gating is built into current Fedora releases, there should be no problem turning it on for a future release. Packages which do not pass testing will be blocked, just as they will be in Fedora 29+ releases.



Modules

EPIC-6: Because EL-6's tooling is locked at this point, it does not make sense to investigate modules.


EPIC-7: Currently EL-7 does not support Fedora modules and would require updates to yum, rpm, and other tools in order to do so. If these show up in some form in a future minor release, then trees for modules can be created and builds done.


EPIC-next: The tooling for modules can match how Fedora approaches it. This means that the rules for module inclusion will be similar to package inclusion: EPIC-next modules must not replace or conflict with CentOS modules. They may use their own name-space to offer newer versions than what CentOS offers, and those modules may be removed in the next minor release if CentOS offers them by then.

Build/Update Policy

Major Release

In the past, Red Hat has released a public beta before finalizing its next major version. Where possible, the rebuilders have come out with their versions of this release in order to learn what gotchas they will hit when the .0 release occurs. Once the packages for the beta are built, EPIC will make a public call for packages to be released to it. Because packagers may not want to support a beta, or may know that there will be other problems, these packages will NOT be auto-branched from Fedora.

Minor Release

The current method CentOS uses to build a minor release is to begin rebuilding packages, patching problems, and, when ready, putting those packages in its /cr/ directory. People then test these while updates are built and the ISOs for the final minor release are produced. The steps for EPIC release engineering will be the following:
  1. Branch all current packages from X.Y to X.Y+1
  2. Make any Bugzilla updates needed
  3. Rebuild all branched packages against CR
  4. File FTBFS against any packages.
  5. Packagers will announce major updates to mailing list
  6. Packagers will build updates against CR.
  7. 2 weeks in, releng will cull any packages which are still FTBFS
  8. 2 weeks in, releng will compose and lock the X.Y+1 release
  9. Symlinks will point to the new minor release.
  10. 4 weeks in, releng will finish archiving off the X.Y release

Between Releases

Updates and new packages between releases will be pushed to the appropriate /updates/X.Y/ tree. Packagers will be encouraged to make only minor, non-API-breaking updates during this time. Major changes are possible, but need to follow this work flow:
  1. Announce to the EPEL list that a change is required and why
  2. Open a ticket with the EPIC steering committee on this change
  3. The EPIC steering committee approves/disapproves the change
  4. If approved, the change happens, but the packages go into updates
  5. If not approved, it can be done in the next minor release.

Build System

Build in Fedora

Currently EPEL is built in Fedora using the Fedora build system, which integrates koji, bodhi, greenwave, and other tools. This could still be used for EPIC.

Build in CentOS

EPIC could be built in the CentOS BuildSystem (CBS) which also uses koji and has some integration to the CentOS Jenkins CI system.

Build in Cloud

Instead of using existing infrastructure, EPIC would be built with newly stood-up builders in Amazon or similar cloud environments. The reasoning behind this would be to see whether other build systems could eventually transition there.


Glossary

Blue Sky Project: A project with a different name to help eliminate preconceptions associated with the existing project.
Customer: A person who pays for a service, either in money, time, or goods.
Consumer: Sometimes called a user. A person who consumes the service without putting work into it.
EPEL: Extra Packages for Enterprise Linux. A product name which was to be replaced years ago, but no one came up with a better one.
EPIC: Extra Packages Inter Community.
RHEL: Red Hat Enterprise Linux.

Last updated 2018-05-16 19:10:17 EDT. This document was imported from an adoc file.

Fantastic kernel patches and where to find them

Posted by Laura Abbott on May 16, 2018 06:00 PM

I've griped before about kernel development being scattered and spread about. A quick grep of MAINTAINERS shows over 200 git trees and even more mailing lists. Today's discussion is a partial enumeration of some common mailing lists, git trees and patchwork instances. You can certainly find some of this in the MAINTAINERS file.

  • LKML. The main mailing list. This is the one everyone thinks of when they think 'kernel'. Really though, it mostly serves as an archive of everything at this point. I do not recommend e-mailing just LKML with no other lists or people. Sometimes you'll get a response but think of it more as writing to your blog that has 10 followers you've never met, 7 of which are bots. Or your twitter. There is a patchwork instance and various mail archives out there. I haven't found one I actually like as much as GMANE unfortunately. The closest corresponding git tree is the master where all releases happen.

  • The stable mailing list. This is where patches go to be picked up for stable releases. The stable releases have a set of rules for how patches are picked up. Most important is that a patch must be in Linus' tree before it will be applied to stable. Greg KH is the main stable maintainer, and he does a fantastic job of taking care of the large number of patches that come in. In general, if a patch is properly tagged for stable it will show up eventually. There is a tree for his queue of patches to be applied, along with the stable git trees.

  • Linux -next. This is the closest thing to an integration tree right now. The goal is to find merge conflicts and bugs before they hit Linus' tree. All the work of merging trees is handled manually. Typically subsystem maintainers have a branch that's designated for -next which gets pulled in on a daily basis. Running -next is not usually recommended for anything more than "does this fix your problem" unless you are willing to actively report bugs. Running -next and learning how to report bugs is a great way to get involved though. There's a tree with tags per day.

  • The -mm tree. This gets its name from memory management but really it's Andrew Morton's queue. Lots of odd fixes end up getting queued through here. Officially, this gets maintained with quilt. The tree for -next "mmotm" (mm of the moment) is available as a series. If you just want the memory management part of the tree, there's a tree available for that.

  • Networking. netdev is the primary mailing list which covers everything from core networking infrastructure to drivers. And there's even a patchwork instance too! David Miller is the top level networking maintainer and has a tree for all your networking needs. He has a separate -next tree. One thing to keep in mind is that networking patches are sent to stable in batches and not just tagged and picked up by Greg KH. This sometimes means a larger gap between when a patch lands in Linus' branch and when it gets into a stable release.

  • Fedora tree. Most of the git trees listed above are "source git/src-git" trees, meaning it's the actual source code. Fedora officially distributes everything in "pkg-git" form. If you look at the official Fedora kernel repository, you'll see it contains a bunch of patches and support files. This is similar to the -mm and -stable-queue. Josh Boyer (Fedora kernel maintainer emeritus) has some scripts to take the Fedora pkg-git and put it on kernel.org. This gets updated automatically with each build.

  • DRM. This is for anything and everything related to graphics. Most everything is hosted at freedesktop.org, including the mailing list. Recently, DRM has switched to a group maintainer model (Daniel Vetter has written about some of this philosophy before). Ultimately though, all the patches come through the main DRM git repo. There's a DRM -tip for -next-like testing of all the latest graphics work. Graphics maintainers may occasionally request you test that tree if you have graphics problems. There's also a patchwork instance.

Fedora 28 setup (After install)

Posted by Robbi Nespu on May 16, 2018 04:00 PM

What is this about?

Fedora 28 has already been released, so maybe this is the perfect time to do a clean install instead of upgrading to the newest version; since Fedora 25, I have just upgraded each time a new version was released.


Yes, there are lots of things that need to be configured again after a clean format. Here are a few things on my list:

1 - Change the hostname (note: a reboot is needed after changing the hostname)

$ hostnamectl status # view current hostname
$ hostnamectl set-hostname --static "robbinespu" # set up new hostname

2 - Configure DNF: use delta RPMs and the fastest mirror (edit the /etc/dnf/dnf.conf file)
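For reference, this amounts to adding two lines under the [main] section of /etc/dnf/dnf.conf (both are documented dnf options; the rest of the file stays as shipped):

```ini
[main]
deltarpm=true
fastestmirror=true
```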


3 - Environment: restore bashrc and metadata files

FYI, I have already exported and backed up the selected dot files and a few metadata files via this trick1; now I need to import them from the repository into my workstation using the same tutorial1.

4 - Install RPM fusion repository and get most recent update

$ sudo dnf update --refresh
$ sudo dnf install https://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm 
$ sudo dnf install https://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

5 - Enabling SSH : Since sometimes I need to access this machine remotely

$ sudo systemctl start sshd
$ sudo systemctl enable sshd

6 - Performance monitoring tools for Linux

$ sudo dnf install sysstat htop glances

There is also an app similar to CCleaner on Windows called Stacer, but there is no rpm for the newest release; it is distributed as an AppImage instead.

$ cd /tmp
$ wget https://github.com/oguzhaninan/Stacer/releases/download/v1.0.9/Stacer-x86_64.AppImage
$ sudo chmod a+x Stacer*.AppImage
$ ./Stacer*.AppImage

7 - Internet : Few list of internet stuff that I use

Skype is now available on Linux (thanks, Microsoft), but it is quite memory-hungry. You can also use Pidgin with the Skype purple plugin.

$ sudo dnf install -y wget alsa-lib pulseaudio glibc libXv libXScrnSaver 
$ wget https://go.skype.com/skypeforlinux-64.rpm
$ sudo dnf install -y skypeforlinux-64.rpm

Corebird - Twitter desktop client (but I prefer tweeting using my phone)

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.baedert.corebird

A P2P torrent client for downloading ISOs and other things:

$ sudo dnf install qbittorrent

Liferea is an RSS reader that lets me read news from a feed aggregator. I already backed up the *.opml file from my previous installation; now I just import it and sync the feeds.

$ sudo dnf install liferea

Flameshot is a screenshot tool that is better than the others, with awesome features like paint tools, highlighting, blurring, uploading to imgur, and many more.

$ sudo dnf install flameshot

The Barracuda VPN client needs to be downloaded from the Barracuda portal. It provides *.deb and *.rpm packages for x86 and x64 machines.

$ tar xzf VPNClient_4.1.1_Linux.tar.gz
$ sudo rpm -Uhv VPNClient_4.1.1_Linux_x86_64.rpm 

For the next steps, refer to my private note on Bitbucket.

8 - Office stuff: scanner, OnlyOffice, compression and archiving tools

$ sudo dnf install simple-scan
$ sudo rpm --import "http://keyserver.ubuntu.com/pks/lookup?op=get&search=0x8320CA65CB2DE8E5"
$ sudo bash -c 'cat > /etc/yum.repos.d/onlyoffice.repo << EOF
[onlyoffice]
name=onlyoffice repo
baseurl=http://download.onlyoffice.com/repo/centos/main/noarch/
gpgcheck=1
enabled=1
EOF'
$ sudo dnf install onlyoffice-desktopeditors
$ sudo dnf install unzip p7zip

9 - Cross platform and virtualization

Install and open windows application via Wine

$ sudo dnf install wine

Hardware virtualization support via libvirt2

$ sudo dnf group install --with-optional virtualization
$ sudo systemctl start libvirtd
$ sudo systemctl enable libvirtd

10 - Entertainment stuff: players and codecs

$ sudo dnf install youtube-dl vlc
$ sudo dnf install \
gstreamer-plugins-base \
gstreamer1-plugins-base \
gstreamer-plugins-bad \
gstreamer-plugins-ugly \
gstreamer1-plugins-ugly \
gstreamer-plugins-good-extras \
gstreamer1-plugins-good \
gstreamer1-plugins-good-extras \
gstreamer1-plugins-bad-freeworld \
ffmpeg

11 - Software engineering related stuff:

Sublime text editor

$ sudo rpm -v --import https://download.sublimetext.com/sublimehq-rpm-pub.gpg
$ sudo dnf config-manager --add-repo https://download.sublimetext.com/rpm/stable/x86_64/sublime-text.repo
$ sudo dnf install sublime-text

I also use VIM to open files from the terminal.

$ cd /tmp/; git clone https://github.com/tomasr/molokai.git
$ sudo dnf install vim vim-plugin-powerline
$ sudo cp molokai/colors/molokai.vim /usr/share/vim/vim81/colors/
$ mkdir -p ~/.vim/bundle
$ cd ~/.vim/bundle
$ git clone https://github.com/VundleVim/Vundle.vim.git
$ git clone https://github.com/Valloric/YouCompleteMe.git
$ sudo dnf install automake gcc gcc-c++ kernel-devel cmake python-devel
$ cd ~/.vim/bundle/YouCompleteMe
$ git submodule update --init --recursive
$ /usr/bin/python ./install.py --clang-completer --go-completer --java-completer
$ vim +PluginInstall +qall
$ sudo dnf install ncurses-compat-libs

Note: please restart your X11/Wayland session. If the Powerline font is still broken, please follow this step.

Sometimes, I do kernel or package compiling stuff

$ sudo dnf group install "C Development Tools and Libraries"

12 - GNOME plugins and addons

Some of the plugins that I activate and use are: alternatetab, application menu, caffeine, dash to dock, impatience, netspeed, places status indicator, services systemd, status area horizontal spacing, and top icon plus.

To manage, edit, add, and delete application launchers, I use menulibre, which is a FreeDesktop-compliant menu editor.

$ sudo dnf install menulibre


Well, this is actually an incomplete list of my todos. I had planned to use KDE, but it really doesn't meet my expectations, so I re-installed once again and am now using the GNOME desktop.

The thing I really hate and am frustrated about with GNOME is that the shell keeps eating more and more memory over time!

$ free -mth
              total        used        free      shared  buff/cache   available
Mem:            11G        3.2G        4.2G        350M        4.3G        8.4G
Swap:          5.9G          0B        5.9G
Total:          17G        3.2G         10G

As you can see, it now uses 3.2GB of the 11GB of RAM installed, compared to KDE, which used only around 300MB - 1GB of memory. I hope this kind of problem will be fixed soon.

GNOME Performance Hackfest

Posted by Alberto Ruiz on May 16, 2018 03:04 PM

We’re about to finish the three-day first GNOME Performance Hackfest here in Cambridge.

We covered a number of topics; there are three major areas, and in each of them there have been a bunch of initiatives.





GNOME Shell performance

Jonas Adahl, Marco Trevisan, Eric Anholt, Emmanuele Bassi, Carlos Garnacho, and Daniel Stone have been flocking together around Shell performance. There have been some high-level discussions about the pipeline, Clutter, Cogl, cairo/gtk3, and gtk4.

The main effort has been around creating probes across the stack to help Christian Hergert with sysprof (in drm, mutter, gjs…) so that we can actually measure performance bottlenecks at different levels and pinpoint culprits.

We’ve also been looking at the story behind search providers to see if we can rearchitect things a bit to have fewer roundtrips and avoid persistent session daemons while achieving the same results. Discussions are still ongoing on that front.

GNOME Session resource consumption

Hans de Goede put together a summary of the resources consumed in a typical GNOME session in Fedora, along with tweaks to avoid them; you can check the list in the agenda.

There are some Fedora-specific issues there, but the biggest improvement we can achieve is shutting down GDM's own gnome-shell instance, for which Hans already has a working patch. This should reduce resource consumption by about 280 MB of RAM.

The second biggest target is GNOME Software, which we keep running primarily for its shell search provider. Richard Hughes was here yesterday and is already working on a solution for this.

We are also looking into the different GNOME Settings Daemon processes and trying to figure out which ones we can shut down until needed.

Surely there’s stuff I’ve missed, and hopefully we’ll see blogposts and patches surfacing soon after we wrap up the event. Hopefully we can follow up during GUADEC and start showing the results.

On Tuesday we enjoyed some drinks out kindly hosted by Collabora.

I’d like to thank Eben Upton and the Raspberry Pi Foundation for sponsoring the venue and sending Eric Anholt over.


New badge: SELF 2018 !

Posted by Fedora Badges on May 16, 2018 02:18 PM
SELF 2018: You visited the Fedora table at SouthEast LinuxFest (SELF) 2018!

New badge: Let's have a party (Fedora 28) !

Posted by Fedora Badges on May 16, 2018 02:09 PM
Let's have a party (Fedora 28): You organized a party for the release of Fedora 28!

New badge: Fedora 28 Release Partygoer !

Posted by Fedora Badges on May 16, 2018 01:50 PM
Fedora 28 Release Partygoer: You attended a local release party to celebrate the launch of Fedora 28!

New badge: OSCAL 2018 Attendee !

Posted by Fedora Badges on May 16, 2018 01:42 PM
OSCAL 2018 Attendee: You visited the Fedora booth at OSCAL 2018 in Tirana, Albania!

Cockpit 168

Posted by Cockpit Project on May 16, 2018 11:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 168.

Improve checks for root privilege availability

Many actions, like joining an IPA domain or rebooting, can only be performed by an administrator (root user).

Cockpit previously checked if the user was a member of the wheel (used in Red Hat Enterprise Linux, CentOS, and Fedora) or sudo (used in Debian and Ubuntu) groups to enable these actions. Simple group checking was insufficient, as different group names are used by other operating systems and configurations, or a system might be set up to rely on custom user-based sudo rules.

Cockpit no longer makes assumptions about special groups. Now, Cockpit simply checks if the user is capable of running commands as root.

As a result of this fix, privileged operations now work by default with FreeIPA, which uses the admins@domain.name group.

Try it out

Cockpit 168 is available now:

New badge: FLISOL 2018 Attendee !

Posted by Fedora Badges on May 16, 2018 09:17 AM
FLISOL 2018 Attendee: You visited the Fedora booth at FLISOL 2018!

Introducing the 1.8 freedesktop runtime in the gnome nightly builds

Posted by Alexander Larsson on May 16, 2018 08:03 AM

All the current Flatpak runtimes in wide use are based on the 1.6 Freedesktop runtime. This is a two-layered beast where the lower layer is built using Yocto and the upper layer is built using flatpak-builder.

Yocto let us get a basic system bootstrapped, but it is not really a great match for the needs of a Flatpak runtime. We’ve long wanted a base that targeted sandboxed builds which is closer to upstream and without the embedded cross-compiled legacy of Yocto. This is part of the reason why Tristan started working on BuildStream, and why the 1.8 Freedesktop runtime was created from scratch, using it.

After a herculean effort from the people at Codethink (who sponsor this effort), we now have a working Yocto-free version of the Platform and Sdk. You can download the unstable version from here and start playing with it. It is not yet frozen or API/ABI stable, but it's getting there.

The next step in this effort is to test it as widely as possible to catch any issues before the release is frozen. In order to do this I rebased the Gnome nightly runtime builds on top of the new Freedesktop version this week. This is a good match for a test release, because it has no real ABI requirements (being rebuilt with the apps daily), yet gets a fair amount of testing.

WARNING: During the initial phase it is likely that there will be problems. Please test your apps extra carefully and report all the issues you find.

In the future, the goal is to also convert the Gnome runtimes to BuildStream. Work on this has started, but for now we want to focus on getting the base runtime stable.

Revue de presse de Fedora 28

Posted by Charles-Antoine Couret on May 15, 2018 03:00 PM

Ever since Fedora 19, I have been posting a press review of each new release on the Fedora-fr mailing list: a recap of which sites covered it and how. I always do it two weeks after the release (so that everyone has had time to write about it). So now, on to Fedora 28!

Of course, I leave out my own blog and the fedora-fr forum.

News websites

That's 7 sites out of the 25 contacted.

Blogs, personal sites, or sites not contacted

That's 2 sites.


The number of sites covering Fedora 28 is slightly down, with two fewer blogs. Many articles are based on what I wrote myself (either the short or the long version). Note that developpez.net contacted me about publishing the long version in the future, which I will of course try to do.

In the week of the release, we saw roughly the following increase in visits compared to the previous week:

  • Forums: 4% (about 250 visits)
  • Documentation: down 1% (about 40 visits)
  • The Fedora-fr site: 43% (about 500 more visits)
  • Borsalinux-fr: 344% (about 90 more visits)

Keep in mind the particular circumstances: a release on May 1st, a public holiday across most of the French-speaking world, apart from Quebec.

If you know of another link, don't hesitate to share it! See you again for Fedora 29.

Protect your Fedora system against this DHCP flaw

Posted by Fedora Magazine on May 15, 2018 02:59 PM

A critical security vulnerability was discovered and disclosed earlier today in dhcp-client. This DHCP flaw carries a high risk to your system and data, especially if you use untrusted networks such as a WiFi access point you don't own. Read on to learn how to protect your Fedora system.

Dynamic Host Configuration Protocol (DHCP) allows your system to get configuration from a network it joins. Your system makes a request for DHCP data, and typically a server such as a router answers. The server provides the necessary data for your system to configure itself. This is how, for instance, your system configures itself properly for networking when it joins a wireless network.
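The configuration data the server hands back is recorded by dhclient in a plain-text lease file on the client (on Fedora, under /var/lib/dhclient/). A minimal sketch of pulling the options out of one, with a sample lease inlined so it is self-contained:

```shell
# Hedged sketch: extract the server-provided options from a dhclient
# lease. The lease text here is a made-up sample; on a real Fedora
# system, read /var/lib/dhclient/dhclient.leases instead.
lease='lease {
  interface "wlan0";
  fixed-address 192.168.1.50;
  option subnet-mask 255.255.255.0;
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.1;
}'
# Keep only "option" lines, dropping the keyword and trailing semicolon.
printf '%s\n' "$lease" | awk '/option/ { $1=""; sub(/^ /,""); sub(/;$/,""); print }'
# prints:
#   subnet-mask 255.255.255.0
#   routers 192.168.1.1
#   domain-name-servers 192.168.1.1
```

These options are exactly the data an attacker on the local network gets to supply if they can answer DHCP requests, which is what makes the flaw below serious.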

However, an attacker on the local network may be able to exploit this vulnerability. Using a flaw in a dhcp-client script that runs under NetworkManager, the attacker may be able to run arbitrary commands with root privileges on your system. This DHCP flaw puts your system and your data at high risk. The flaw has been assigned CVE-2018-1111 and has a Bugzilla tracking bug.

Guarding against this DHCP flaw

New dhcp packages contain fixes for Fedora 26, 27, and 28, as well as Rawhide. The maintainers have submitted these updates to the updates-testing repositories. They should show up in stable repos within a day or so of this post for most users. The fixed packages are:

  • Fedora 26: dhcp-4.3.5-11.fc26
  • Fedora 27: dhcp-4.3.6-10.fc27
  • Fedora 28: dhcp-4.3.6-20.fc28
  • Rawhide: dhcp-4.3.6-21.fc29
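To check whether your installed dhcp-client is already at or past the fixed version, you can compare version strings. A rough sketch using sort -V (rpm's own version comparison is more precise, so treat this as an approximation); the Fedora 28 version is shown, substitute the one for your release:

```shell
# Hedged sketch: compare the installed dhcp-client version against the
# fixed one for this release (Fedora 28 shown; adjust for yours).
installed=$(rpm -q --qf '%{VERSION}-%{RELEASE}\n' dhcp-client 2>/dev/null) || installed=""
fixed="4.3.6-20.fc28"
# sort -V orders version strings; if the fixed version sorts first,
# the installed one is at least as new.
if [ -n "$installed" ] && \
   [ "$(printf '%s\n' "$fixed" "$installed" | sort -V | head -n1)" = "$fixed" ]; then
    echo "dhcp-client is patched"
else
    echo "dhcp-client missing or older than $fixed"
fi
```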

Updating a stable Fedora system

To update immediately on a stable Fedora release, use this command with sudo. Type your password at the prompt, if necessary:

sudo dnf --refresh --enablerepo=updates-testing update dhcp-client

Later, use the standard stable repos to update. To update your Fedora system from the stable repos, use this command:

sudo dnf --refresh update dhcp-client

Updating a Rawhide system

If your system is on Rawhide, use these commands to download and update the packages immediately:

mkdir dhcp && cd dhcp
koji download-build --arch={x86_64,noarch} dhcp-4.3.6-21.fc29
sudo dnf update ./dhcp-*.rpm

After the nightly Rawhide compose, simply run sudo dnf update to get the update.

Fedora Atomic Host

The fixes for Fedora Atomic Host are in ostree version 28.20180515.1. To get the update, run this command:

atomic host upgrade -r

This command reboots your system to apply the upgrade.

Photo by Markus Spiske on Unsplash.