Fedora People

Share Your Doc, a minimalist version of Pastebin

Posted by Alvaro Castillo on January 15, 2019 07:40 PM

A few days ago I finished a small web tool called Share Your Doc, which lets you share source code, messages, scripts, etc. over the web, much like a typical Pastebin-style service such as Fpaste, which you probably already know.

However, the nice thing about this one is that it works directly with the operating system: it requires no user authentication mechanism, nor does it use FTP connections. You simply add your code, create the token, and you’re done.

It’s a tool...

Security isn’t a feature

Posted by Josh Bressers on January 15, 2019 03:48 PM

As CES draws to a close, I’ve seen more than one security person complain that nobody at the show was talking about security. An incredible number of consumer devices were unveiled, and no doubt there is no security in any of them. I think we get so caught up in the security world that we sometimes forget the VAST majority of people don’t care if something has zero security. People want interesting features that amuse them or make their lives easier. Security is rarely either of these; generally it makes their lives worse, so to many it’s an anti-feature.

Now the first thing many security people think goes something like this: “if there’s no security they’ll be sorry when their lightbulb steals their wallet and dumps the milk on the floor!!!” The reality is that argument will convince nobody; it’s not even very funny, so they’re laughing at us, not with us. Our talking points by their very nature blame all the wrong people, and we try to scare people into listening to us. It’s never worked. Ever. That one time you think it worked, they only pretended to care so you would go away.

So that brings us to the idea that security isn’t a feature. Turning your lights on is a feature. Cooking your dinner is a feature. Driving your car is a feature. Not bursting into flames is not a feature. Well, it sort of is, but nobody talks about it. Security is a lot like the bursting-into-flames thing. Security really is about something not happening, and things not happening is the fundamental problem we have when we try to talk about all this. You can’t build a plausible story around an event that may or may not happen. Trying to build a narrative around something that may or may not happen is incredibly confusing. This isn’t how features work: features do positive things, they don’t not do negative things (I don’t even know if that’s right). Security isn’t a feature.

So the question you should be asking is how we get the products being created to contain more of this thing we keep calling security. The reality is we can’t make this happen given our current strategies. There are two ways products that are less insecure (see what I did there) will get produced. Either the market demands it, which given the current trends isn’t happening anytime soon; people just don’t care about security. Or a government creates regulations that demand it. Given the current state of the world’s governments, I’m not confident that will happen either.

Let’s look at market demand first. If consumers decide that buying horribly insecure products is bad, they could start buying products with more built-in security. But even the security industry can’t define what that really means. How can you measure which product has the best security? Consumers don’t have a way to know which products are safer. How to measure security could be a multi-year blog series, so I won’t get into the details today.

What if the government regulates security? We sort of end up in a similar place to consumer demand. How do we define security? It’s a bit like defining safety I suppose. We’re a hundred years into safety regulations and still get a lot wrong and I don’t think anyone would argue defining safety is much easier than defining security. Security regulation would probably follow a similar path. It will be decades before things could be good enough to create real change. It’s very possible by then the machines will have taken over (that’s the secret third way security gets fixed, perhaps a post for another day).

So here we are again, things seem a bit doom and gloom. That’s not the intention of this post. The real purpose is to point out we have to change the way we talk about security. Yelling at vendors for building insecure devices isn’t going to ever work. We could possibly talk to consumers in a way that resonates with them, but does anyone buy the stove that promises to burst into flames the least? Nobody would ever use that as a marketing strategy. I bet it would have the opposite effect, a bit like our current behaviors and talking points I suppose.

Complaining that companies don’t take security seriously hasn’t ever worked and never will work. They need an incentive to care, us complaining isn’t an incentive. Stay tuned for some ideas on how to frame these conversations and who the audience needs to be.

OpenClass: Continuous integration and delivery in software development

Posted by HULK Rijeka on January 15, 2019 12:44 PM

The Rijeka branch of the Croatian Linux Users’ Association and the Department of Informatics of the University of Rijeka invite you to an OpenClass, to be held on Thursday, January 17, 2019 at 5 PM in the University Departments building, room O-028. The title:

Continuous integration and delivery in software development

The speaker is Kristijan Lenković, a former student of the Department of Informatics and head of a software development team at Coadria/iOLAP in Rijeka.


The talk will cover continuous integration and delivery as part of the modern software development life cycle, with special emphasis on web application development. The goal of this methodology is stable, efficient, secure and fast development, with a predefined infrastructure and environment on which the application will run, and a significant reduction of the high costs, time and risk involved in delivering software to a production environment.

We hope to see you there!

Contribute at the Fedora Test Day for kernel 4.20

Posted by Fedora Magazine on January 14, 2019 06:49 PM

The kernel team is working on final integration for kernel 4.20. This version was just recently released, and will arrive soon in Fedora. This version has many security fixes included. As a result, the Fedora kernel and QA teams have organized a test day for Tuesday, January 15, 2019. Refer to the wiki page for links to the test images you’ll need to participate.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Updating release schedule tasks

Posted by Fedora Community Blog on January 14, 2019 04:03 PM

One thing that I noticed as I got settled in to this role last summer is that the Fedora release schedule tasks look a lot like they did when I first started contributing almost a decade ago. That’s not necessarily a bad thing: if it’s not broken, don’t fix it. But I suspect it’s less because we’re still getting releases out in the same way we did 10 years ago and more because we haven’t captured when reality has drifted from the schedule.

As I start putting together a draft of the Fedora 31 release schedule, I want to take the opportunity to re-converge on reality. Last week, I sent an email to all of the teams that have a schedule in the main release schedule requesting updates to the tasks they have.

I’m putting the question to the larger community now. What tasks should be added, changed, or removed from the schedules? Are there teams that should be specifically called out in the release schedule? How can our release schedules better serve the community? I’m open to your feedback via email or as an issue on the schedule Pagure repo.

The post Updating release schedule tasks appeared first on Fedora Community Blog.

Home Automation I

Posted by Zamir SUN on January 14, 2019 02:29 PM

I’ve been thinking about automating my home for quite some time. Basically, my requirements are:

  • I can control the power of some home appliances no matter which platform I am using, be it Linux, Android, or even iOS, and they can be controlled with customized rules.
  • I can power on my workstation without using WOL.

For the first requirement, I purchased some so-called smart power strips early on. Unfortunately, they have a lot of limitations and are not really ‘smart’: most of them only work with a timer. So I’ve been looking for alternatives.

As for the second requirement, I had been thinking about adding a relay to control the power button. However, after some discussion with Shankerwangmiao and z4yx, they told me they had already made a product-level prototype, and Shankerwangmiao kindly offered me the PCIe adapter they made for free. They call it IPMI_TU.

During the new year holiday, I decided to spend more time on the first requirement, and I came across Sonoff. Their products use an app called EWeLink, which seems to have more features than the devices I already have. After some research, I learned that Sonoff products are equipped with a SoC called the ESP8266, which has recently become very popular in the so-called ‘IoT’ area. I even found an open source firmware for a series of Sonoff products, called Sonoff Tasmota, which appealed to me. The Sonoff Tasmota firmware supports control over MQTT, which is a plus, as I can define customized rules anywhere and just make the MQTT call when the rules are met. The Sonoff Tasmota firmware also works on many other ESP8266-based smart plugs, so I checked their supported list and finally settled on one of the smaller-sized variants, plus a Sonoff Basic smart switch to control my light.
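As a concrete sketch of such a rule, here is how a shell script could toggle a Tasmota-flashed plug over MQTT using the mosquitto client tools. The broker address and device topic below are made-up examples; Tasmota’s cmnd/&lt;topic&gt;/POWER command topic is its standard convention:

```shell
# Hypothetical broker address and device topic.
BROKER=192.168.1.10
DEVICE=sonoff-basic-1

# Tasmota listens on cmnd/<device-topic>/POWER and accepts ON/OFF/TOGGLE.
TOPIC="cmnd/$DEVICE/POWER"

# Publish only if the mosquitto client tools are installed.
if command -v mosquitto_pub >/dev/null 2>&1; then
    mosquitto_pub -h "$BROKER" -t "$TOPIC" -m TOGGLE || echo "publish failed"
fi
echo "$TOPIC"
```

Any cron job or script that can reach the broker can fire a call like this whenever its rule is met.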

Now it was time to flash them. I thought it would be easy, but in fact it took me quite some time.

In order to learn about the Arduino IDE and ESP8266 flashing, I purchased a NodeMCU board in advance. The Arduino IDE is available in Fedora, so I only needed to run dnf install -y arduino. But that was just the beginning. To make a long story short, I needed to install a bunch of Arduino libraries, which was not a problem at first, but it meant I had to choose older versions of some libraries to work around bugs in newer ones.

So here are some notes from flashing the Huafan smart plug I purchased.

  • Crystal Frequency needs to be changed to 40MHz
  • Choose v1.4 Prebuild for lwIP Variant to work around some bug
  • Always connect GPIO0 to ground before powering the ESP8266 on for flashing. It can be disconnected after powering up, but flashing won’t work if you power the ESP8266 on first and then connect GPIO0.

The first note is pretty straightforward, but the other two really took me a long time to debug before I figured out the expected procedure.

Thanks to imi415 for the suggestions on narrowing down the WiFi problem.

And for flashing the Sonoff Basic switch:

  • Remember to change Crystal Frequency back to 26MHz
  • The lwIP Variant still needs to be v1.4 Prebuild
  • Disable any unessential features you don’t need by editing my_user_config.h

That’s pretty much it for the two devices.

Then it comes to the IPMI_TU. IPMI_TU is not ESP8266 based. Instead, it uses an STM32F103, which is an ARM Cortex-M3 MCU, along with a WIZnet W5500 Ethernet controller. To flash an STM32, a tool called stlink is needed, which is also available in Fedora as stlink or stlink-gui.

Since IPMI_TU was originally designed for other use cases, it uses Protocol Buffers as the data serialization protocol in its firmware. This is overkill for my use case, so I replaced the control function with a plain-text one.

When it came to flashing, I did something wrong at the beginning, and after that I could not flash it again using the open source variant of stlink. I figured out a way to re-flash its bootloader and used the official STM32 flashing tool instead. Luckily, after that the STM32 was back to normal.

One more thing to note: the firmware of IPMI_TU uses the DHCP log server option as a hackish way to determine the MQTT server, so changes to the DHCP configuration are needed.
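For example, if dnsmasq is the DHCP server, the log server option (DHCP option 7) could be pointed at the MQTT host with a line like the following; the address is a placeholder, not from the original post:

```
# dnsmasq.conf fragment: advertise 192.168.1.10 (placeholder) as the
# log server (DHCP option 7), which the IPMI_TU firmware reinterprets
# as the address of the MQTT server.
dhcp-option=7,192.168.1.10
```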

Now the firmware part is done. I’ll write my experience about the server side later on.

PHP with the NGINX unit application server

Posted by Remi Collet on January 14, 2019 02:21 PM

Official web site: NGINX Unit

The official repository, for RHEL and CentOS, provides the PHP module only for the PHP version found in the distribution’s official repository (5.3 / 5.4).

My repository provides various versions of the module, as base packages (unit-php) and as Software Collections (php##-unit-php).

Here is a small test tutorial to create one application per available PHP version.


1. Official repository installation

Create the repository configuration (/etc/yum.repos.d/unit.repo); the baseurl below is the one given in the official NGINX Unit installation documentation:

[unit]
name=unit repo
baseurl=https://packages.nginx.org/unit/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

For now, the packages are only available for CentOS / RHEL 6 and 7.

See also: CentOS Packages in official documentation.

2. Remi repository installation

# yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum install http://rpms.remirepo.net/enterprise/remi-release-7.rpm

3. Server and modules installation

Install the NGINX Unit server and various PHP modules. The unit-php package provides the module for the system default PHP.

# yum install unit unit-php php56-unit-php php71-unit-php php72-unit-php php73-unit-php

4. Test configuration

4.1 Preparation

This configuration creates a listener for each PHP version, listening on a different port (8300, 8356, ...), and an application serving the usual web application directory.

Download the unit.config file:

{
	"applications": {
		"exphp": {
			"type": "php",
			"user": "nobody",
			"processes": 2,
			"root": "/var/www/html",
			"index": "index.php"
		},
		"exphp56": {
			"type": "php 5.6",
			"user": "nobody",
			"processes": 2,
			"root": "/var/www/html",
			"index": "index.php"
		},
		"exphp71": {
			"type": "php 7.1",
			"user": "nobody",
			"processes": 2,
			"root": "/var/www/html",
			"index": "index.php"
		},
		"exphp72": {
			"type": "php 7.2",
			"user": "nobody",
			"processes": 2,
			"root": "/var/www/html",
			"index": "index.php"
		},
		"exphp73": {
			"type": "php 7.3",
			"user": "nobody",
			"processes": 2,
			"root": "/var/www/html",
			"index": "index.php"
		}
	},

	"listeners": {
		"*:8300": {
			"application": "exphp"
		},
		"*:8356": {
			"application": "exphp56"
		},
		"*:8371": {
			"application": "exphp71"
		},
		"*:8372": {
			"application": "exphp72"
		},
		"*:8373": {
			"application": "exphp73"
		}
	}
}

4.2 Run the service:

# systemctl start unit

4.3 Configuration

Configuration is managed through a REST API:

# curl -X PUT --data-binary @unit.config --unix-socket /var/run/control.unit.sock :/config
{
    "success": "Reconfiguration done."
}

And to check running configuration:

# curl --unix-socket /var/run/control.unit.sock :/
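The API also addresses individual branches of the configuration tree, so a single value can be read or changed without resending the whole file. A small sketch, guarded so it only runs when the control socket actually exists (the socket path and application name are the ones used above):

```shell
SOCK=/var/run/control.unit.sock

if [ -S "$SOCK" ]; then
    # Read just the listeners branch of the configuration.
    curl --unix-socket "$SOCK" :/config/listeners

    # Bump the worker count of one application in place.
    echo '4' | curl -X PUT --data-binary @- --unix-socket "$SOCK" :/config/applications/exphp/processes
fi
echo "$SOCK"
```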

5. Usage

You can access the application on each new port:

  • http://localhost:8300/ for default PHP
  • http://localhost:8356/ for PHP version 5.6
  • http://localhost:8372/ for PHP version 7.2
  • etc

The phpinfo page will display language information; notice that, in this case, the Server API is unit.
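To confirm that each port really serves a different PHP version, a quick loop over the ports configured above can grep the phpinfo output. This is only a convenience sketch (it assumes a phpinfo() page is served at /), guarded so it only runs when curl is present:

```shell
# Ports from the test configuration above.
PORTS="8300 8356 8371 8372 8373"

if command -v curl >/dev/null 2>&1; then
    for PORT in $PORTS; do
        # Print the version string each listener reports, if the service is up.
        curl -s --max-time 2 "http://localhost:$PORT/" | grep -o 'PHP Version [0-9.]*' | head -n 1
    done
fi
echo "$PORTS"
```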

6. Conclusion

As this is an application server, we'll probably put it behind a web frontend (Apache HTTP Server or NGINX).

This project seems interesting, but it is quite young (the first version available on GitHub, 1.2, was released in June 2018); we'll see what the user feedback will be.

Current version is 1.7.

How to Build a Netboot Server, Part 4

Posted by Fedora Magazine on January 14, 2019 08:00 AM

One significant limitation of the netboot server built in this series is the operating system image being served is read-only. Some use cases may require the end user to modify the image. For example, an instructor may want to have the students install and configure software packages like MariaDB and Node.js as part of their course walk-through.

An added benefit of writable netboot images is the end user’s “personalized” operating system can follow them to different workstations they may use at later times.

Change the Bootmenu Application to use HTTPS

Create a self-signed certificate for the bootmenu application:

$ sudo -i
# MY_NAME=$(</etc/hostname)
# MY_TLSD=/opt/bootmenu/tls
# mkdir $MY_TLSD
# openssl req -newkey rsa:2048 -nodes -keyout $MY_TLSD/$MY_NAME.key -x509 -days 3650 -out $MY_TLSD/$MY_NAME.pem

Verify your certificate’s values. Make sure the “CN” value in the “Subject” line matches the DNS name that your iPXE clients use to connect to your bootmenu server:

# openssl x509 -text -noout -in $MY_TLSD/$MY_NAME.pem

Next, update the bootmenu application’s listen directive to use the HTTPS port and the newly created certificate and key:

# sed -i "s#listen => .*#listen => ['https://$MY_NAME:443?cert=$MY_TLSD/$MY_NAME.pem\&key=$MY_TLSD/$MY_NAME.key\&ciphers=AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA'],#" /opt/bootmenu/bootmenu.conf

Note the ciphers have been restricted to those currently supported by iPXE.

GnuTLS requires the “CAP_DAC_READ_SEARCH” capability, so add it to the bootmenu application’s systemd service:

# sed -i '/^AmbientCapabilities=/ s/$/ CAP_DAC_READ_SEARCH/' /etc/systemd/system/bootmenu.service
# sed -i 's/Serves iPXE Menus over HTTP/Serves iPXE Menus over HTTPS/' /etc/systemd/system/bootmenu.service
# systemctl daemon-reload

Now, add an exception for the bootmenu service to the firewall and restart the service:

# firewall-cmd --add-rich-rule="rule family='ipv4' source address='$MY_SUBNET/$MY_PREFIX' service name='https' accept"
# firewall-cmd --runtime-to-permanent
# systemctl restart bootmenu.service

Use wget to verify it’s working:

$ MY_NAME=server-01.example.edu
$ MY_TLSD=/opt/bootmenu/tls
$ wget -q --ca-certificate=$MY_TLSD/$MY_NAME.pem -O - https://$MY_NAME/menu


Update init.ipxe to use HTTPS. Then recompile the iPXE bootloader with options to embed and trust the self-signed certificate you created for the bootmenu application:

$ echo '#define DOWNLOAD_PROTO_HTTPS' >> $HOME/ipxe/src/config/local/general.h
$ sed -i 's/^chain http:/chain https:/' $HOME/ipxe/init.ipxe
$ cp $MY_TLSD/$MY_NAME.pem $HOME/ipxe
$ cd $HOME/ipxe/src
$ make clean
$ make bin-x86_64-efi/ipxe.efi EMBED=../init.ipxe CERT="../$MY_NAME.pem" TRUST="../$MY_NAME.pem"

You can now copy the HTTPS-enabled iPXE bootloader out to your clients and test that everything is working correctly:

$ cp $HOME/ipxe/src/bin-x86_64-efi/ipxe.efi $HOME/esp/efi/boot/bootx64.efi

Add User Authentication to Mojolicious

Create a PAM service definition for the bootmenu application:

# dnf install -y pam_krb5
# echo 'auth required pam_krb5.so' > /etc/pam.d/bootmenu
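Before wiring the new PAM service into the application, it can be exercised from the command line. One way to do that is pamtester, which is packaged in Fedora; the user name below is a placeholder. The guard keeps this from failing on systems without the tool:

```shell
SERVICE=bootmenu

# Run only if pamtester is installed; it prompts for the password and
# reports whether authentication against the 'bootmenu' service succeeds.
if command -v pamtester >/dev/null 2>&1; then
    pamtester "$SERVICE" someuser authenticate || echo "authentication failed"
fi
echo "$SERVICE"
```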

Add a library to the bootmenu application that uses the Authen-PAM perl module to perform user authentication:

# dnf install -y perl-Authen-PAM;
# MY_MOJO=/opt/bootmenu
# mkdir $MY_MOJO/lib
# cat << 'END' > $MY_MOJO/lib/PAM.pm
package PAM;

use Authen::PAM;

sub auth {
   my $success = 0;

   my $username = shift;
   my $password = shift;

   my $callback = sub {
      my @res;

      while (@_) {
         my $code = shift;
         my $msg = shift;
         my $ans = "";

         $ans = $username if ($code == PAM_PROMPT_ECHO_ON());
         $ans = $password if ($code == PAM_PROMPT_ECHO_OFF());

         push @res, (PAM_SUCCESS(), $ans);
      }

      push @res, PAM_SUCCESS();

      return @res;
   };

   my $pamh = new Authen::PAM('bootmenu', $username, $callback);

   {
      last unless ref $pamh;
      last unless $pamh->pam_authenticate() == PAM_SUCCESS;
      $success = 1;
   }

   return $success;
}

return 1;
END

The above code is taken almost verbatim from the Authen::PAM::FAQ man page.

Redefine the bootmenu application so it returns a netboot template only if a valid username and password are supplied:

# cat << 'END' > $MY_MOJO/bootmenu.pl
#!/usr/bin/env perl

use lib 'lib';

use PAM;
use Mojolicious::Lite;
use Mojolicious::Plugins;
use Mojo::Util ('url_unescape');

plugin 'Config';

get '/menu';
get '/boot' => sub {
   my $c = shift;

   my $instance = $c->param('instance');
   my $username = $c->param('username');
   my $password = $c->param('password');

   my $template = 'menu';

   {
      last unless $instance =~ /^fc[[:digit:]]{2}$/;
      last unless $username =~ /^[[:alnum:]]+$/;
      last unless PAM::auth($username, url_unescape($password));
      $template = $instance;
   }

   return $c->render(template => $template);
};

app->start;
END

The bootmenu application now looks for the lib directory relative to its WorkingDirectory. However, by default the working directory is set to the root directory of the server for systemd units. Therefore, you must update the systemd unit to set WorkingDirectory to the root of the bootmenu application instead:

# sed -i "/^RuntimeDirectory=/ a WorkingDirectory=$MY_MOJO" /etc/systemd/system/bootmenu.service
# systemctl daemon-reload

Update the templates to work with the redefined bootmenu application:

# cd $MY_MOJO/templates
# MY_BOOTMENU_SERVER=$(</etc/hostname)
# for i in $MY_FEDORA_RELEASES; do
    echo '#!ipxe' > fc$i.html.ep
    grep "^kernel\|initrd" menu.html.ep | grep "fc$i" >> fc$i.html.ep
    echo "boot || chain https://$MY_BOOTMENU_SERVER/menu" >> fc$i.html.ep
    sed -i "/^:f$i$/,/^boot /c :f$i\nlogin\nchain https://$MY_BOOTMENU_SERVER/boot?instance=fc$i\&username=\${username}\&password=\${password:uristring} || goto failed" menu.html.ep
  done

The result of the last command above should be three files similar to the following:

menu.html.ep:

#!ipxe

set timeout 5000

:menu
menu iPXE Boot Menu
item --key 1 lcl 1. Microsoft Windows 10
item --key 2 f29 2. RedHat Fedora 29
item --key 3 f28 3. RedHat Fedora 28
choose --timeout ${timeout} --default lcl selected || goto shell
set timeout 0
goto ${selected}

:failed
echo boot failed, dropping to shell...
goto shell

:shell
echo type 'exit' to get back to the menu
shell
set timeout 0
goto menu

:lcl
exit

:f29
login
chain https://server-01.example.edu/boot?instance=fc29&username=${username}&password=${password:uristring} || goto failed

:f28
login
chain https://server-01.example.edu/boot?instance=fc28&username=${username}&password=${password:uristring} || goto failed

fc29.html.ep:

#!ipxe
kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver= nameserver= root=/dev/disk/by-path/ip- netroot=iscsi: console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
boot || chain https://server-01.example.edu/menu

fc28.html.ep:

#!ipxe
kernel --name kernel.efi ${prefix}/vmlinuz-4.19.3-200.fc28.x86_64 initrd=initrd.img ro ip=dhcp rd.peerdns=0 nameserver= nameserver= root=/dev/disk/by-path/ip- netroot=iscsi: console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.3-200.fc28.x86_64.img
boot || chain https://server-01.example.edu/menu
Now, restart the bootmenu application and verify authentication is working:

# systemctl restart bootmenu.service

Make the iSCSI Target Writeable

Now that user authentication works through iPXE, you can create per-user, writeable overlays on top of the read-only image on demand when users connect. Using a copy-on-write overlay has three advantages over simply copying the original image file for each user:

  1. The copy can be created very quickly. This allows creation on-demand.
  2. The copy does not increase the disk usage on the server. Only what the user writes to their personal copy of the image is stored in addition to the original image.
  3. Since most sectors for each copy are the same sectors on the server’s storage, they’ll likely already be loaded in RAM when subsequent users access their copies of the operating system. This improves the server’s performance because RAM is faster than disk I/O.
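The “cheap copy” claim is easy to verify on any Linux box: a sparse file created with dd and seek, the same way the overlay backing store is created later, has a large apparent size but allocates almost nothing until written to:

```shell
# Create a 1 GiB sparse file (2097152 sectors * 512 bytes), as the
# per-user backing store is created below, and compare its apparent
# size with the blocks actually allocated on disk.
IMG=$(mktemp)
dd if=/dev/zero of="$IMG" status=none bs=512 count=0 seek=2097152

APPARENT=$(stat -c %s "$IMG")   # bytes the file claims to hold
BLOCKS=$(stat -c %b "$IMG")     # 512-byte blocks actually allocated

echo "apparent=$APPARENT allocated_blocks=$BLOCKS"
rm -f "$IMG"
```

The apparent size is the full gigabyte, while the allocated block count stays near zero, which is exactly why a fresh overlay can be created on demand without consuming server disk space.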

One potential pitfall of using copy-on-write is that once overlays are created, the images on which they are overlayed must not be changed. If they are changed, all the overlays will be corrupted. Then the overlays must be deleted and replaced with new, blank overlays. Even simply mounting the image file in read-write mode can cause sufficient filesystem updates to corrupt the overlays.

Due to the potential for the overlays to be corrupted if the original image is modified, mark the original image as immutable by running:

# chattr +i </path/to/file>

You can use lsattr </path/to/file> to view the status of the immutable flag and chattr -i </path/to/file> to unset it. While the immutable flag is set, even the root user or a system process running as root cannot modify or delete the file.

Begin by stopping the tgtd.service so you can change the image files:

# systemctl stop tgtd.service

It’s normal for this command to take a minute or so to stop when there are connections still open.

Now, remove the read-only iSCSI export. Then update the readonly-root configuration file in the template so the image is no longer read-only:

# MY_FC=fc29
# rm -f /etc/tgt/conf.d/$MY_FC.conf
# TEMP_MNT=$(mktemp -d)
# mount /$MY_FC.img $TEMP_MNT
# sed -i 's/^READONLY=yes$/READONLY=no/' $TEMP_MNT/etc/sysconfig/readonly-root
# sed -i 's/^Storage=volatile$/#Storage=auto/' $TEMP_MNT/etc/systemd/journald.conf
# umount $TEMP_MNT

Journald was changed from logging to volatile memory back to its default (log to disk if /var/log/journal exists) because a user reported his clients would freeze with an out-of-memory error due to an application generating excessive system logs. The downside to setting logging to disk is that extra write traffic is generated by the clients, and might burden your netboot server with unnecessary I/O. You should decide which option — log to memory or log to disk — is preferable depending on your environment.

Since you won’t make any further changes to the template image, set the immutable flag on it and restart the tgtd.service:

# chattr +i /$MY_FC.img
# systemctl start tgtd.service

Now, update the bootmenu application:

# cat << 'END' > $MY_MOJO/bootmenu.pl
#!/usr/bin/env perl

use lib 'lib';

use PAM;
use Mojolicious::Lite;
use Mojolicious::Plugins;
use Mojo::Util ('url_unescape');

plugin 'Config';

get '/menu';
get '/boot' => sub {
   my $c = shift;

   my $instance = $c->param('instance');
   my $username = $c->param('username');
   my $password = $c->param('password');

   my $chapscrt;
   my $template = 'menu';

   {
      last unless $instance =~ /^fc[[:digit:]]{2}$/;
      last unless $username =~ /^[[:alnum:]]+$/;
      last unless PAM::auth($username, url_unescape($password));
      last unless $chapscrt = `sudo scripts/mktgt $instance $username`;
      $template = $instance;
   }

   return $c->render(template => $template, username => $username, chapscrt => $chapscrt);
};

app->start;
END

This new version of the bootmenu application calls a custom mktgt script which, on success, returns a random CHAP password for each new iSCSI target that it creates. The CHAP password prevents one user from mounting another user’s iSCSI target by indirect means. The app only returns the correct iSCSI target password to a user who has successfully authenticated.

The mktgt script is prefixed with sudo because it needs root privileges to create the target.

The $username and $chapscrt variables also pass to the render command so they can be incorporated into the templates returned to the user when necessary.
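The random CHAP password that mktgt generates (eight uppercase letters, via chr(int(rand(26))+65) in Perl) has a one-line shell equivalent; this is only a sketch of the same idea, not part of the article’s code:

```shell
# Eight random uppercase letters, like the Perl join/map/chr/rand one-liner.
PASS=$(tr -dc 'A-Z' < /dev/urandom | head -c 8)
echo "$PASS"
```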

Next, update our boot templates so they can read the username and chapscrt variables and pass them along to the end user. Also update the templates to mount the root filesystem in rw (read-write) mode:

# cd $MY_MOJO/templates
# sed -i "s/:$MY_FC/:$MY_FC-<%= \$username %>/g" $MY_FC.html.ep
# sed -i "s/ netroot=iscsi:/ netroot=iscsi:<%= \$username %>:<%= \$chapscrt %>@/" $MY_FC.html.ep
# sed -i "s/ ro / rw /" $MY_FC.html.ep

After running the above commands, you should have boot templates like the following:

kernel --name kernel.efi ${prefix}/vmlinuz-4.19.5-300.fc29.x86_64 initrd=initrd.img rw ip=dhcp rd.peerdns=0 nameserver= nameserver= root=/dev/disk/by-path/ip-<%= $username %>-lun-1 netroot=iscsi:<%= $username %>:<%= $chapscrt %>@<%= $username %> console=tty0 console=ttyS0,115200n8 audit=0 selinux=0 quiet
initrd --name initrd.img ${prefix}/initramfs-4.19.5-300.fc29.x86_64.img
boot || chain https://server-01.example.edu/menu

NOTE: If you need to view the boot template after the variables have been interpolated, you can insert the “shell” command on its own line just before the “boot” command. Then, when you netboot your client, iPXE gives you an interactive shell where you can enter “imgstat” to view the parameters being passed to the kernel. If everything looks correct, you can type “exit” to leave the shell and continue the boot process.

Now allow the bootmenu user to run the mktgt script (and only that script) as root via sudo:

# echo "bootmenu ALL = NOPASSWD: $MY_MOJO/scripts/mktgt *" > /etc/sudoers.d/bootmenu

The bootmenu user should not have write access to the mktgt script or any other files under its home directory. All the files under /opt/bootmenu should be owned by root, and should not be writable by any user other than root.

Sudo does not work well with systemd’s DynamicUser option, so create a normal user account and set the systemd service to run as that user:

# useradd -r -c 'iPXE Boot Menu Service' -d /opt/bootmenu -s /sbin/nologin bootmenu
# sed -i 's/^DynamicUser=true$/User=bootmenu/' /etc/systemd/system/bootmenu.service
# systemctl daemon-reload

Finally, create a directory for the copy-on-write overlays and create the mktgt script that manages the iSCSI targets and their overlayed backing stores:

# mkdir /$MY_FC.cow
# mkdir $MY_MOJO/scripts
# cat << 'END' > $MY_MOJO/scripts/mktgt
#!/usr/bin/env perl

# if another instance of this script is running, wait for it to finish
"$ENV{FLOCKER}" eq 'MKTGT' or exec "env FLOCKER=MKTGT flock /tmp $0 @ARGV";

# use "RETURN" to print to STDOUT; everything else goes to STDERR by default
open(RETURN, '>&', STDOUT);
open(STDOUT, '>&', STDERR);

my $instance = shift or die "instance not provided";
my $username = shift or die "username not provided";

my $img = "/$instance.img";
my $dir = "/$instance.cow";
my $top = "$dir/$username";

-f "$img" or die "'$img' is not a file";
-d "$dir" or die "'$dir' is not a directory";

my $base;
die unless $base = `losetup --show --read-only --nooverlap --find $img`;
chomp $base;

my $size;
die unless $size = `blockdev --getsz $base`;
chomp $size;

# create the per-user sparse file if it does not exist
if (! -e "$top") {
   die unless system("dd if=/dev/zero of=$top status=none bs=512 count=0 seek=$size") == 0;
}

# create the copy-on-write overlay if it does not exist
my $cow="$instance-$username";
my $dev="/dev/mapper/$cow";
if (! -e "$dev") {
   my $over;
   die unless $over = `losetup --show --nooverlap --find $top`;
   chomp $over;
   die unless system("echo 0 $size snapshot $base $over p 8 | dmsetup create $cow") == 0;
}

my $tgtadm = '/usr/sbin/tgtadm --lld iscsi';

# get textual representations of the iscsi targets
my $text = `$tgtadm --op show --mode target`;
my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;

# convert the textual representations into a hash table
my $targets = {};
foreach (@targets) {
   my $tgt;
   my $sid;

   foreach (split /\n/) {
      /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
      /I_T nexus: (\d+)(?{ $sid = $^N })/;
      /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
   }
}

my $hostname;
die unless $hostname = `hostname`;
chomp $hostname;

my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";

# find the target id corresponding to the provided target name and
# close any existing connections to it
my $tid = 0;
foreach (@targets) {
   next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
   foreach (@{$targets->{$tid}}) {
      die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
   }
}

# create a new target if an existing one was not found
if ($tid == 0) {
   # find an available target id
   my @ids = (0, sort keys %{$targets});
   $tid = 1; while ($ids[$tid]==$tid) { $tid++ }

   # create the target
   die unless -e "$dev";
   die unless system("$tgtadm --op new --mode target --tid $tid --targetname $target") == 0;
   die unless system("$tgtadm --op new --mode logicalunit --tid $tid --lun 1 --backing-store $dev") == 0;
   die unless system("$tgtadm --op bind --mode target --tid $tid --initiator-address ALL") == 0;
}

# (re)set the provided target's chap password
my $password = join('', map(chr(int(rand(26))+65), 1..8));
my $accounts = `$tgtadm --op show --mode account`;
if ($accounts =~ / $username$/m) {
   die unless system("$tgtadm --op delete --mode account --user $username") == 0;
}
die unless system("$tgtadm --op new --mode account --user $username --password $password") == 0;
die unless system("$tgtadm --op bind --mode account --tid $tid --user $username") == 0;

# return the new password to the iscsi target on stdout
print RETURN $password;
END
# chmod +x $MY_MOJO/scripts/mktgt

The above script does five things:

  1. It creates the /<instance>.cow/<username> sparse file if it does not already exist.
  2. It creates the /dev/mapper/<instance>-<username> device node that serves as the copy-on-write backing store for the iSCSI target if it does not already exist.
  3. It creates the iqn.<reverse-hostname>:<instance>-<username> iSCSI target if it does not exist. Or, if the target does exist, it closes any existing connections to it because the image can only be opened in read-write mode from one place at a time.
  4. It (re)sets the chap password on the iqn.<reverse-hostname>:<instance>-<username> iSCSI target to a new random value.
  5. It prints the new chap password on standard output if all of the previous tasks completed successfully.
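As a quick illustration of the naming scheme in item 3 above, the IQN can be reproduced in plain shell by reversing the dot-separated hostname. This is a sketch only; the hostname and target suffix below are illustrative values, not taken from the article.

```shell
# Reproduce the IQN that mktgt builds: "iqn." plus the reversed
# dot-separated hostname, a colon, and the <instance>-<username> suffix.
# "netboot.example.edu" and "fc29-jsmith" are illustrative values.
host="netboot.example.edu"
cow="fc29-jsmith"

reversed=$(echo "$host" | tr '.' '\n' | tac | paste -sd '.')
iqn="iqn.${reversed}:${cow}"

echo "$iqn"   # prints: iqn.edu.example.netboot:fc29-jsmith
```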

You should be able to test the mktgt script from the command line by running it with valid test parameters. For example:

# echo `$MY_MOJO/scripts/mktgt fc29 jsmith`

When run from the command line, the mktgt script should print out either the eight-character random password for the iSCSI target if it succeeded or the line number on which something went wrong if it failed.

On occasion, you may want to delete an iSCSI target without having to stop the entire service. For example, a user might inadvertently corrupt their personal image, in which case you would need to systematically undo everything that the above mktgt script does so that the next time they log in they will get a copy of the original image.

Below is an rmtgt script that undoes, in reverse order, what the above mktgt script did:

# mkdir $HOME/bin
# cat << 'END' > $HOME/bin/rmtgt
#!/usr/bin/env perl

@ARGV >= 2 or die "usage: $0 <instance> <username> [+d|+f]\n";

my $instance = shift;
my $username = shift;

my $rmd = ($ARGV[0] eq '+d'); #remove device node if +d flag is set
my $rmf = ($ARGV[0] eq '+f'); #remove sparse file if +f flag is set
my $cow = "$instance-$username";

my $hostname;
die unless $hostname = `hostname`;
chomp $hostname;

my $tgtadm = '/usr/sbin/tgtadm';
my $target = 'iqn.' . join('.', reverse split('\.', $hostname)) . ":$cow";

my $text = `$tgtadm --op show --mode target`;
my @targets = $text =~ /(?:^T.*\n)(?:^ .*\n)*/mg;

my $targets = {};
foreach (@targets) {
   my $tgt;
   my $sid;

   foreach (split /\n/) {
      /^Target (\d+)(?{ $tgt = $targets->{$^N} = [] })/;
      /I_T nexus: (\d+)(?{ $sid = $^N })/;
      /Connection: (\d+)(?{ push @{$tgt}, [ $sid, $^N ] })/;
   }
}

my $tid = 0;
foreach (@targets) {
   next unless /^Target (\d+)(?{ $tid = $^N }): $target$/m;
   foreach (@{$targets->{$tid}}) {
      die unless system("$tgtadm --op delete --mode conn --tid $tid --sid $_->[0] --cid $_->[1]") == 0;
   }
   die unless system("$tgtadm --op delete --mode target --tid $tid") == 0;
   print "target $tid deleted\n";
   sleep 1;
}

my $dev = "/dev/mapper/$cow";
if ($rmd or ($rmf and -e $dev)) {
   die unless system("dmsetup remove $cow") == 0;
   print "device node $dev deleted\n";
}

if ($rmf) {
   my $sf = "/$instance.cow/$username";
   die "sparse file $sf not found" unless -e "$sf";
   die unless system("rm -f $sf") == 0;
   die unless not -e "$sf";
   print "sparse file $sf deleted\n";
}
END
# chmod +x $HOME/bin/rmtgt

For example, to use the above script to completely remove the fc29-jsmith target including its backing store device node and its sparse file, run the following:

# rmtgt fc29 jsmith +f

Once you’ve verified that the mktgt script is working properly, you can restart the bootmenu service. The next time someone netboots, they should receive a personal copy of the netboot image that they can write to:

# systemctl restart bootmenu.service

Users should now be able to modify the root filesystem.

Episode 129 - The EU bug bounty program

Posted by Open Source Security Podcast on January 14, 2019 12:58 AM
Josh and Kurt talk about the EU bug bounty program. There have been a fair number of people complaining it's solving the wrong problem, but it's the only way the EU has to spend money on open source today. If that doesn't change this program will fail.


Show Notes

    All systems go

    Posted by Fedora Infrastructure Status on January 13, 2019 10:04 PM
    Service 'The Koji Buildsystem' now has status: good: Everything seems to be working.

    NeuroFedora updated: 2019 week 2

    Posted by Ankur Sinha "FranciscoD" on January 13, 2019 08:55 PM

    We had our first meeting of the year. The full logs from our meeting are available here on the Fedora mote application. I have pasted the minutes of the meeting at the end for your convenience.

    The meeting was broadly for the team to come together and discuss a few things. We checked on the status of current tasks, and discussed our future steps. We've got to work on our documentation, for example. There's a lot to do, and a lot of cool new things to learn---in science, computing, and community development. If you'd like to get involved, please get in touch.

    We're continuing our work on including software in NeuroFedora, since that's the major chunk of our work load.

    Meeting summary

    Meeting started by FranciscoD at 14:00:15 UTC.

    Meeting ended at 15:10:43 UTC.

    NeuroFedora documentation is available on the Fedora documentation website. Feedback is always welcome. You can get in touch with us here.

    A call to join Borsalinux-fr

    Posted by Charles-Antoine Couret on January 13, 2019 03:44 PM



    Borsalinux-fr is the association that handles the promotion of Fedora in the French-speaking world. For several years now we have seen a gradual decline in the number of paid-up members and of volunteers willing to take on the association's activities.

    We are therefore issuing a call to join us and help.

    The association owns the official website of the French-speaking Fedora community, regularly organizes promotional events such as the Rencontres Fedora, and takes part in all of the major free-software events, mainly across France.

    Why are we issuing this call?

    Since 2012 or 2013 we have seen a gradual decline in the number of members, and in particular of active members within the association, and even within the French-speaking community as a whole. We have now reached a critical point where the activity is carried almost entirely by a handful of people. And some of those who are active today would like to slow down in order to get involved in other projects, within Fedora or elsewhere.

    It is thus becoming difficult to keep our activities running under good conditions, which hurts our visibility on the one hand and the attractiveness of the project to French speakers on the other.

    Possible activities

    Overall, the most urgent needs are within the association itself, where the board of directors needs new members. Translation is another area that is grinding to a halt. We would also like to broaden our local presence. At the moment, events along the Brussels - Paris - Lyon - Nice axis are fairly well covered. Outside of it, we find it increasingly difficult to send someone on site under good conditions, for example to Capitole du Libre in Toulouse or to the RMLL, depending on where it is held.

    If you like Fedora and want our work to continue, you can:

    • Join the association: membership fees help us produce goodies, travel to events, and pay for equipment;
    • Apply for a seat on the board of directors, in particular the positions of president, secretary, and treasurer;
    • Help with translation, on the forum, on the mailing lists, with overhauling the documentation, or represent the association at various French-speaking events;
    • Design goodies;
    • Organize events such as the Rencontres Fedora in your city.

    We would be delighted to welcome you and to help you get started. Every contribution, however small, is appreciated.

    If you would like to get a feel for our activity, you can join our weekly meetings every Monday evening at 8:30 PM (Paris time) on IRC (channel #fedora-meeting-1 on Freenode).

    Would you like to help us?

    Do not hesitate to contact us to share your ideas and what you would like to do.

    In addition, on Saturday, February 9, 2019 at 2 PM in Paris (at the offices of the Fondation des Droits de l'Homme), the Ordinary General Assembly will renew the association's board of directors and executive committee. It is the perfect opportunity to introduce yourself and to get involved in how the association works! It really is the ideal moment to keep up with what is going on and to present your ideas. If you cannot attend in person, feel free to contact us beforehand to share your ideas and your involvement in the French-speaking community.

    The AppImage tool and Krita Next.

    Posted by mythcat on January 13, 2019 02:13 AM
    The AppImage is a universal software package format.
    An AppImage is a single file provided by the developer: a compressed image containing the application together with all the dependencies and libraries it needs to run. AppImage doesn’t really install the software; it simply executes it, with no extraction and no installation.
    The most common features:
    • Can run on various different Linux distributions;
    • No need to install or compile software;
    • No need for root permission, and the system files are not touched;
    • Can be run anywhere, including live disks;
    • Applications are in read-only mode;
    • Software is removed simply by deleting the AppImage file;
    • Applications packaged in AppImage are not sandboxed by default.
    More about this can be read at the official webpage.
    I tested Krita Next with this tool.
    The AppImage file of Krita Next can be found here.
    Krita Next is a daily build that contains new features, but it could be unstable.
    After downloading the file, I made it executable and ran it:
    [mythcat@desk Downloads]$ chmod +x krita-4.2.0-pre-alpha-95773b5-x86_64.appimage 
    [mythcat@desk Downloads]$ ./krita-4.2.0-pre-alpha-95773b5-x86_64.appimage

    There are scheduled downtimes in progress

    Posted by Fedora Infrastructure Status on January 11, 2019 09:58 PM
    Service 'The Koji Buildsystem' now has status: scheduled: Scheduled outage of s390x builders is in progress

    FPgM report: 2019-02

    Posted by Fedora Community Blog on January 11, 2019 09:55 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week.

    I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else.


    Upcoming meetings & test days

    Fedora 30 Status

    Fedora 30 Change Proposal deadlines are approaching

    • Change proposals for Self-Contained Changes are due 2019-01-29.

    Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

    Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.



    Submitted to FESCo

    Approved by FESCo


    The post FPgM report: 2019-02 appeared first on Fedora Community Blog.

    PHP version 5.6.40, 7.1.26, 7.2.14 and 7.3.1

    Posted by Remi Collet on January 11, 2019 06:06 AM

    RPM of PHP version 7.3.1 are available in remi-php73 repository for Fedora 27-29 and Enterprise Linux  6 (RHEL, CentOS).

    RPM of PHP version 7.2.14 are available in remi repository for Fedora 28-29 and in remi-php72 repository for Fedora 26-27 and Enterprise Linux  6 (RHEL, CentOS).

    RPM of PHP version 7.1.26 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Enterprise Linux (RHEL, CentOS).

    RPM of PHP version 5.6.40 are available in remi-php56 repository for Enterprise Linux.

    PHP version 7.0 has reached its end of life and is no longer maintained by the PHP project. This release is also the last one for PHP 5.6.

    These versions are also available as Software Collections in the remi-safe repository.

    These versions fix a few security bugs, so updating is strongly recommended.

    Version announcements:

    Installation: use the Configuration Wizard and choose your version and installation mode.

    Replacement of default PHP by version 7.3 installation (simplest):

    yum-config-manager --enable remi-php73
    yum update php\*

    Parallel installation of version 7.3 as Software Collection (x86_64 only):

    yum install php73

    Replacement of default PHP by version 7.2 installation (simplest):

    yum-config-manager --enable remi-php72
    yum update

    Parallel installation of version 7.2 as Software Collection (x86_64 only):

    yum install php72

    And soon in the official updates:

    To be noticed:

    Information, read:

    Base packages (php)

    Software Collections (php56 / php71 / php72 / php73)

    Kernel 4.20 Test Day 2019-01-15

    Posted by Fedora Community Blog on January 10, 2019 10:20 PM
    F30 Kernel 4.20 Test Day 2019-01-15

    Tuesday, 2019-01-15 is the Kernel 4.20 Test Day!

    Why Test Kernel?

    The kernel team is working on Kernel 4.20.  This version was just recently released, and will arrive soon in Fedora.
    This version will also be the shipping kernel for Fedora 29, so it’s time to see whether it’s working well enough and to catch any remaining issues.
    It’s also pretty easy to join in: all you’ll need is an iso (which you can grab from the wiki page).

    We need your help!

    All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

    Share this!

    Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

    The post Kernel 4.20 Test Day 2019-01-15 appeared first on Fedora Community Blog.

    Kernel numbering and Fedora

    Posted by Laura Abbott on January 10, 2019 07:00 PM

    By now it's made the news that the kernel version has jumped to version 5.0. Once again, this numbering jump means nothing except that Linus decided he wanted to change it. We've been through versioning jumps before (2.6 -> 3.x, 3.x -> 4.x), so practically we know how to deal with this by now. It still takes a bit of hacking on the kernel packaging side though.

    Fedora works off of a package git (pkg-git) model. This means that the primary trees are not git trees of the actual source code but git trees of a spec file, patches, and any other scripts. The sources get uploaded in compressed archive format. For a stable Fedora release (F28/F29 as of this writing), the sources are a base tarball (linux-4.19.tar.xz) and a stable patch on top of that (patch-4.19.14.xz). Rawhide is built off of Linus' master branch. Using 4.20 as an example, start with the last base release (linux-4.19.tar.xz), apply an -rc patch on top (patch-4.20-rc6.xz) and then another patch containing the diff from the rc to master on that day (patch-4.20-rc6-git2.xz). We have scripts to take care of grabbing from kernel.org and generating snapshots automatically so kernel maintainers don't usually think too much about this.

    When there's a major version bump, most of our scripts break. This isn't just a matter of doing s/4/5/. Because the major version bump happens randomly, we can't easily script "if minor version == XXX pickup y as base". This means our existing code doesn't know how to pick up linux-4.20.tar.xz as a base and apply patch-5.0-rc1.xz on top of that. Because we've dealt with this before, other people have come up with the easiest solution which is a combination of hardcoding and using the full -rc tarball. This means that our base is linux-5.0-rc1.tar.xz and we generate snapshots on top of that (patch-5.0-rc1-git3.xz).
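The base-plus-patches stacking described above can be demonstrated with plain diff and patch on throwaway files. This is only a toy sketch: the file names and one-line contents stand in for the real kernel.org tarballs and patches, they are not the actual artifacts.

```shell
# Simulate the rawhide source stack: base release -> -rc patch -> snapshot patch.
set -e
work=$(mktemp -d)
cd "$work"

echo "linux 4.19"          > base.txt   # stands in for linux-4.19.tar.xz
echo "linux 4.20-rc6"      > rc.txt     # state after patch-4.20-rc6.xz
echo "linux 4.20-rc6-git2" > head.txt   # state after patch-4.20-rc6-git2.xz

diff -u base.txt rc.txt  > patch-rc.diff  || true   # diff exits 1 when files differ
diff -u rc.txt  head.txt > patch-git.diff || true

# Apply the patches in the same order the Fedora kernel scripts do
cp base.txt linux.txt
patch -s linux.txt patch-rc.diff
patch -s linux.txt patch-git.diff

cat linux.txt   # prints: linux 4.20-rc6-git2
```

The point of the sketch is the ordering: each patch only applies cleanly on top of the state produced by the previous one, which is why a major version bump that changes the base name breaks the automation.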

    The kernel.spec and associated scripts look a bit hacked up at the moment. This is only for the next 6 weeks though, after which we will go back to our usual methods. All credit for this scheme goes to the maintainers before me.

    A new logo for the Fedora distro

    Posted by Luca Ciavatta on January 10, 2019 09:00 AM

    Let’s talk about the logo that the Fedora community is talking about and analyzing. I’m really happy that the Fedora Project, one of my favorite Linux distros, is evaluating a new logo proposal. Fedora was my first distro and I will undoubtedly be tied to it for a lifetime. It has served me faithfully every time on laptops, desktops,[...]

    The post A new logo for the Fedora distro appeared first on CIALU.NET.

    Project mission and goals for 2019

    Posted by Kiwi TCMS on January 10, 2019 08:35 AM

    Hello testers, Kiwi TCMS has taken on a brave new mission! We would like to transform the testing process by making it more organized, transparent & accountable for everyone on your team. Our goal is to improve engineering productivity and participation in testing. The following blog post outlines how we would like to achieve this and what goals we put before ourselves for this year.

    Complete the internal refactoring

    Last year we took on the challenge to bring a legacy code base up to modern coding standard. We did not complete that effort but made very good progress along the way. This is not a small task and that's why our team will continue with it this year.

    CodeClimate report

    • CodeClimate: 0 issues, 0% technical debt, health score A
    • Scrutinizer: only A and B type issues
    • Pylint: 0 issues
    • Remove vendored-in Handlebars, jQuery, jQuery-UI and TableDnD JavaScript libraries in favor of existing npm dependencies
    • Front-end uses the existing JSON-RPC instead of backend views that are only used for AJAX requests. Tip: these are usually accessed via postToURL() and jQ.ajax() on the front-end
    • Inspect and classify all 3rd party issues reported from Coverity and Bandit. Report and fix what we can, ignore the rest that do not affect Kiwi TCMS.

    Redesign the UI templates with the help of Patternfly

    There are 59 templates remaining to be converted to a modern look and feel. Along with them comes more refactoring and even redesign of the existing pages and the workflow around them. Together with refactoring this will make Kiwi TCMS easier to use and also to maintain.

    Modernize reporting

    We are planning to remove the existing reports feature because they are not well designed. We will re-implement existing functionality that our community finds useful, add new types of reports (incl. nicer graphics and UI) and make it possible for the reporting sub-system to be more easily extendable.

    Phase out is planned to begin after 1st March 2019! Until then we are looking for your feedback. Please comment in Issue #657!

    Plugins for 3rd party test automation frameworks

    These will make it easier to collect results from automated test suites into Kiwi TCMS for later analysis. Instead of creating scripts that parse the results and talk to our API you will only have to install an additional package in your test environment and configure the test runner to use it! Automation test results will then appear inside Kiwi TCMS.

    If you would like to use such functionality leave your vote inside GitHub issues! In case you would like to write a test-runner plugin you can find the specification here.

    Redefine bug-tracker integration

    Question: Does Kiwi TCMS integrate with JIRA?

    Answer: Well, it does. How exactly do you want to integrate?

    ... silence ...

    The dialog above happens every time someone asks me about bug-tracker integration, especially with JIRA. The thing is, integration is a specific set of behaviors which may or may not be desired in a particular team. As of now Kiwi TCMS is able to open a URL to your bug-tracker with predefined field values, add comments to bug reports and report a simple summary of bugs inside a TestRun.

    We recognize this may not be enough and together with the community we really need to define what bug tracker integration means! The broader domain of application lifecycle management tools (of which TCMS is a sub-set) have an integrated bug tracking system. We can add something like this and save you the trouble of using JIRA, however many teams have already invested in integrating their infrastructure or just like other tools. For example we love GitHub issues and our team regularly makes public reports about issues that we find internally!

    GitHub flow integration

    Developers have their GitHub PR flow and if they have done the job of having unit tests then they will merge only when things are green! This leaves additional testing efforts kind of to the side and doesn't really help with transparency and visibility. I'm not going to mention having an automatically deployed staging environment for every change because very few teams actually have managed to do this effectively.

    Kiwi TCMS statuses on GitHub PR

    • Goal: Figure out how Kiwi TCMS can integrate with GitHub flow and bridge the gap. Please share and +1 your wildest ideas in Issue #700.
    • Follow up: depending on the results in #700 we will follow with other goals and sub-tasks

    Agile integration with Trello

    Speaking of modern engineering flow, is your team truly agile? When and how do you plan your testing activities? Before the devel sprint or afterwards? How many testers take part in refining the product backlog and working on user stories?

    Similar to GitHub flow lots of teams and open source projects are using Trello to effectively organize their development process. Testing should not be left behind and Kiwi TCMS may be able to help.

    • Goal: Figure out how Kiwi TCMS fits into the overall devel-test-planning process for agile teams and what we can do to make this easier for testers. Please share and +1 your wildest ideas in Issue #701
    • Follow up: depending on the results in #701 we will follow with other goals and sub-tasks

    Improve engineering productivity

    What makes a test engineer productive when they need to assess product risk and new features, when mapping product requirements documents (PRD) to test plans and test cases, when collaborating on user stories and behavior specification? What makes developers, product owners, designers and other professionals productive when it comes to dealing with testing?

    For example consider the following workflow:

    • Company has idea for a new product
    • In case this is a big product it may have its own mission, i.e. what kind of problem is it trying to solve and for which group of customers
    • Product backlog is then created which outlines features that map to the product mission
    • Then the team, together with test engineers perform example mapping and discuss and refine the initial feature requirements. User stories are created
    • Behavior specification may also be created
    • Test plans and test cases are the immediate product of BDD specs and desired user stories

    Later we iterate through the sprints and for each sprint something like this happens:

    • Desired product features are planned for development. They must be complete at least in terms of requirements, specs and tests
    • Devel writes code, maybe some unit tests, testers can also write automated tests and/or manually verify the current state of the feature being developed
    • Testing, including exploratory is performed before feature is merged
    • Rinse and repeat

    Devel is also part of testing, right? Product owners, UX and interaction designers as well. Producing quality software product is a team effort!

    In every step of the way Kiwi TCMS can provide notification wizards, guidelines and/or documentation for best practices, facilitate tooling, e.g. to write user stories and assess them or map out an exploratory testing session, etc. The list of ideas is virtually endless. We can even go into deep learning, AI and blockchain, but honestly who knows how to use them in testing?

    Our team is not quite sure what this goal will look like 3 months from now, but we are certain that testing needs to happen first, last and all the time during the entire software development lifecycle. By providing the necessary functionality and tools in Kiwi TCMS we can boost engineering productivity and steer the testing process in your organization into a better, more productive direction which welcomes participation from all engineering groups.

    Let's consider another point of view: testing is a creative activity which is benefited by putting your brain into a specific state of mind! For example Gherkin (the Given-When-Then language) has the benefit of forcing you to think about behavior and while doing so you are vocalizing the various roles in the system, what kind of actions are accepted and what sort of result is expected! Many times this will help you remember or discover missing scenarios, edge cases and raise even more questions!

    Crazy ideas, brain dumps and +1 as always are welcome in Issue #703.


    Coding alone is not fun! Here's what you can do to help us:

    We are also looking to expand our core team and the list of occasional contributors. The following are mostly organizational goals:

    • Goal: participate in 5 conferences with a project stand
    • Goal: define how we find, recruit and onboard new team members. The foundation is already set in TP-3
    • Goal: clearly mark GitHub issues which are suitable for external contributors who don’t want to spend lots of time learning how Kiwi TCMS works under the hood. We're going to tag all such issues with the GitHub help wanted label

    Development policy

    Our team will be working on areas related to the goals above. A +1 reaction on GitHub issues will help us prioritize what we work on!

    GitHub +1

    Bug fixes and other issues will be occasionally slipped into the stream and pull requests from non-team contributors will be reviewed and merged in a timely fashion.

    There is at least 1 full day of work that goes behind the scenes when a new version is officially released: compile changelog, build images and upload them, create blog post and newsletter announcement, share on social media, etc. We also deploy on our own Kiwi TCMS instance as a stop-gap measure before making everything public!

    New PyPI tarballs and Docker images will be released every few weeks as we see fit, this has been our standard process. We try to align releases with Django's release schedule and try to cut a new version when there are known security vulnerabilities fixed. However we can't guarantee this will always be the case!

    If you are in a hurry and need something quickly the best option is to send a pull request, build your own Docker image from source and maybe consider sponsoring us via Open Collective!

    Happy testing!

    python-bugzilla + bugzilla 5.0 API keys

    Posted by Cole Robinson on January 09, 2019 10:58 PM
    For many uses of /usr/bin/bugzilla and python-bugzilla, it's necessary to actually be logged in to a bugzilla server. Creating bugs, editing bugs, querying private data, etc.

    Up until now anyone that's used the command line tool has periodically had to do a 'bugzilla login' to refresh their authentication cache. In older bugzilla versions this was an HTTP cookie, more recently it's a bugzilla API token. Generally 'login' calls were needed infrequently on a single machine as tokens would remain valid for a long time.

    Recently, bugzilla.redhat.com received a big update to bugzilla 5.0. However with that update it seems like API tokens now expire after a week, which has necessitated lots more 'bugzilla login' calls than I'm used to.

    Thankfully with bugzilla 5.0 and later there's a better option: API keys. Here's how to use them transparently with /usr/bin/bugzilla and all python-bugzilla library usage. Here are the steps for enabling API keys with bugzilla.redhat.com, but the same process should roughly apply to other bugzilla instances too.

    Login to the bugzilla web UI, click your email, select Preferences, select API Keys. Generate an API key with an optional comment like 'python-bugzilla'. Afterwards the screen will look something like this:

    MY-EXAMPLE-API-KEY is not my actual key, I just replaced it for demo purposes. The actual key is a long string of characters and numbers. Copy that string value and write a bugzillarc file like this:

    $ cat ~/.config/python-bugzilla/bugzillarc
    [bugzilla.redhat.com]
    api_key=MY-EXAMPLE-API-KEY

    That's it, /usr/bin/bugzilla and python-bugzilla using tools should pick it up automagically. Note, API keys are as good as passwords in certain ways, so treat it with the same secrecy you would treat a password.

    Blog-o-Matic - quickly get a GitHub hosted blog with Pelican, Elegant with little setup steps.

    Posted by Pablo Iranzo Gómez on January 09, 2019 09:00 PM


    I’ve already covered some articles about automation with Travis-CI and GitHub, and one thing that seems to be a show-stopper for many users when trying to build a website is, on one side, the investment (domain, hosting, etc.) and, on the other, the backend being used (WordPress, static generators, etc.)…

    While preparing a talk for a group of coworkers covering several of those aspects, I came up with the idea to create Blog-o-Matic, implementing many of those ‘learnings’ in a ‘canned’ way that is easy for users to consume.

    The approach

    Blog-o-Matic, uses several discussed topics so far:

    • GitHub and GH Pages for hosting the source and the website
    • travis-ci.org for automating the update and generation process
    • Pelican for static rendering of your blog from the markdown or asciidoc articles
    • Elegant for the ‘theme’
    • peru for automating repository upgrades for plugins, etc.

    The setup process is outlined in its README.md and just requires a few setup steps; from that point on, your website will be published each time you commit a new document to the content folder.

    You can also check the ‘generated’ website after installation via https://iranzo.github.io/blog-o-matic

    Which new Fedora logo design do you prefer?

    Posted by Máirín Duffy on January 09, 2019 08:39 PM
    <figure class="wp-block-image">Fedora Design Team Logo</figure>

    As I mentioned in an earlier post, the Fedora design team has been working on a refresh of the Fedora logo. This work started in a Fedora design ticket at the request of the Fedora Project Leader Matthew Miller, and has been discussed openly in the ticket, on the council list, on the design-team list, and within the Fedora Council including at their recent hackfest.
    In this post, I’d like to do the following:

    • First, outline the history of our logo and how it got to where it is today. It’s important to understand the full context of the logo when analyzing it and considering change.
    • I’d then like to talk about some of the challenges we’ve faced with the current iteration of our logo for the past few years, with some concrete examples. I want you to know there are solid and clear reasons why we need to iterate our logo – this isn’t something we’re doing for change’s sake.
    • Finally, I’d like to present two proposals the Fedora Design Team has created for the next iteration of our logo – we would very much like to hear your feedback and understand what direction you’d prefer us to go in.

    Wait, you’re doing what?

    Yes, changing the logo is a big deal. While the overarching goal here is evolving the logo we already have with some light touches rather than creating something new, it’s a change regardless. The logo is central to our identity as a project and community, and even iterations on the 13-year-old current version of our logo are really visible.
    This is a wide-reaching change, and will affect most if not all parts of the Fedora community. If we’re going to do something like this, it’s not something to be done lightly. This isn’t the first (or second) time we’ve changed our logo, though!
    The final proposal of the Fedora logo from Nov 2005; lighter blue is darker, f's crossbar is much shorter

    A history of Fedora’s logo, 2003 to 2019

    I have been around the Fedora project since 2004, and for most of that time I’ve been the primary caretaker of the Fedora logo. I’m the author and maintainer of the current Fedora Logo Usage Guidelines document, created and maintain the Fedora Logo History page, and have maintained the Fedora logo email request queue and led the Fedora Design Team for most of the past 15 years. I’ve witnessed and taken part in most of the decisions that have been made about our logo over the years. The information we’re going to go through should therefore, for the most part, be regarded as accurate, and where I thought it would be helpful I’ve linked to primary source documents below.
    Here is the very first Fedora project logo used in Fedora Core 1 through Fedora Core 4, for at least two years (I believe a simple wordmark using an italic and extra bold / black version of a Myriad typeface):
    Original Fedora logo, in a bold italic Myriad font
    A couple of years later came the initial public proposal for a complete redesign from Matt Muñoz (at that time at CapStrat) in November 2005:

    Original Fedora logo. Ends of the F's were much longer and curled, and the lighter blue color was brighter.

    With some feedback back and forth, this was the final result:

    The final proposal of the Fedora logo from Nov 2005; lighter blue is darker, f's crossbar is much shorter
    You can see that:

    • The lighter Fedora blue used in the infinity symbol was darkened and made less cyan
    • The color of the ‘fedora’ text was originally in the dark blue and was swapped for the lighter blue in our current version (this actually results in poorer contrast.)
    • Both blues in the final version were shifted more towards purple from a cyan tint.
    • The shape of the ‘f’ in the infinity mark was changed too – the ends of the f were blunted and the crossbar of the f was made longer.
    • Proportionally, the Fedora infinity logomark was made smaller in proportion to the Fedora wordmark.

    Note too, this was 2005, and we only had a handful of high-quality free and open source fonts available to us. This logo is set in a proprietary font called Bryant (the v. 2 2005 version) designed by Eric Olson.  That is one of the reasons we decided to redesign the original sublogo design created for the Fedora logo, which looked like this:

    These sublogos relied on the designer having access to Bryant, which would necessarily restrict how and who on a community design team (which was just forming at the time) could create new sublogos for the project. They also relied on having a wide palette of colors distinguishable yet harmonious with the brand, without an understanding of how many sublogos there might actually be, so scalability was an issue. (I would guess we have hundreds. We have sublogos for different teams, different geographical groups, lots and lots o’ apps…)
    This is what the Fedora Design Team ended up creating as a replacement for this design, which uses the free & open source font Comfortaa by Johan Aakerlund (who kindly licensed it under an open source license at our request):
    Fedora sublogo design - uses the FLOSS font Comfortaa alongside Fedora logo elements.
    Note that even the current sublogo design shown above was not the only one we’ve used – we originally had a sublogo design that used the free & open source font MgOpen Modata created by Magenta, and that was in use for around four years (example design that used it.) We fully / officially transitioned over to Comfortaa (first suggested by design team member Luya Tshimbalanga) back around 2010, because on the design team we felt the shape of its letters better coordinated with the shapes of the Bryant lettering in the logo, and because MgOpen Modata did not support even basic acute marks, which was problematic for our global community. (We had considered multiple other FLOSS fonts as you can see in our initial requirements document for the change.)

    This has to be said: A soapbox

    I just want to say that the fact the design-team and marketing mailing lists among others have been on mailman for so many years, and because we have Hyperkitty deployed in Fedora, researching all of the specific facts, dates, and circumstances around the history of the logo was quick, easy, and painless and resulted in my being able to link you up to primary source documents (and jog my own memory) above with little effort. I was able to search 15 years of history across all of our mailing lists with one quick query and find what I was looking for right away. I continue to be acutely and deeply concerned about the recent Balkanization of our communications within the Fedora project, but am grateful that Hyperkitty ensured, in this case, that important parts of our history have not been lost to time.

    I hope this history of the Fedora logo demonstrates that our logo and brand over time have not been static, nor is the logo we use today the first logo the project ever had. Understandably, the notion of changing our logo can feel overwhelming, but it is not something new to us as a project.

    The challenges

    The Fedora logo today probably seems benign and unproblematic to most folks, but for those of us who work with it frequently (such as members of the Fedora Design Team), it has some rough edges we deal with frequently. I would classify those issues as technical / design issues. Let’s walk through them.

    Technical Issues

    It doesn’t work at all in a single color

    The Fedora logomark necessarily requires more than one color to render:

    • a color for the bubble background
    • a color for the ‘f’
    • a color for the infinity

    This makes a single-color version of the logo impossible. (Note single color means one color, not shades of grey.) This has caused us a number of issues over the years, from printing swag with the full logo on it when the vendors only allow single color on particular items (in these cases, we use only the ‘fedora’ wordmark and have to drop the infinity bubblemark, or pay much more money for multiple color prints) to causing issues with our ability to be iconified in libraries of Linux and open source project logos.
    This recently caused an issue when an attempted one-colorization of our logo (the infinity symbol was dropped, against our guidelines) was submitted to font-awesome without our permission; because the distribution of that icon library is so wide and I didn’t want the broken logo proliferating, I had to work over my Christmas holiday to come up with a one-color version of the logo as a stopgap because that library doesn’t have a way of removing a logo once submitted.

    The solution above is problematic. I say this having created it. It’s a hack – it’s using diagonal hash marks to simulate a second color, which doesn’t scale well and can cause blurriness, glitching, and artifacts on screen display, and also particularly at small sizes won’t work for printing on swag items (the hatch lines are too fine for screen printing processes to reproduce reliably across vendors.) It’s truly a stopgap and not a long-term solution.

    It doesn’t work well on a dark background, particularly blue ones

    You’ve probably seen it – it’s unavoidable. I call it the logo glow. If you want to put the Fedora logo on a dark background – particularly a dark blue background! – to get enough contrast to have it stand out from the background, you have to add a white keyline or a white ‘glow’ to the back of the logo to create enough contrast that it doesn’t melt into the background.
    This is against the logo usage guidelines, by the way. It adds an additional, non-standardized element to the logo and it changes the look and character of the logo.
    If you do a simple search for “fedora wallpaper” on an image search engine, these are the sorts of results you’ll turn up, exemplifying the logo glow – I promise I didn’t search for “fedora glow”:

    Part of the reason the logo has bad contrast with dark backgrounds is because the infinity bubble is necessarily a dark color. This is related to the fact the logo cannot be displayed in one color. If our logo had a symbol that could be one-color, then display on a dark background is a fairly trivial prospect – you can invert the color of the logo to a light color, like white, and the problem is solved. Since the design of our logo mark requires at least two separate colors in a very specific configuration (you can’t swap the background bubble for a light color and make the infinity color dark), we have this challenge.
    I have also seen third parties invert the logo to try to deal with this issue – this is against the guidelines and looks terrible, but perhaps you’ve seen it in the wild, too. On duckduckgo.org image search, this was in the first few hits for “fedora logo” today (note it also uses the wrong, original proposal ‘f’ shape from November 2005):

    Typically on the design team we’ve dealt with this using gradients in a clever way, whether inside the dark blue bubble of the logo itself, in the background, or a combination of the two. Here is an example – you can see how we positioned the logo relative to the lighter part of the gradient to ensure enough contrast:

    While this solution is workable and we’ve used it many times, it still results in artwork (sometimes even official artwork) ending up with the glow. The problem comes up over and over and constrains the type of artwork we can do. Also note the gradient solution will not work for printed objects, making it difficult to print a good-looking Fedora logo on a dark-colored t-shirt or any blue-colored item. The gradient solution is also far less reliable in web-based treatments of the logo across platforms, where we cannot guarantee where exactly within a gradient the logomark may fall across screen sizes.

    It’s hard to center the mark visually in designs

    The ‘bubble’ at the back of the Fedora logomark is meant to be a stylized speech bubble, symbolizing the ‘voice of the community.’ Unfortunately, it’s also a lopsided shape that is deceptively difficult to center. Visualize it as a square – three of its four edges are rounded, so if you center it programmatically using HTML/CSS or a creative tool like Inkscape, visually it just won’t be centered. You don’t have to take my word for it; here’s a demonstration:

    The two rounded edges on the right in comparison to the straight edge on the left make the programmatically centered version appear shifted slightly to the left; typically this requires manually nudging the logomark to the right a few pixels when trying to center it against anything. The reason this happens is because the programmatic center is calculated based on the exact distance between the rightmost point of the image and the leftmost point. The rounded right side of the image has only one point in the horizontal center of the shape that sticks out the most, whereas the straighter left side has many more points at the left extreme used in this calculation.
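This effect is easy to demonstrate numerically. Below is a small sketch (the polygon is a hypothetical stand-in with a flat left edge and a bulging right side, like the bubble): the "programmatic center" that layout tools compute is the midpoint of the horizontal extremes, while the eye tracks something closer to the area centroid.

```python
def polygon_centroid_x(pts):
    """x coordinate of a polygon's area centroid, via the shoelace formula."""
    a = cx = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross           # accumulates 2 * signed area
        cx += (x0 + x1) * cross
    return cx / (3.0 * a)    # = cx_sum / (6 * area), since a = 2 * area

# Flat left edge, protruding right side -- a stand-in for a shape whose
# left side is straight and whose right side bulges out.
pts = [(0, 0), (1, 0), (1.5, 0.5), (1, 1), (0, 1)]

xs = [x for x, _ in pts]
bbox_center_x = (min(xs) + max(xs)) / 2    # what align tools center on: 0.75
visual_center_x = polygon_centroid_x(pts)  # where the visual mass sits: ~0.633
```

Because the bounding-box center (0.75) sits to the right of the centroid (about 0.633), aligning the bounding-box center onto a target point leaves the visible mass to the left of the target, which is exactly the leftward shift the demo shows.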
    This is an annoying problem to keep on top of.

    The ‘superscript’ logo bubble position makes the entire logo hard to position

    One of the things that is unique about our current logo design that also causes confusion is the placement of the bubble relative to the “fedora” text.
    The final proposal of the Fedora logo from Nov 2005; lighter blue is darker, f's crossbar is much shorter
    It’s almost like a superscript on the text itself. While the logotype (text alone) has a typical basic rectangular shape, the bubble throws it off, pushing both the upper extreme and the right extreme of the shape out and creating some oddly-shaped negative space:

    It’s almost like the shape of a hooved animal, like a cow, with the logomark as the head. The imbalanced negative space gives the logo a bit of a fragility in appearance, as if it could be tipped over into that lower right negative space. It also makes the logo extremely difficult to center both vertically and horizontally. Similarly to how we compensate for this as shown in the demo above for the logomark, we have to manually tweak the position of the full logo by eye to center it relative to other items both vertically and horizontally.
    This impacts the creation of any Fedora-affiliated logo, sublogo, or partnership involving multiple logos (such as a list of sponsor logos on a t-shirt or on a conference program.)
    It means our logo cannot be properly centered in a programmatic way. While those of us on the Fedora Design Team and other teams within Fedora are aware of the issue and compensate for it naturally, those less familiar with our logo – like other projects we may be partnering with, or vendors, or any algorithmic handling of our logo (in an app or on a website) – are not going to be aware of it. Our logo is going to look sloppy in these scenarios where automatic centering is employed, and for those who catch the issue, it’s going to demand more time and care than should be necessary to work with the logo.
    The position of the logomark is also so atypical that it’s been assumed to be a mistake, and some third parties have tried modifying it to a more traditional position and proportion to the logotype to ‘fix’ it. Here is an example of this I found in the wild (again, from close to the top of hits received from a duckduckgo.com image search for ‘fedora logo’):

    The ‘a’ in ‘fedora’ can look like an ‘o’

    The final proposal of the Fedora logo from Nov 2005; lighter blue is darker, f's crossbar is much shorter

    Bryant is a stylized font, and the ‘a’ in Fedora has on occasion been confused for an ‘o.’ It’s not a major call-the-fire-department type of issue, just one of those long simmering annoyances that adds to everything else.

    Technical Issues Summary

    Ok, so… that was a lot of problems to walk through. These aren’t all obvious on the surface, but if you work with the logo regularly as many Fedora Design Team members do, these are familiar issues that probably have you nodding your head. The more ‘special treatment’ our logo requires to look good, and the more hacks and crutches we need to create to help it, the less chance it’ll be treated correctly by those who need to use it but have less experience with it. No single one of these issues is insurmountable, but together they do all add up.
    On top of that, there are two more challenges we deal with around our current logo. Let’s talk about them.

    Other Challenges

    War of the f’s

    The Fedora project was started in 2003 and the current version of our logo was developed in 2005. Facebook existed in 2005 (it was launched in 2004) in a limited capacity: it was nowhere near as ubiquitous and pervasive as it is today, and was restricted to only .edu accounts at select universities, starting with Ivy League colleges (accounts didn’t open up to the general public until 2006.)

    I do not know when Facebook started using its white lowercase f on a blue square icon/logo, but based on Fedora ambassador reports, I am guessing it became pervasive around 2009/2010 when smartphone and tablet usage ramped up and the blue square f was likely used as its first smartphone/tablet icon.
    Here’s a couple of long, early email threads where Fedora community members encountered confusion around the Fedora logo and the Facebook logo:

    “A word about F…acebook” started by Sascha Thomas Spreitzer

    Wednesday 5 May 2010, 17 comments 14 participants

    • “Yeah, I’ve had the same remark from lots of people when they see the Fedora logo on my tshirts.”
    • ” the blue sticker on the back of my suv has caused people to assume facebook too.”
    • “I’ve stopped wearing my Fedora baseball cap I bought from brandfuel stores because of a similar situation. I had 5 people within a time frame of only a couple hours ask me why I had a Facebook hat on. Sad times.”
    • “I used to carry a backpack with a Fedora sticker on it. Ended up pasting that sticker on my desktop after constant “Oh sweet, where’d you get that Facebook sticker” questions, so I know how you feel.”
    “Feedback from Distrowatch” started by Rahul Sundaram

    Tuesday 8 May 2012, 14 comments 14 participants

    • “What I found a bit amusing was that almost a third of the participants thought Fedora was some sort of Facebook plugin or application instead of a full operating system, due to the project’s logo.”
    • “Greetings Fedorans, this whole logo thing has been going on for a while. I use it to my advantage and as part of my Linux advocacy. I have a Fedora sticker in front of *acebook*.”

    The confusion between the two logos has been a long-running annoyance. My own young daughters called the Fedora logo “Mama ‘f'” because of my Fedora stickers – but if they see Facebook open on a computer or phone, they do point at the logo and say, “Mama f” as if it’s the same thing!
    While addressing the confusion between the two marks would likely not be a reasonable justification to change the logo, creating more differentiation between the two would be a helpful tweak that could be worked into a redesign.

    Closed source font

    For a very long time, I’ve personally been irked by the fact that a logo that in part represents software freedom, a logo that represents a community so dedicated to software freedom, is comprised of a wordmark with a closed, proprietary font. We have wanted to swap it out for a FLOSS font for a long time, and I’ve tried and failed to make that change happen in the past.
    In historical context, it makes sense for a logo created in 2005 – even one for a FLOSS project – to make use of a closed font. In 2019, however, it makes less sense. There are large libraries of free and open source fonts out there now, including fontlibrary.org and google fonts, so the excuse of there not being enough high-quality, openly-licensed fonts available just no longer stands.
    A logo is a symbol, and a logo using an open source font would better represent who we are and what we do symbolically.

    Where we are now

    “All right,” you must be thinking. “That’s a hell of a lot of problems. How can we possibly fix them?”
    About three months ago, I had a conversation with our project leader Matthew Miller about these issues. He is familiar with all of them and thought maybe we should see if the Fedora Council and if our community would be open to a change. He kicked things off with a thread on the fedora-council list:
    “Considering a logo refresh” started by Matthew Miller on 4 October 2018
    From there, we agreed that since the initial reception to the idea wasn’t awful, he opened up a formal design team ticket, and the rest of the design team and I started working on some ideas. As we just wanted to address the issues identified and not make a big change for change’s sake, I started off by trying the very lightest touches I could think of:

    With these touches, you can see direct correlations with the issues we’ve walked through:

    1. The current logo
    2. Normalize mark placement – this relates to “The ‘superscript’ logo bubble position makes the entire logo hard to position” above
    3. Brighten colors – this helps differentiate from Facebook’s blue
    4. Open source font & Balance Bubble – the font change relates to “Closed source font” above, and balancing the bubble relates to “It’s hard to center the mark visually in designs” above
    5. Match bubble ‘f’ to logotype – another attempt to differentiate from the shape of the “f” in Facebook’s mark
    6. Attempt to make single color – failed, but tried to address “It doesn’t work at all in a single color” above
    7. Drop bubble – relates to both single color and imbalance of the bubble mark
    8. Drop infinity – another attempt to make one-color
    9. Another attempt at one-color compatible mark

    We started working on infinity and f only designs to try to get away from using the bubble so we could have a one-color friendly logo. In order to give a bit more balance to this type of infinity-only mark, we tried things like changing the relative sizes of the curves of the infinity:

    We tried playing with perspective:

    And we tried all different types of creating a “Fedora-like” f:

    These were all explorations in trying to tweak the logo we already had to minimize change.
    We also had a series of work done on trying to come up with an new, alternative f mark that was less problematic but still looked ‘Fedora-ish’:

    I invite you to go through Design Ticket #620 which is where all of this work happened, and you can see how this work unfolded in detail, with the back and forth between designers and community members and active brainstorming. This process took place pretty much entirely within the pagure ticket, so everything is there.

    The Proposals

    we need your help!
    Eventually, as all great design brainstorming processes go, you have to pick a direction, refine it, and make a final decision. We need your help in picking a direction. Here are two logo candidates representing two different directions we could go in for a Fedora logo redesign:

    • Do you have a preference?
    • How do you feel about these?
    • What would you change?
    • Do you think each solves the issues we outlined?
    • Is one a better solution than the other?

    The most useful feedback is stated as a problem, not a solution. E.g., if you suggest changing an element, to understand your perspective it’s helpful to know why you seek to change that element. Also note that while “I don’t like X” or “I like Y” is a perfectly valid reaction, it’s not particularly helpful unless you can dig in a little deeper and share with us why you feel that way, what specific technical details of the logo (shape, contrast, color, clarity, connotation, meaning, similarity to something else, etc.) you think triggered the feeling.

    Please also note this is not a vote. We would love your feedback in order to iterate and push the designs forward. If this was a vote or poll, we’d set one up using the proper software. We want feedback on why you like, don’t like, or otherwise react to what you see here. We are not going to tally “votes” here and make a decision based on that. Here is an example of a very productive and helpful set of feedback that resulted in a healthy back and forth with a new direction for the designs. Providing feedback on specific components of the logo is great brain food for making it better!

    Candidate #1

    This design has a flaw in that it still includes a bubble mark, which comes with all of the alignment headaches we’ve talked about. However, its position relative to the logotype is changed to a more typical layout (mark on the left, a bit larger than it is now) and this design allows for the mark to be used without the bubble (“mark sans bubble”) in certain applications. Both variants of the mark are one-color capable.
    The font is a modified version of Comfortaa that is hand-kerned and has a modified ‘a’ to lessen confusion with ‘o’.
    As the main goal here was really a light touch to address the issues we have, you can see that items like the Fedora remix logo and sublogos are only lightly affected: the ‘remix’ logo text is changed to Comfortaa, and the ‘fedora’ logotext in all sublogos is updated.
    You can see in the sample web treatment, you can make some neat designs by clipping this mark on top of a photo, as is done under “Headline Example” with the latest Fedora wallpaper graphic.
    This candidate I believe represents the least amount of change that addresses most of the issues we identified.

    Candidate #2

    As with candidate #1, the font is a modified version of Comfortaa that is hand-kerned and has a modified ‘a’ to lessen confusion with ‘o’.
    The mark has changed the ratio of sizes between the two loops of the infinity, and has completely dropped the bubble in the main version of the logo. However, as an alternative possibility, we could offer in the logo guidelines the ability to apply this mark on top of different shapes.
    As with candidate #1, the main goal here was really a light touch to address the issues we have, you can see that items like the Fedora remix logo and sublogos are only lightly affected: the ‘remix’ logo text is changed to Comfortaa, and the ‘fedora’ logotext in all sublogos is updated.
    This logo candidate is more of a departure from our current logo than candidate #1. However, it is a bit closer in design to the various icons we have for the Fedora editions (server, atomic, workstation) as it’s a mark that does not rely on contrast with another shape, it’s free form and stands on its own without a background.

    We would love to hear your constructive and respectful feedback on these design options, either here in the blog comment or on the design team ticket. Thanks for reading this far!

    Phoenix joins the LVFS

    Posted by Richard Hughes on January 09, 2019 02:15 PM

    Just like AMI, Phoenix is a huge firmware vendor, providing the firmware for millions of machines. If you’re using a ThinkPad right now, you’re most probably using Phoenix code in your mainboard firmware. Phoenix have been working with Lenovo and their ODMs on LVFS support for a while, fixing all the niggles that were stopping the capsule from working with the loader used by Linux. Phoenix can help customers build deliverables for the LVFS that use UX capsule support to make flashing beautiful, although it’s up to the OEM if that’s used or not.

    It might seem slightly odd for me to be working with the firmware suppliers, rather than just OEMs, but I’m actually just doing both in parallel. From my point of view, both of the biggest firmware suppliers now understand the LVFS, and provide standards-compliant capsules by default. This should hopefully mean smaller Linux-specific OEMs like Tuxedo and Star Labs might be able to get signed UEFI capsules, rather than just getting a ROM file and an unsigned loader.

    We’re still waiting for the last remaining huge OEM, but fingers crossed that should be any day now.

    PHP 7.0 is dead

    Posted by Remi Collet on January 09, 2019 02:11 PM

    After PHP 5.5, and as announced, PHP version 7.0.33 was the last official release of PHP 7.0

    Which means that after version 7.1.26, 7.2.14 and 7.3.1 releases, some security vulnerabilities are not, and won't be, fixed by the PHP project.

    To keep a secure installation, the upgrade to a maintained version is strongly recommended:

    • PHP 7.2 is in active support mode, and will be maintained until December 2019 (2020 for security).
    • PHP 7.3 is in active support mode, and will be maintained until December 2020 (2021 for security).


    However, given the very large number of downloads by the users of my repository (~10%), the version is still available in the remi repository for Enterprise Linux (RHEL, CentOS...) and Fedora (Software Collections) and includes the latest security fixes.

    Warning: this is a best-effort action, depending on my spare time, without any warranty, only to give users more time to migrate. This can only be temporary, and upgrading must be the priority.

    Version 7.0.33-2 includes fix backported from upcoming 7.1.26.

    Base packages (php)

    Software Collections (php70)

    Smarter tabular editing with Vim

    Posted by Rajeesh K Nambiar on January 09, 2019 11:44 AM

    I happen to edit tabular data in LaTeX format quite a bit. Being scientific documents, the table columns are (almost) always left-aligned, even for numbers. That warrants carefully crafted decimal and digit alignment on such columns containing only numbers.

    I also happen to edit the text (almost) always in Vim, and just selecting/changing a certain column only is not easily doable (like in a spreadsheet). If there are tens of rows that need manual digit/decimal alignment adjustment, it gets even more tedious. There must be another way!

    Thankfully, smarter people already figured out better ways (h/t MasteringVim).

    With that neat trick, it is much more palatable to look at the tabular data and edit it. Even then, though, it is not possible to search & replace only within a column using Visual Block selection. The Visual Block (^v) sets marks from the column on the first row to the column on the last row, so any :'<,'>s/.../.../g would replace any matching text in between (including in other columns).

    To solve that, I’ve figured out another way. It is possible to copy the Visual Block alone and paste any other content over it (though cutting it and pasting would not work as you might think). Thus, the plan is:

    • Copy the required column using Visual Block (^v + y)
    • Open a new buffer and paste the copied column there
    • Edit/search & replace to your need in that buffer, so nothing else would be unintentionally changed
    • Select the modified content as Visual Block again, copy/cut it and come back to the main buffer/file
    • Re-select the required column using Visual Block again and paste over
    • Profit!
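The steps above can be sketched as a key/command sequence (the substitution shown is illustrative; your column edit will differ):

```vim
" 1. In the main file: Ctrl-v, extend over the column, then y to yank the block
" 2. Open a scratch buffer and paste the column there:
:new
"    (in the new buffer) p
" 3. Edit freely, e.g. a substitution that now cannot touch other columns:
:%s/,/./g
" 4. Re-select everything as a Visual Block (Ctrl-v G $), yank with y, then :q!
" 5. Back in the main file: Ctrl-v over the target column, then p to paste over
```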

    Here’s a short video of how to do so. I’d love to hear if there are better ways.

    <figure class="aligncenter is-resized">Column editing in Vim<figcaption>Demo of column editing in Vim</figcaption></figure>

    Fedora classroom: Building Container images with Buildah

    Posted by Fedora Magazine on January 09, 2019 09:00 AM

    Fedora Classroom sessions continue with an introductory session on the use of Buildah to create container images. The general schedule for sessions is available on the wiki, along with resources and recordings from previous sessions.

    Topic: Building container images with Buildah

    Containers are becoming the de facto standard for building and distributing applications. Fedora as a modern operating system already supports container use by default. As with every new technology, there are different applications and services available for adopting it. This classroom will explain and demonstrate the Buildah command line tool for building container images and its implementation in Fedora 29.

    Here’s the agenda for the Classroom session:

    • Quick overview: what is a container image?
    • Deep dive into container architecture.
    • Container runtimes.
    • Building container images from the command line.
    • Building container images using a Dockerfile.
    • Running Buildah within a container.

    When and where

    • The session will be held on the Jitsi video-conferencing platform. Please use this URL to join the session: https://meet.jit.si/20190115-buildah
    • It will be held on Tuesday, January 15 at 1600 UTC. (Please click the link to see the time in your time zone.)


    Dan Walsh is a Distinguished Engineer for Red Hat. Dan is a recognized expert in Linux security and container technologies. He has been working on container technologies for the last 17 years at Red Hat, and now leads the Container Runtime team there, responsible for the CRI-O, Buildah, Podman, and Skopeo projects.

    AdamW’s Debugging Adventures: Python 3 Porting 201

    Posted by Adam Williamson on January 09, 2019 04:12 AM

    Hey folks! Time for another edition of AdamW’s Debugging Adventures, wherein I boast about how great I am at fixin’ stuff.

    Today’s episode is about a bug in the client for Fedora’s Koji buildsystem which has been biting more and more Fedora maintainers lately. The most obvious thing it affects is task watching. When you do a package build with fedpkg, it will by default “watch” the build task – it’ll update you when the various subtasks start and finish, and not quit until the build ultimately succeeds or fails. You can also directly watch tasks with koji watch-task. So this is something Fedora maintainers see a lot. There’s also a common workflow where you chain something to the successful completion of a fedpkg build or koji watch-task, which relies on the task watch completing successfully and exiting 0, if the build actually completed.

    However, recently, people noticed that this task watching seemed to be just…failing, quite a lot. While the task was still running, it’d suddenly exit, usually showing this message:

    ConnectionError: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response',))

    After a while, nirik realized that this seemed to be associated with the client going from running under Python 2 by default to running under Python 3 by default. This seems to happen when running on Python 3; it doesn’t seem to happen when running on Python 2.

    Today I finally decided it had got annoying enough that I’d spend some time trying to track it down.

    It’s pretty obvious that the message we see relates to an exception, in some way. But ultimately something is catching that exception and printing it out and then exiting (we’re not actually getting a traceback, as you do if the exception is ultimately left to reach the interpreter). So my first approach was to dig into the watch-task code from the top down, and try and find something that handles exceptions that looks like it might be the bit we were hitting.

    And…I failed! This happens, sometimes. In fact I still haven’t found the exact bit of code that prints the message and exits. Sometimes, this just happens. It’s OK. Don’t give up. Try something else!

    So what I did next was kind of a long shot – I just grepped the code for the exception text. I wasn’t really expecting this to work, as there’s nothing to suggest the actual exception is part of Koji; it’s most likely the code doesn’t contain any of that text at all. But hey, it’s easy to do, so why not? And as it happened, I got lucky and hit paydirt: there happens to be a comment with some of the text from the error we’re hitting. And it sure looks like it might be relevant to the problem we’re having! The comment itself, and the function it’s in, looked so obviously promising that I went ahead and dug a little deeper.

    That function, is_conn_error(), is used by only one other thing: this _sendCall() method in the same file. And that seems very interesting, because what it does can be boiled down to: “hey, we got an error! OK, send it to is_conn_error(). If that returns True, then just log a debug message and kick the session. If that returns False, then raise an exception”. That behaviour obviously smells a lot like it could be causing our problem. So, I now had a working theory: for some reason, given some particular server behaviour, is_conn_error() returns True on Python 2 but False on Python 3. That causes this _sendCall() to raise an exception instead of just resetting the session and carrying on, and some other code – which we no longer need to find – catches that exception, prints it, and quits.

    The next step was to test this theory – because at this point it’s only a theory, it could be entirely wrong. I’ve certainly come up with entirely plausible theories like this before which turned out to be not what was going on at all. So, like a true lazy shortcut enthusiast, I hacked up my local copy of Koji’s __init__.py and sprinkled a bunch of lines like print("HERE 1!") and print("HERE 2!") through the whole of is_conn_error(). Then I just ran koji watch-task commands on random tasks until one failed.

    This is fine. When you’re just trying to debug the problem you don’t need to be super elegant about it. You don’t need to do a proper git patch and rebuild the Koji package for your system and use proper logging methods and all the rest of it. Just dumping some print lines in a working copy of the file is just fine, if it works. Just remember to put everything back as it was before later. 🙂
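The instrumented function looked roughly like this – a sketch of the shape, not Koji's actual code, with marker strings chosen arbitrarily:

```python
import errno
import socket

def is_conn_error(e):
    """Sketch of the function under investigation, sprinkled with markers."""
    print("HERE 1!")                      # did we enter the function at all?
    if isinstance(e, socket.error):
        print("HERE 2!")                  # did we take the socket.error branch?
        if getattr(e, "errno", None) in (
            errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE,
        ):
            return True
        print("HERE 3!")                  # errno check failed: returning False
        return False
    # ...further checks for other exception types follow in the real code...
    return False
```

Running the failing command with this in place shows exactly which markers fire, and therefore which path the exception takes through the function.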

    So, as it happened the god of root causing was on my side today, and it turned out I was right on the money. When one of the koji watch-task commands failed, it hit my HERE 1! and HERE 3! lines right when it died. Those told me we were indeed running through is_conn_error() right before the error, and further, where we were coming out of it. We were entering the if isinstance(e, socket.error) block at the start of the function, and returning False because the exception (e) did appear to be an instance of socket.error, but either did not have an errno attribute, or it was not one of errno.ECONNRESET, errno.ECONNABORTED, or errno.EPIPE.

    Obviously, this made me curious as to what the exception actually is, whether it has an errno at all, and if so, what it is. So I threw in a few more debugging lines – to print out type(e), and getattr(e, 'errno', 'foobar'). The result of this was pretty interesting. The second print statement gave me ‘foobar’, meaning the exception doesn’t have an errno attribute at all. And the type of the exception was…requests.exceptions.ConnectionError.

    That’s a bit curious! You wouldn’t necessarily expect requests.exceptions.ConnectionError to be an instance of socket.error, would you? So why are we in a block that only handles instances of socket.error? Also, it’s clear the code doesn’t expect this, because there’s a block later in the function that explicitly handles instances of requests.exceptions.ConnectionError – but because this earlier block that handles socket.error instances always returns, we will never reach that block if requests.exceptions.ConnectionError instances are also instances of socket.error. So there’s clearly something screwy going on here.

    So of course the next thing to do is…look up socket.error in the Python 2 and Python 3 docs. ANY TIME you’re investigating a mysterious Python 3 porting issue, remember this can be useful. Here’s the Python 2 socket.error entry, and the Python 3 socket.error entry. And indeed there’s a rather significant difference! The Python 2 docs talk about socket.error as an exception that is, well, its own unique thing. However, the Python 3 docs say: “A deprecated alias of OSError.” – and even tell us specifically that this changed in Python 3.3: “Changed in version 3.3: Following PEP 3151, this class was made an alias of OSError.” Obviously, this is looking an awful lot like one more link in the chain of what’s going wrong here.

    With a bit of Python knowledge you should be able to figure out what’s going on now. Think: if socket.error is now just an alias of OSError, what does if isinstance(e, socket.error) mean, in Python 3.3+ ? It means just the same as if isinstance(e, OSError). And guess what? requests.exception.ConnectionError happens to be a subclass of OSError. Thus, if e is an instance of requests.exception.ConnectionError, isinstance(e, socket.error) will return True in Python 3.3+. In Python 2, it returns False. It’s easy to check this in an interactive Python shell or with a test script, to confirm.
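A few lines in a Python 3 shell confirm it; since requests may not be installed everywhere, a stand-in subclass of OSError demonstrates the same trap:

```python
import socket

# Since Python 3.3, socket.error is literally the same class as OSError.
print(socket.error is OSError)          # True on Python 3.3+

# requests.exceptions.ConnectionError ultimately derives from OSError, so any
# OSError subclass reproduces the surprise:
class FakeConnectionError(OSError):
    pass

e = FakeConnectionError("Remote end closed connection without response")
print(isinstance(e, socket.error))      # True - the socket.error branch catches it
print(e.errno)                          # None - never set, so the errno check fails
```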

    Because of this, when we run under Python 3 and e is a requests.exception.ConnectionError, we’re unexpectedly entering this block intended for handling socket.error exceptions, and – because that block always returns, having the return False line that gets hit if the errno attribute check fails – we never actually reach the later block that’s intended to handle requests.exception.ConnectionError instances at all; we return False before we get there.

    There are a few different ways you could fix this – you could just drop the return False short-circuit line in the socket.error block, for instance, or change the ordering so the requests.exception.ConnectionError handling is done first. In the end I sent a pull request which drops the return False, but also drops the if isinstance(e, socket.error) checks (there’s another, for nested exceptions, later) entirely. Since socket.error is meant to be deprecated in Python 3.3+ we shouldn’t really use it, and we probably don’t need to – we can just rely on the errno attribute check alone. Whatever type the exception is, if it has an errno attribute and that attribute is errno.ECONNRESET, errno.ECONNABORTED, or errno.EPIPE, I think we can be pretty sure this is a connection error.
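The shape of that fix can be sketched as follows (an illustration of the approach, not the exact code in the pull request):

```python
import errno

# errno values that indicate the connection itself went away
CONN_ERRNOS = (errno.ECONNRESET, errno.ECONNABORTED, errno.EPIPE)

def is_conn_error(e):
    """Treat any exception carrying a connection-style errno as a connection
    error, without caring about the exception's exact type."""
    if getattr(e, "errno", None) in CONN_ERRNOS:
        return True
    # The real function also inspects nested exceptions; omitted here.
    return False
```

Whatever the exception's class, if its errno says the connection was reset, aborted, or the pipe broke, that is enough to classify it.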

    What’s the moral of this debugging tale? I guess it’s this: when porting from Python 2 to Python 3 (or doing anything similar to that), fixing the things that outright crash or obviously behave wrong is sometimes the easy part. Even if everything seems to be working fine on a simple test, it’s certainly possible that subtler issues like this could be lurking in the background, causing unexpected failures or (possibly worse) subtly incorrect behaviour. And of course, that’s just another reason to add to the big old “Why To Have A Really Good Test Suite” list!

    There’s also a ‘secondary moral’, I guess, and that’s this: predicting all the impacts of an interface change like this is hard. Remember the Python 3 docs mentioned a PEP associated with this change? Well, here it is. If you read it, it’s clear the proposers actually put quite a lot of effort into thinking about how existing code might be affected by the change, but it looks like they still didn’t consider a case like this. They talk about “Careless (or “naïve”) code” which “blindly catches any of OSError, IOError, socket.error, mmap.error, WindowsError, select.error without checking the errno attribute”, and about “Careful code is defined as code which, when catching any of the above exceptions, examines the errno attribute to determine the actual error condition and takes action depending on it” – and claim that “useful compatibility doesn’t alter the behaviour of careful exception-catching code”. However, Koji’s code here clearly matches their definition of “careful” code – it considers both the exception’s type, and the errno attribute, in making decisions – but because it is not just doing except socket.error as e or similar, but catching the exception elsewhere and then passing it to this function and using isinstance, it still gets tripped up by the change.

    So…the ur-moral, as always, is: software is hard!

    Fedora Participates in Google Code In 2018

    Posted by Fedora Community Blog on January 09, 2019 02:19 AM

    What is Google Code In?

    Pre-university students ages 13 to 17 are invited to take part in Google Code-in: Google’s global, online contest introducing teenagers to the world of open source development.
    With a wide variety of bite-sized tasks, it’s easy for beginners to jump in and get started no matter what skills they have.
    Fedora participated in GCI 2018 with great success. At the end of seven weeks, the org and the mentors chose winners, who get a trip to Google HQ for the GCI summit in June.
    Our mentors helped 125 participants, with our top 5 finalists and winners completing 110+ tasks between them and raising the total completed task count to 326.
    Thanks to our org admins and mentors, especially Bex, for their support. Congratulations to all the winners. This year’s winners are listed here.

    What Happened?

    This time we had tasks ranging from setting up a FAS account to writing Ansible playbooks, creating mDNS client/server demos, and kernel regression testing.
    Our mentors had a very busy seven weeks interacting with mentees around the world. It was a great learning experience to examine what worked for us and how we can do better next time. Moving forward, we are thinking of helping new mentees get started with Fedora as regular contributors.
    A Fedora Classroom session will be proposed to help students who want to keep contributing after the contest ends.

    Here’s what one of our winners had to say:

    My name is Alex Marginean, I’m 17 years old and I participated this year at Google Code-in 2018 by completing 25 tasks from the Fedora organization.
    I live in a small town from Romania but that didn’t stop me from participating at competitions because I love Computer Science and they help me a lot.

    Being a participant at Google Code-in introduced me to the world of open source,
    which was the best decision I made for myself because by creating or contributing to open source projects, I learned a lot of new helpful stuff.
    The Fedora community is great because there are a lot of people who know all sorts of stuff in a lot of different areas.
    Every time I got stuck there was either a student or a mentor in the Fedora group that was able to help me.
    I realised that the contest wasn’t only about coding or completing as many tasks as possible,
    but also about learning how to communicate with others.
    This helped me a lot to improve my communication skills in English and
    to make new friends who share the same interests as me.
    Now, after more than a month of hard work, the knowledge I gained has built the start of my career in Software Engineering.

    I personally recommend that anybody take part in Google Code-in with the Fedora organization,
    because it has some of the friendliest and most helpful mentors.

    1. Even though GCI has ended, we would like to keep helping students get started with contributions and keep expanding the number of new contributors,
      as well as helping students apply for GSoC and Outreachy in the coming years.
    2. Expanding the mentor list, as we would like to have people participating in every spectrum of the project and not just limited to a certain section.
    3. Hosting regular meetings between Org admins and mentors to understand where we can improve.
    4. Planning tasks well in advance, which gives us a lot of time to refine them as much as possible before they reach the students.

    The post Fedora Participates in Google Code In 2018 appeared first on Fedora Community Blog.

    Cockpit 185

    Posted by Cockpit Project on January 09, 2019 12:00 AM

    Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 185.

    Responsive dialogs

    Dialogs on the Networking and KDump pages, as well as the password dialogs on the Accounts page, are now responsive.

    network bridge dialog screenshot kdump crash dialog screenshot user password dialog screenshot

    Docker: Include kubernetes containers in graphs

    Kubernetes containers will now also be present in CPU/memory graphs.

    Kubernetes containers in graphs

    Try it out

    Cockpit 185 is available now:

    nightly builds are too fast and too slow

    Posted by Ken Dreyer on January 08, 2019 11:43 PM
    I love build systems. I love working on them because it's such a unique blend of development and operations. A great build system is a great foundation for delivering value to users.

    One term I often hear is "nightly build". As in, "Where can I download the nightly build?" or "Let's set up a nightly build."

    "Nightlies" is a concept from the time where you'd set up a cron job to build your code from source control. You just poll CVS or Subversion every 24 hours, and build whatever's there. Tack a datestamp on the end of the build artifact and you're good to go.

    In this post I want to talk about how "nightly" is almost always the wrong concept. They are too frequent, or else they are not frequent enough. Or if you're writing a catchy blog post title, they're too fast and too slow.

    Nightlies are too slow

    When you write code and test it, you want that feedback loop to be as tight as possible. Write code, save, compile, test - the faster these things happen, the faster your brain stays engaged.

    If you have to sit and wait a few minutes to get information back about whether your code is correct or your build process succeeded, you're going to context switch to something else and lose time when you forget to switch back.

    When we reach build processes that take hours, now we're in the "Meh, I'll check it when I'm back from lunch" territory. At that rate, you're probably only going to be running that process three or four times a day, max. Your workday is only eight hours, after all. The throughput for your changes drops through the floor.

    Now imagine extending that feedback loop even further, to a full 24 hours.  You've just arrived at the "nightly build".

    When that nightly build breaks, you have eight working hours to fix it and then you get to wait again for tomorrow morning when you find out the new problem.

    After a few days of this, you no longer arrive at work with the same positive mental energy. Your morning email inbox experience becomes a thing where you discover what has gone wrong during the night, because you never saw it go right during the daytime.

    Operational tempo slides further, because it feels like "everything takes so long around here." Teams lower their optimistic expectations that anything should ever happen quickly.

    I've seen several odd knock-on effects here.

    Sometimes what happens then is that you have multiple "nightlies" for a single day. One is the first broken nightly that ran in cron, and the others are multiple attempts where someone ran the script by hand trying to get it to pass. The "nightly" is no longer nightly. Odds are that those manual runs did not do everything exactly like the full cron job did. More confusion ensues across the organization.

    When we only run a big ugly task once at midnight, then we don't care strongly about how long it takes. We've removed a big incentive to pay down the tech debt and work on shortening the long tasks, because they always happen while we're asleep. The big ugly tasks get progressively longer and longer, until an important emergency happens, and we have to run the task during working hours and we're unable to deliver in a timely way.

    Another common papercut: someone will increase the frequency of the cron task so that it runs hourly, or every 20 minutes, instead of 24 hours. This is better, but unfortunately 20 minutes is still quite slow, and users will frequently multi-task away and forget to see the failure until hours or days have gone past.  There is also something maddeningly unclear about this type of every-couple-of-minutes scheduling. Is that cron job going to kick off at the top of the hour, or some other time? Did I just miss it and I have to wait the full 20 minute period, or will it happen sooner? Should I bother someone if nothing appears to be happening, or did I just do my clock math wrong? This user experience is particularly demoralizing.

    Increasing the cron task model's frequency also leads to the next problem, which is:

    Nightlies are too fast

    If you have a project with code that changes daily, then yep, you want to build it at least daily. But does your project change literally every day 365 days of every year? For most projects, the answer is no. Did any code really change on Saturday? Or Sunday? Not just one weekend, but every weekend?

    If we simply build every day (or even every weekday), this only works for projects that always have one or more changes every 24 hours, on to infinity. In the case where nothing has changed in the last 24 hours, then we are needlessly rebuilding for no reason. If your artifacts are multiple gigabytes, stored on highly available storage, that is a lot of duplicated disk space.

    There is also an impact to the rest of the pipeline here. If the QE team thinks they have to test every build, they may be wasting human effort and compute costs.

    The typical improvement in this case is to build some kind of polling in, like "Poll this GitHub repository every day and run a build only if there are changes from last time". Jenkins in particular has really helped spread this model, because it can do this out of the box.

    For small projects, it's usually trivial to answer "did anything change here?" For example, it's really easy to run "git fetch" and see if there are any new changes, and then build those.

    Sometimes your build process depends on many other things besides that single Git repository. For example, if you build a container that includes artifacts from several locations, then you will need to poll all of them to know if anything has changed. Many times those locations are not trivial to poll with a one-liner.

    Now you are in a poll-the-world model, asking yourself how to poll, what is a reasonable frequency to poll, and how annoyed will those administrators be if I hit their systems every 60 seconds?

    These questions lead to spending more engineering effort or taking shortcuts which the QE team must pay for later.

    What should we do instead?

    Instead of talking about "nightly builds", let's talk about "CI builds".

    Instead of a poll-the-world model, make the build systems event-driven.

    This requires having a really solid grasp of your inputs and outputs. For example: "my Jenkins job should fire if the code changes in Git *or* if one of the input binaries changes version, *and* it should feed its pass/fail status into these other three systems."

    If you don't know the input events for your process, research more about the system that is upstream of you, instead of simply configuring your system to poll it.

    Set the expectation that all the build pipeline processes for which you are responsible will happen immediately, without polling. This implicitly sets other expectations for all your other teams, particularly those upstream and downstream to you.

    For the dev teams feeding into your build system, they should expect actions to happen immediately. If a developer does not see the build system immediately respond to their changes, their first mental response should be "that's broken and we can fix or escalate it" instead of "it's just slow" or "it's just me".

    For QE teams that take output from your build system, you're communicating two things with an event-driven model. Firstly, when QE talk directly to a developer (skipping your role in the pipeline), and the developer says they've pushed some code, QE should immediately be able to see that the new code is building and is coming towards them. They should be checking the health of the pipeline as well, with positive expectations that they do not need to do complicated polling or involve you. Secondly, the fact that builds can arrive *at any time* means QE should set up their own automated triggers on your events, rather than polling you every 24 hours.

    Technical implementations

    Making all your tools event-driven is a long process, and in large systems it can take years. It's a culture shift as well as a technical shift.

    You can definitely go a long way by using GitHub's webhooks and running everything in a single Jenkins instance.

    When that no longer scales, you can run a message bus like RabbitMQ or ActiveMQ. At my current employer we have a company-wide message bus, and almost all the build and release tooling feeds into this bus. This lets each engineering team build operational pipelines that are loosely coupled from each other. There is an upward spiral effect: the more tools use the unified message bus, the more other tool authors want to use it. The message bus has strong management support because it is the backbone of our CI efforts.

    Even when the automated mechanisms like webhooks or a message bus are working well, it is a good idea to build fallback support for polling, in the off chance that messages do not get through. But polling should be the fallback position to get you past an emergency, not the norm.


    We already have to wait for many things with our computers. Don't make "wall clock time" one of those things.

    Don't build nightly. Build continuously on each change event.

    Pitivi 0.999

    Posted by Gwyn Ciesla on January 08, 2019 08:14 PM

    Body in subject. It’s on its way to Fedora 29. Get it. Test it. Give karma!



    Cambodia – Statistics

    Posted by Sirko Kemter on January 08, 2019 06:33 PM

    We have a new year, so it’s time to look back at what was achieved in the old one. For me this year was much shorter than 12 months, because I was forbidden by our leadership to work for Fedora. This of course had consequences for the results.
    First of all, during my research I figured out that there are some problems, so the data are a bit incorrect. The result from datagrepper shows just 32 FAS accounts, but there are actually 33. Also, a lot of the accounts added during last year’s translation sprint don’t have the “Egg” badge even though they have had a FAS account for a year now. Another problem is that the zanata group contains one member who has no FAS account. So I will clean up these problems during the next months, at least I hope so.
    So how does the statistic for Cambodia look like, first the amount of FAS accounts:

    One of the problems I have is that caching the datagrepper data takes very long, and if the connection breaks, I have to start again. So it is no fun to make these statistics. For last year I made a comparison with Vietnam as a neighboring country; just keep in mind that its population is 7 times bigger. For this year I decided not to do it again, as it takes a lot of time to get all the data.

    As for the CLA-only accounts, a lot of them were added during the ICT Camp in December, and they will soon change to CLA+ status when they start working. The unapproved groups will also go.


    So how big was the activity? The only indicator I have for that is the number of badges, but as said before, some were not awarded, so the result is not that big. The good thing is that developers are slowly getting more interested in Fedora; this year we had the first Copr build by a Cambodian.

    Besides making the name/brand of Fedora better known in Cambodia, one of my objectives is to drive the translation into Khmer forward. Apart from the Translation Meetup we held in January, there was no other one; Kuylim and Votey had no time from February to April, and at least they would be needed, as otherwise I might be sitting there alone. After that, we will see how it begins again. So there was not really any progress on the translation; Khmer people don’t work online the way we are used to, it works better event-driven.
    So how does the state of the translation look? First, the translation team size compared to other South East Asian countries:

    There are 5 new people in the translation team; they all joined during the ICT Camp back in December, and there is high interest in driving the translation forward. Kui might even be added as an indigenous language; let’s see. So what is the state of the translation? Of course, through the missing events, it shrank a bit. I tried to organize another sprint, but I was still searching for a venue for it (I have hopefully found one now).

    I like to compare with Albania here, as they started at nearly the same time. It is interesting to me that a team that doubled in size, with many more sprints, has such a low result compared to Cambodia. With the next sprint we will continue to work on “web” as the priority, as this helps to inform local people. I am not so sure whether it is a good idea to continue then with “docs”, as it is huge and only partly translated does not make full sense, or whether we continue with “rhel”, “priority”, and “upstream”, as those have fewer strings. This all depends a bit on the number of people showing up at the next sprint.

    Back in 2017 I took over the Facebook page, which had been created some time before, and now fill it with Fedora news, to have a communication channel. The mailing list is less successful so far, but it serves as a point of contact for all Cambodian contributors; during the sprints and join-workshops I made joining it mandatory. I have not used the mailing list for announcements so far, as there was nothing to announce; that will change with the next event.

    So how does the Facebook page results look like:

    The mass of the subscribers are really Cambodians, not, as in other cases, a lot of our own people. The average (not shared) post gets between 50-300 views. My absence from May until November cost several subscribers, as there was of course no news. Video posts get more attention than plain links or posts. But there is not much shareable content so far, or rather, I don’t have the time to spend hours searching for it. The activity on posts in the group has grown during the year; as the group grows and more activity comes in, this might grow further this year. Unfortunately Facebook only lets you look at the statistics for the last 28 days, and I don’t want to export the data month by month to follow the development. For me it’s a side theater.


    Objectives 2019

    • Translation Sprint 3 – as I was “off” for nearly half a year, I could not organize the annual sprint; it was too late to find a venue. So sprint 3 will happen in 2019 instead of 2018. For the next sprint, I might have a person who can take this work off me in the future. For number 4 that would mean I just have to support it, and if it works out, with number 5 this person will organize the sprints. I hope that works.
    • Reviving the Translation Meetups
    • Finish the web part of the translation – as mentioned before, this is the highest priority; I am still thinking about the next objective.
    • Starting Kui as a language team – there is interest in starting this to keep the language alive, but it is not easy: even though there are 250,000 Kui speakers, Kui is spoken across Thailand, Laos and Cambodia, and its speakers use the alphabets of those countries.
    • Revive GNOME and hunspell translation – one of the problems coming up now is that some idiots (namely SUSE) once paid for translation. GNOME now has only one translator left, marked as inactive, and the translation level is shrinking, which at a certain point will create the opposite of today's state (Fedora installed in English and used in Khmer). The same goes for some other free software projects, so this is a big task.
    • Release party with PNC – here too, damage was done by denying me the ability to work. I sent out a mail several months ago, and as the old partners have left the school I am starting from zero again; for Fedora 30 it will not happen. Hopefully it can be done for F31.
    • Barcamp – after not doing this event for 2 years, it is time to have a presence there again, at least something bigger than a talk, to keep awareness of Fedora alive. This might include printing materials this time; I am already working on that.

    If everything works out, the number of FAS accounts should be over 40 by the end of the year, so let's see.

    Fedora Firefox heads to updates with PGO/LTO.

    Posted by Martin Stransky on January 08, 2019 10:29 AM

    I’ve had lots of fun with GCC performance tuning at Fedora, but without many results. When Mozilla switched its official builds to clang, I considered doing the same, due to difficulties with the GCC PGO/LTO setup and the inferior speed of Fedora Firefox builds compared to Mozilla's official ones.

    That move woke up GCC fans to parry the threat. Lots of arguments about clang's insecurity and missing features were brought to that ticket. More importantly, upstream developer Honza Hubicka found and fixed a profile data generation bug (among others), and Jakub Jelinek worked out a GCC bug which caused Firefox to crash at startup.

    That effort helped me convince GCC to behave, and thanks to those two guys Fedora can offer GCC Firefox builds with PGO (Profile-Guided Optimization) and LTO (Link-Time Optimization).

    The new builds are waiting for you at Koji (Fedora 28, Fedora 29). Don't hesitate to take them for a test drive; I use Speedometer as a general browser responsiveness benchmark. You can also compare them with the official Mozilla builds, which are built with clang PGO/LTO.


    Fedora 29 : The figlet linux tool.

    Posted by mythcat on January 07, 2019 06:41 PM
    You can read about this Linux tool in the figlet manual:
    FIGlet prints its input using large characters (called ``FIGcharacters'') made up of ordinary screen characters (called ``sub-characters''). FIGlet output is generally reminiscent of the sort of ``signatures'' many people like to put at the end of e-mail and UseNet messages. It is also reminiscent of the output of some banner programs, although it is oriented normally, not sideways.
    Let's see some examples:
    [root@desk mythcat]# dnf install figlet
    Last metadata expiration check: 1:05:53 ago on Mon 07 Jan 2019 06:52:19 PM EET.
    Dependencies resolved.

    [mythcat@desk ~]$ figlet --h
    figlet: invalid option -- '-'
    Usage: figlet [ -cklnoprstvxDELNRSWX ] [ -d fontdirectory ]
    [ -f fontfile ] [ -m smushmode ] [ -w outputwidth ]
    [ -C controlfile ] [ -I infocode ] [ message ]

    [mythcat@desk ~]$ figlet -v
    FIGlet Copyright (C) 1991-2012 Glenn Chappell, Ian Chai, John Cowan,
    Christiaan Keet and Claudio Matsuoka
    Internet: <info@figlet.org>  Version: 2.2.5, date: 31 May 2012

    FIGlet, along with the various FIGlet fonts and documentation, may be
    freely copied and distributed.

    If you use FIGlet, please send an e-mail message to <info@figlet.org>.

    The latest version of FIGlet is available from the web site,

    Usage: figlet [ -cklnoprstvxDELNRSWX ] [ -d fontdirectory ]
    [ -f fontfile ] [ -m smushmode ] [ -w outputwidth ]
    [ -C controlfile ] [ -I infocode ] [ message ]
    The message is printed on the output as an ASCII-art banner.
    The tool's arguments can align the text to the left, center or right, or change the size and font.
    The simplest invocation can be this:
    [mythcat@desk ~]$ figlet 2019
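A couple of further invocations worth trying, based on the usage summary above: -c centers the output, -w sets the output width, and -k uses kerning instead of smushing. The demo is guarded so it degrades gracefully on machines without figlet installed:

```shell
# Center a banner in a 60-column width, then try kerned output.
if command -v figlet >/dev/null 2>&1; then
  figlet -c -w 60 "Fedora"
  figlet -k "2019"     # -k switches smushing off in favor of kerning
else
  echo "figlet is not installed; install it with: dnf install figlet"
fi
```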

    Kiwi TCMS 6.4

    Posted by Kiwi TCMS on January 07, 2019 01:25 PM

    We're happy to announce Kiwi TCMS version 6.4! This is a security, improvement and bug-fix update that includes new versions of Django, Patternfly and other dependencies. You can explore everything at https://demo.kiwitcms.org!

    Supported upgrade paths:

    5.3   (or older) -> 5.3.1
    5.3.1 (or newer) -> 6.0.1
    6.0.1            -> 6.1
    6.1              -> 6.1.1
    6.1.1            -> 6.2 (or newer)

    Docker images:

    kiwitcms/kiwi       latest  39fcb88182bb    963.4 MB
    kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
    kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
    kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
    kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
    kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

    Changes since Kiwi TCMS 6.3


    • Update Django from 2.1.4 to 2.1.5, which deals with CVE-2019-3498: Content spoofing possibility in the default 404 page
    • Update Patternfly to version 3.59.0, which deals with XSS issue in bootstrap. See CVE-2018-14041
    • By default session cookies will expire after 24 hours. This can be controlled via the SESSION_COOKIE_AGE setting. Fixes Issue #556


    • Update mysqlclient from 1.3.13 to 1.3.14
    • Update python-gitlab from 1.6.0 to 1.7.0
    • Update django-simple-history from 2.5.1 to 2.6.0
    • Update pygithub from 1.43.3 to 1.43.4
    • New API method TestCase.remove(). Initially requested as SO #53844380
    • Drop down select widgets in Patternfly pages are now styled with bootstrap-select, giving them a more consistent look and feel with the rest of the page (Anton Sankov)
    • Create new TestPlan page now includes toggles to control notifications and whether or not the test plan is active. This was previously available only in edit page (Anton Sankov)
    • By default TestPlan notification toggles are turned on. Previously they were off (Anton Sankov)
    • Create and Edit TestPlan pages now look the same (Anton Sankov)
    • Kiwi TCMS is now accepting donations via Open Collective

    Removed functionality

    • Remove TestPlan page -> Run menu -> Add cases to run action. This is the same as TestRun page -> Cases menu -> Add action
    • Legacy reports will be removed after 1st March 2019. Provide your feedback in Issue #657
    • The /run/ URL path has been merged with /runs/ due to configuration refactoring. This may break your bookmarks or permalinks!

    Bug fixes

    • Don't traceback if markdown text is None. Originally reported as SO #53662887
    • Show loading spinner when searching. Fixes Issue #653
    • Quick fix: when viewing TestPlan cases make TC summary link to the test case. Previously the summary column was a link to nowhere.


    • Pylint fixes
    • New and updated internal linters
    • Refactor testplans.views.new to class based view (Anton Sankov)
    • Refactor TestCase -> Bugs tab -> Remove to JSON-RPC. References Issue #18
    • Refactor removeCaseRunBug() to JSON-RPC, References Issue #18
    • Remove unused render_form() methods
    • Remove unnecessary string-to-int conversion (Ivaylo Ivanov)
    • Remove obsolete label fields. References Issue #652 (Anton Sankov)
    • Remove JavaScript that duplicates requestOperationUponFilteredCases()
    • Remove QuerySetIterationProxy class - not used anymore

    How to upgrade

    If you are using Kiwi TCMS as a Docker container then:

    cd Kiwi/
    git pull
    docker-compose down
    docker pull kiwitcms/kiwi
    docker pull centos/mariadb
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate

    Don't forget to backup before upgrade!
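One possible backup sketch before pulling the new images. The container name kiwi_db and the database name and credentials below are assumptions based on a default docker-compose deployment, so adjust them to match yours:

```shell
# Dump the Kiwi TCMS database to a dated SQL file before upgrading.
# kiwi_db / kiwi / kiwi are assumed names and credentials, not a given.
if docker ps --format '{{.Names}}' 2>/dev/null | grep -q '^kiwi_db$'; then
  docker exec kiwi_db mysqldump --user=kiwi --password=kiwi kiwi \
    > "kiwi-backup-$(date +%F).sql"
else
  echo "kiwi_db container not running; nothing to back up"
fi
```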

    WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

    # starting from an older Kiwi TCMS version
    docker-compose down
    docker pull kiwitcms/kiwi:<next_upgrade_version>
    edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
    docker-compose up -d
    docker exec -it kiwi_web /Kiwi/manage.py migrate
    # repeat until you have reached latest

    Happy testing!

    Debian's human rights paradox

    Posted by Daniel Pocock on January 07, 2019 11:46 AM

    It all started with a non-native-English speaker choosing the wrong pronoun in reference to a developer who identifies as non-binary. What, then, is the basis for this concern? Why do we give a damn about it?

    Is it because Sage Sharp is a great friend of Debian? Or is it because we would have the same concern for all LGBTQ+ people? In other words, is it about egos or is it about principles?

    I suspect and hope most people would agree it is about principles. We would expect the same respect to be shown referring to any person from a minority even if they have no relation to Debian whatsoever.

    If it is about principles, then, do we need to identify the principles that guide us, to ensure consistency in decision making? Recent posts on debian-project suggested human rights may not apply in Debian as we are not a government; the same attitude has been repeated more strongly in a private email of the Debian account managers (DAM):

    This is not involving anything from the universal declaration of human rights. We are simply a project of volunteers which is free to chose its members as it wishes.

    If that is true, what is the basis to protect Sage Sharp's rights? If that is true, then we have to go back to the question, why was any action taken at all?

    In fact, if human rights principles are not present, what is the basis for Debian's anti-harassment team and the Code of Conduct? If we don't want to be guided by human rights principles, then could we take Norbert Preining's advice and dispense with those things?

    Yet I suspect that is not about to happen. People may prefer to understand them better and improve upon the way they are designed and used.

    When the Trump administration rescinded guidelines protecting transgender rights in education, they nonetheless allowed the following phrase in the new guidelines:

    All schools must ensure that students, including L.G.B.T. students, are able to learn and thrive in a safe environment

    Let's transpose that into the Debian context:

    Our non-LGBT developers need to be able to learn about LGBT issues, making mistakes along the way, because that is part of learning. To thrive, they should not fear making mistakes. People learn in different ways too, if one method of helping them doesn't work, we need to try others.

    We will continue to see people do things like deadnaming by mistake and we need to be able to deal with it patiently each time it happens. Given that anybody can post to the Internet and social media, there are a plethora of bad examples out there and people may not realize they are doing anything wrong. But if our reactions appear too strong, we run the risk that they never learn and just continue doing the same thing somewhere else.

    Thousands of messages have been exchanged, thousands of man hours consumed reviewing recent actions that have an impact on individual members. Would those decisions have been easier to defend, or mistakes more easily avoided, if human rights principles were involved more explicitly from the beginning?

    Let's consider some of Debian's guiding principles, the Debian Free Software Guidelines, which relate to intellectual property. It turns out intellectual property is not a Silicon Valley buzzword, it is a human right, firmly enshrined in article 27 of the declaration. Gotcha. Debian has been promoting human rights all along when we make a distinction between code that belongs in main and code that belongs in non-free.

    So why do we put the rights of users on a pinnacle like this but do so little to document and protect the rights of members and contributors?

    Another significant topic of debate now is the right to be heard. It appears that the Debian account managers consider it acceptable to summarily expel members without giving a member any evidence and without respecting the member's right of reply.

    The Debian account managers have acknowledged they operate this way.

    Yet without hearing from a member, they run the risk of operating on incomplete or inaccurate evidence, making decisions that could both be wrong and bring the project into disrepute.

    In fact, when evidence is taken in secret, without review by any party, including the accused, we run the risk of falling back to a situation where decisions are made based on egos rather than principles. Ironically, the Debian project's widely respected reproducible builds effort aims to ensure a malicious actor can't insert malicious code into a binary package. A disciplinary process operating without any oversight or transparency may be just as attractive for infiltration by the same malicious actors (name a state) who would infiltrate Debian's code. Fixing the process is just as imperative as the Reproducible Builds effort.

    By jumping head-first into these processes, the account managers may have also failed to act on information related to other human rights. Imagine if they made the mistake of subjecting somebody to a degrading or humiliating interaction, such as "demotion", at a time of personal tragedy. Society at large would find such a proceeding and the outcome quite repulsive. It may be unreasonably harmful to the person concerned and do severe and avoidable damage to the overall relationship: there are many times when it would be completely inappropriate for Debian to blindly send a member a long written list of their perceived defects, with no trace of empathy or compassion. There is never a good time to start gossip about a member but at certain times, it may be even more outrageous to do so. How can Debian target a member like that at a time of vulnerability and then talk about being a "safe space"? Is sending emails like this an abusive practice itself?

    Wouldn't it be paradoxical to see the Debian account managers taking action against a member accused of violating gender identity rights while simultaneously the account managers are failing to observe due process and violating another member's rights in a multitude of different ways?

    In fact, the violation of rights in the latter case may be far more outrageous than what may be a blogger's mistake because it has occurred at an institutional level, rejecting every opportunity to meet the person in question for almost a year and multiple pleas to act humanely and consider facts.

    We wouldn't dismiss Sage Sharp's gender identity rights as "minutiae", yet there is no doubt whatsoever that dismissing another member's circumstances as "minutiae" in this specific case was extraordinarily callous and grossly offensive. People wondering where mutual trust and respect was damaged may wish to focus on interactions like that and ignore everything said since then.

    Is it right to pick and choose human rights when convenient and ignore them when it isn't convenient, even in the same decision making process? Or is that the very reason why there is now so much confusion and frustration?

    So there is a new reason to heed the UN HRC's call to Stand up for human rights: doing so may help Debian roll back the current mistakes more quickly, avoid repeating them in future and avoid the reputation damage that would occur if a vote was taken on an issue that both contravenes a member's human rights and appears manifestly abusive and callous to society at large.

    It is my view that any processes that started without respecting the rights of the member should be rolled back and furthermore, everything that has happened since those rights were violated can be disregarded as part of a process to re-establish mutual trust.


    Chromium on Fedora finally gets VAAPI support!

    Posted by Fedora Magazine on January 07, 2019 09:00 AM

    Do you like playing videos in your web browser? Well, good news: the Chromium web browser available in Fedora now gets Video Acceleration API (VA-API) support. That makes video playback much smoother while using significantly fewer resources.

    A little bit of history

    Chromium with a VAAPI patch was already available on other distributions, but this was not the case with Fedora. I really wanted hardware acceleration, but my love for Fedora was holding me back. Then, with sheer willpower, I joined Fedora and started maintaining a package in COPR.

    I am not really a distro hopper but a DE hopper; I usually jump from GNOME to KDE and back depending on my mood. I started maintaining Chromium with the VAAPI patch on COPR, using the official patch which had been submitted upstream for code review. I had very little hope that it would get merged; the patch was outdated and the try jobs were failing at that time.

    After six months, the Chromium upstream maintainers stated that they were not interested in including this patch. So I started working on my own patch, using the official patch as a reference. My patch uses the existing flags that other operating systems use instead of creating a new flag just for experimentation.

    screenshot showing chromium uses video engine

    Chromium uses AMDGPU’s UVD engine while playing a video

    chromium's flag screenshot

    Chromium uses Existing flags on Fedora

    Effects of the VAAPI patch

    Chromium with this patch was extremely stable on both of my machines, which both have AMD GPUs. Video playback is smooth, and overall power savings improved as well.

    Comparison with/without VAAPI

    Credits: Tobias Wolfshappen

    As you can see, Chromium with the VAAPI patch takes up significantly fewer resources than Chromium without the patch and Firefox. CPU usage went down from 120% to 10%. Playback is smooth, with no stuttering.

    VA-API patch in chromium for Fedora

    It was then that Fedora's former Engineering Manager at Red Hat and Chromium maintainer, Tom Callaway, finally recognized the VAAPI patch and decided to include it in Fedora's Chromium browser. Fedora becomes the second distribution to include the VAAPI patch in its official Chromium package.
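If you want to check whether your own GPU and driver stack expose VA-API before trying the patched browser, the vainfo utility from the libva-utils package is the usual first stop (a hedged sketch; the output depends entirely on your hardware):

```shell
# List the VA-API driver in use and the codec profiles it supports.
if command -v vainfo >/dev/null 2>&1; then
  vainfo || true   # may fail on headless machines without a usable GPU
else
  echo "vainfo not found; install it with: sudo dnf install libva-utils"
fi
```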

    Episode 128 - Australia's encryption backdoor bill

    Posted by Open Source Security Podcast on January 07, 2019 12:41 AM
    Josh and Kurt talk about Australia's recently passed encryption bill. What is the law that was passed, what does it mean, and what are the possible outcomes? The show notes contain a flow chart of possible outcomes.

    <iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/8156204/height/90/theme/custom/thumbnail/yes/preload/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

    Show Notes

      NeuroFedora update: 2019 week 1

      Posted by Ankur Sinha "FranciscoD" on January 06, 2019 09:04 PM

      Happy new year, everyone! In week 1 of the year 2019:

      NeuroFedora documentation is available on the Fedora documentation website. Feedback is always welcome. You can get in touch with us here.

      Ansible Bender in OKD #2

      Posted by Tomas Tomecek on January 06, 2019 12:08 PM


      PoC Definition: Can ansible-bender run inside an OpenShift origin pod?
      Answer: Yes!


      $ oc exec -ti ab bash
      [root@ab /]# id
      uid=0(root) gid=0(root) groups=0(root)
      [root@ab /]# mount | grep containers
      /dev/mapper/luks-460d57c9-ef38-46d4-9bb1-f31b6c0feef5 on /var/lib/containers type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
      [root@ab ansible-bender]# ansible-bender build ./tests/data/basic_playbook.yaml fedora:29 test-image
      PLAY [all] ***************************************************************************
      TASK [Gathering Facts] ***************************************************************
      ok: [test-image-20190105-231401150707-cont]
      TASK [print local env vars] **********************************************************
      ok: [test-image-20190105-231401150707-cont] => {
          "msg": "/tmp/abogz7g00c/ansible.cfg,,"
      caching the task result in an image 'test-image-20191405-231405'
      TASK [print all remote env vars] *****************************************************
      ok: [test-image-20190105-231401150707-cont] => {
          "msg": {
              "DISTTAG": "f29container",
              "FBR": "f29",
              "FGC": "f29",
              "HOME": "/root",
              "LC_CTYPE": "C.UTF-8",
              "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
              "PWD": "/",
              "SHLVL": "1",
              "_": "/usr/bin/python3"
      caching the task result in an image 'test-image-20191405-231406'
      TASK [Run a sample command] **********************************************************
      changed: [test-image-20190105-231401150707-cont]
      caching the task result in an image 'test-image-20191405-231409'
      TASK [create a file] *****************************************************************
      changed: [test-image-20190105-231401150707-cont]
      caching the task result in an image 'test-image-20191405-231413'
      PLAY RECAP ***************************************************************************
      test-image-20190105-231401150707-cont : ok=5    changed=2    unreachable=0    failed=0
      Getting image source signatures
      Skipping fetch of repeat blob sha256:29395e07566574e3bae3a899a7859cdc18fca5accef7b133670dbc7c9762f672
      Skipping fetch of repeat blob sha256:a2d6fee87decc48d48c56895ff47c885b19d5e5e7eec920beaad6edd18628cd1
      Copying config sha256:8c9be0ba2a39a00edc099cbf143b4a6088e7a23b40fe287a9993f72346497ce7
       0 B / 1.54 KiB [--------------------------------------------------------------]
       1.54 KiB / 1.54 KiB [======================================================] 0s
      Writing manifest to image destination
      Storing signatures
      Image 'test-image' was built successfully \o/

      If you read my former blog post, you saw that I bumped into several issues. With help from Ben Parees, I was able to tackle most of them. As I stated in the other post, I decided to make /var/lib/containers/ an OpenShift volume (inspired by origin’s git master), so that I can use the overlay backend. This worked so smoothly that even the whole test suite started passing while running in the pod. The only problem is that I had to abandon the customStrategy build config, since it doesn’t allow you to change the build pod.

      There are two remaining problems to solve:

      1. The need for privileged.
      2. Deeper integration.

      For the second point, imagine that you’d be able to define your build, specifying Ansible as a source:

      kind: "BuildConfig"
      apiVersion: "vWishful"
      metadata:
        name: "sample-build"
      spec:
        source:
          git:
            uri: "https://github.com/a-repo/with-my-playbooks"
        strategy:
          ansibleStrategy:
            from:
              kind: "ImageStreamTag"
              name: "fedora:29"
            playbookPath: "my/favorite/playbook.yaml"
        output:
          to:
            kind: "ImageStreamTag"
            name: "my-lovely-image:latest"

      I would certainly like that.

      Fedora BTRFS+Snapper - The Fedora 29 Edition

      Posted by Dusty Mabe on January 06, 2019 12:00 AM
      History

      It’s 2019 and I’m just getting around to converting my desktop system to Fedora 29. For my work laptop I’ve moved on to Fedora Silverblue (previously known as Atomic Workstation) and will probably move my desktop there soon too, as I’ve had a good experience so far. For now I’ll stick with this old BTRFS+snapper setup on my desktop system, where I am able to snapshot and roll back the entire system by leveraging BTRFS snapshots and a tool called snapper.
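The core of that snapshot workflow looks roughly like this (a hedged sketch; the config name "root" is snapper's usual default, and the commands need root privileges plus an existing snapper config to do anything real):

```shell
# Take a snapshot before a risky change, then list the snapshots that
# are available to roll back to. Guarded so this is a no-op elsewhere.
if command -v snapper >/dev/null 2>&1; then
  snapper -c root create --description "before system update" || true
  snapper -c root list || true
else
  echo "snapper is not installed"
fi
```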

      Moving to Gitlab pages

      Posted by Robbi Nespu on January 05, 2019 04:00 PM

      Dear readers, there is an announcement.

      Note: I am moving to Gitlab services to host my Jekyll static blog, which means I will continue to write blog posts at https://robbinespu.gitlab.io. I will keep the old posts here and start fresh there. See you there!

      I have started to like Gitlab features such as the WebIDE, CI pipelines, snippets and more. Everything looks more powerful :)

      FPgM report: 2019-01

      Posted by Fedora Community Blog on January 04, 2019 09:44 PM
      Fedora Program Manager weekly report on Fedora Project development and progress

      Here’s your report of what has happened in Fedora Program Management this week.

      I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.


      Upcoming meetings

      Fedora 30 Status

      Fedora 30 Change Proposal deadlines are approaching.

      • Change proposals requiring a mass rebuild or for System-Wide Changes are due 2019-01-08.
      • Change proposals for Self-Contained Changes are due 2019-01-29.

      Fedora 30 includes a Change that will turn ambiguous python shebangs into an error.  A list of failing builds is available on Taskotron.

      Fedora 30 includes a Change that will remove glibc langpacks from the buildroot. See the devel mailing list for more information and impacted packages.



      Submitted to FESCo

      Approved by FESCo


      The post FPgM report: 2019-01 appeared first on Fedora Community Blog.

      Managing dotfiles with rcm

      Posted by Fedora Magazine on January 04, 2019 08:00 AM

      A hallmark feature of many GNU/Linux programs is the easy-to-edit configuration file. Nearly all common free software programs store configuration settings inside a plain text file, often in a structured format like JSON, YAML or “INI-like”. These configuration files are frequently found hidden inside a user’s home directory. However, a basic ls won’t reveal them. UNIX standards require that any file or directory name that begins with a period (or “dot”) is considered “hidden” and will not be listed in directory listings unless requested by the user. For example, to list all files using the ls program, pass the -a command-line option.

      Over time, these configuration files become highly customized, and managing them becomes increasingly more challenging as time goes on. Not only that, but keeping them synchronized between multiple computers is a common challenge in large organizations. Finally, many users find a sense of pride in their unique configuration settings and want an easy way to share them with friends. That’s where rcm steps in.

      rcm is an “rc” file management suite (“rc” is another convention for naming configuration files that has been adopted by some GNU/Linux programs like screen or bash). rcm provides a suite of commands to manage and list the files it tracks. Install rcm using dnf.

      Getting started

      By default, rcm uses ~/.dotfiles for storing all the dotfiles it manages. A managed dotfile is actually stored inside ~/.dotfiles, and a symlink is placed in the expected file’s location. For example, if ~/.bashrc is tracked by rcm, a long listing would look like this.

      [link@localhost ~]$ ls -l ~/.bashrc
      lrwxrwxrwx. 1 link link 27 Dec 16 05:19 .bashrc -> /home/link/.dotfiles/bashrc
      [link@localhost ~]$
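Conceptually, what rcm does for each tracked file can be simulated with a plain symlink. This is an illustration only, using throwaway paths under /tmp; rcm adds bookkeeping, host/tag handling and the dot-prefix translation on top:

```shell
# Simulate "manage ~/.bashrc with rcm": the real file lives in the
# dotfiles store, and the home location becomes a symlink pointing at it.
mkdir -p /tmp/rcm-demo/home/.dotfiles
echo 'export EDITOR=vim' > /tmp/rcm-demo/home/.dotfiles/bashrc
ln -sf /tmp/rcm-demo/home/.dotfiles/bashrc /tmp/rcm-demo/home/.bashrc
readlink /tmp/rcm-demo/home/.bashrc
```

Editing either path edits the same bytes, which is why a git repository rooted at the store directory captures every tracked file.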

      rcm consists of 4 commands:

      • mkrc – convert a file into a dotfile managed by rcm
      • lsrc – list files managed by rcm
      • rcup – synchronize dotfiles managed by rcm
      • rcdn – remove all the symlinks managed by rcm

      Share bashrc across two computers

      It is not uncommon today for a user to have shell accounts on more than one computer. Keeping dotfiles synchronized between those computers can be a challenge. This scenario will present one possible solution, using only rcm and git.

      First, convert (or “bless”) a file into a dotfile managed by rcm with mkrc.

      [link@localhost ~]$ mkrc -v ~/.bashrc
      '/home/link/.bashrc' -> '/home/link/.dotfiles/bashrc'
      '/home/link/.dotfiles/bashrc' -> '/home/link/.bashrc'
      [link@localhost ~]$

      Next, verify the listings are correct with lsrc.

      [link@localhost ~]$ lsrc
      [link@localhost ~]$

      Now create a git repository inside ~/.dotfiles and set up an accessible remote repository using your choice of hosted git repositories. Commit the bashrc file and push a new branch.

      [link@localhost ~]$ cd ~/.dotfiles
      [link@localhost .dotfiles]$ git init
      Initialized empty Git repository in /home/link/.dotfiles/.git/
      [link@localhost .dotfiles]$ git remote add origin git@github.com:linkdupont/dotfiles.git
      [link@localhost .dotfiles]$ git add bashrc
      [link@localhost .dotfiles]$ git commit -m "initial commit"
      [master (root-commit) b54406b] initial commit
      1 file changed, 15 insertions(+)
      create mode 100644 bashrc
      [link@localhost .dotfiles]$ git push -u origin master
      [link@localhost .dotfiles]$

      On the second machine, clone this repository into ~/.dotfiles.

      [link@remotehost ~]$ git clone git@github.com:linkdupont/dotfiles.git ~/.dotfiles
      [link@remotehost ~]$

      Now update the symlinks managed by rcm with rcup.

      [link@remotehost ~]$ rcup -v
      replacing identical but unlinked /home/link/.bashrc
      removed '/home/link/.bashrc'
      '/home/link/.dotfiles/bashrc' -> '/home/link/.bashrc'
      [link@remotehost ~]$

      Overwrite the existing ~/.bashrc (if it exists) and restart the shell.

      That’s it!  The host-specific option (-o) is a useful addition to the scenario above. And as always, be sure to read the manpages; they contain a wealth of example commands.

      F29-20190103 updated isos

      Posted by Ben Williams on January 04, 2019 02:04 AM

      The Fedora Respins SIG is pleased to announce the latest release of Updated F29-20190103 Live ISOs, carrying the 4.19.12-301 kernel.

      This set of updated ISOs will save about 920 MB of updates after install (for new installs).

      We would also like to thank Fedora QA for running the following tests on our ISOs:


      These can be found at http://tinyurl.com/Live-respins . We would also like to thank the following IRC nicks for helping to test these ISOs: dowdle, Southern_Gentlem, vdamewood and adingman.

      As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.


      2018 blog review

      Posted by Kushal Das on January 04, 2019 02:03 AM

      Last year, I made sure that I spent more time writing, mostly by waking up early before anyone else in the house. The total number of posts was 60, but that number came down to 32 in 2018. The number of page views, though, was 88% of 2017's.

      I managed to wake up early on most days, but I spent that time reading and experimenting with various tools/projects; SecureDrop, Tor Project and Qubes OS were at the top of that list. I am also spending more time with books, though now the big problem is finding space at home to keep them properly.

      I never wrote regularly throughout the year. If you look at the dates I published, you will find that sometimes I managed to publish regularly for a month and then vanished again for some time.

      There was a whole paragraph here about why I did not write and why I vanished, but I deleted it before posting.

      You can read the last year’s post on the same topic here.

      Easy PXE boot testing with only HTTP using iPXE and libvirt

      Posted by Dusty Mabe on January 04, 2019 12:00 AM
      Introduction Occasionally I have a need to test out a PXE install workflow. All of this is super easy if you have a permanent PXE infrastructure you maintain which traditionally has consisted of DHCP, TFTP and HTTP/FTP servers. What if I just have my laptop and want to test something in a VM? It turns out it’s pretty easy to do using libvirt and a simple http server. In the steps below I walk through setting up libvirt to point to a web server for PXE booting that has been set up with all the files needed for testing out a PXE install workflow.

      Fedora Council December 2018 Hackfest Report

      Posted by Fedora Community Blog on January 03, 2019 09:58 PM

      In December, the Fedora Council met in Minneapolis, Minnesota for several days of meetings. With the holidays now behind us, here’s our summary of what happened.

      Strategic direction update

      The most important part of the meeting was our update to Fedora’s strategic direction. You may have read the Community Blog post about this already. While the wording — or at least the fact that we’ve written it down — may be new, the ideas aren’t. This document represents the next step from the efforts that began with Fedora.next back in 2013. Our goal is to allow members of the Fedora community to build solutions that focus on the specific requirements of their target users. This means, among other things, a more decomposed and self-service build process.

      The earlier post generated a lot of discussion and feedback. I’m planning a series of Community Blog articles summarizing and responding to that feedback — stay tuned!

      Objective updates

      We also heard from three of the four current Objective leads. (Our Internet of Things lead, Peter Robinson, couldn’t make the meeting, unfortunately.) The CI/CD Objective is wrapping up and Dominik Perpeet will be publishing a summary soon. The Modularity Objective is also ending its current phase. Modules for Everyone was a key feature of Fedora 29, and the Modularity team is continuing to improve the technical implementation as well as make it more approachable for community contributors. Both Objective leads will be proposing next steps to the Council in early 2019.

      The Lifecycle Objective is just getting started. It has two goals: make the release process scale and allow & encourage more community ownership for deliverables. This ties in tightly to the strategy update — this work is required to get us to where we want to be. Paul Frields is working to gauge what’s needed to implement this now, including the proposed long cycle for Fedora 31 (discussed on the devel mailing list). The Council in general would prefer to keep to the regular six-month cycle.


      The Mindshare team previously adopted a policy of spending up to $100 for release parties with minimal approval required. This has been successful, and the Council would like Mindshare to build on this with further investment. We agreed that Mindshare should adopt a policy of allowing up to $150 for activities that promote the use of Fedora solutions in communities. This could include release parties, web hosting, or other relevant activities. We want to encourage experimentation, so we’re not requiring the activities to be successful to get reimbursed — they just need to be successful to get future funding. The Council’s goal is to have 100 of these proposals funded in FY2020 (which starts in March of 2019). In order to encourage funding these proposals throughout the year, unallocated funds from this budget will be pulled into the Council budget at the end of each fiscal quarter. Look for guidance from Mindshare on how to make these requests soon.


      We’ve started experimenting with using Discourse as the asynchronous communication tool for some teams. For synchronous communication, some teams are using Telegram in addition to — or instead of — IRC. Each platform has strengths and weaknesses. After discussion, the Council came to the conclusion that communication fragmentation is unavoidable. In a project as big as Fedora, people work in different ways and have different preferences. We leave it to each team to decide what communication methods work best for their team.

      To help connect everyone together despite this, we have requested a central project management service from the Infrastructure team — probably Taiga, although we’re asking the team to also look at GitLab for this purpose. We’ll have a dedicated instance, likely hosted, and we ask each team to have a minimum presence on that tool, whether they use it otherwise or not. The presence should, at a minimum, indicate the team’s communication methods for synchronous and asynchronous communication and where project information may be found if not in the shared tool. That way, there will be an automatically-curated list of active teams. (See https://tree.taiga.io/discover for an idea of what this would look like — imagine each project there is a Fedora team.)


      The success of our strategy depends on improvements to our infrastructure. The Infrastructure team has limited resources, so we need to ensure they’re able to work on areas that add the most value to the project. This means a shift away from running all layers of the stack and focusing more on application management. The goal is to have the Infra team administering applications, not low-level infrastructure. (Even if that makes the team name confusing — sorry!) We want agility in our applications and deployment. We want drive-by contributors to be able to realistically contribute to the infrastructure team.

      We also talked about GitHub. Ideally, we want everything to be on open source services (e.g. Taiga, Pagure, or GitLab). But, as a pragmatic matter, we recognize that GitHub has a huge network effect — there are millions of users and developers there, and millions of open source and free software projects hosted there, including software that’s fundamental to the Fedora operating system. We’d like better integration and syncing with tools like Pagure to give access to that network effect on all-free software, but we also know that there isn’t a lot of developer time to make and maintain those kind of features. Therefore, we’re willing to accept people in Fedora hosting their subprojects on GitHub. We’ve got to focus on what we do that’s unique (and only do things which are unique when we have a special need to meet our project goals). Git hosting is not one of those things.

      General Council business

      The Council wants to make it clear that community input is welcome on Council matters. Members of the Fedora community may provide non-binding votes on Council tickets and participate in meetings. Speaking of meetings, we’re replacing the Subproject report meetings with regular updates from the FESCo, Mindshare, and Diversity & Inclusion representatives. This should help provide better visibility into those organizations for the Council and the community at large.

      We will also move all Council policies out of the wiki to docs.fedoraproject.org. The Council will use the wiki only as a scratch space for works in progress. Durable documentation will live on the docs site. We encourage other teams to consider doing the same.

      Meeting Minutes

      We didn’t actually conduct the meeting in IRC, but we took minutes in the same way. Here’s our detailed record.

      #startmeeting Fedora Council Hackfest 2018

      #chair mattdm bcotton tyll contyk dperpeet langdon dgilmore bex jonatoni sumantro stickster

      #topic Mission overview + Strategic framework

      #info Community Members are encouraged to contribute to the decision process with non-binding votes

      #link https://qz.com/work/1468580/the-four-layers-of-communication-in-a-functional-team/

      #info mattdm shows his favorite graph

      #topic Fedora Project Strategy: “How do we make our mission real?”

      #action Dominik will speak more loudly

      #agreed Council will replace subproject report meetings with updates from FESCo, Mindshare, and D&I rep

      #agreed Council adopts the strategy proposal (+9, 0, -0)

      #action mattdm to post the strategy proposal to commblog

      #agreed Council will make a final vote on the strategy proposal on 9 January 2019 (+8,0,0)

      #topic Moving Council Policies to Docs

      #agreed Council will move all Council policies and other durable Council documents from the Wiki to docs.fedoraproject.org and remove old wiki pages. (8,0,-0)

      #info policies should be kept in a repo separate from other documents for ease of watching

      #topic Objective Update: CI/CD


      #action dperpeet to write up final objective report for publication on Community Blog by January 1

      #action dperpeet to propose a new Objective for future CI/CD work

      #info the new Objective should include working with other stakeholder groups including IoT and SilverBlue

      #topic Objective Update: Modularity

      #action langdon to propose a new Objective for future Modularity work

      #topic Objective Update: Lifecycle

      #topic $150 release parties

      #agreed Mindshare should move the easy process to $150 and encourage more non-RP uses (+9,0,-0)

      #agreed: Mindshare should start providing $150 base support for solutions to help them grow (+9,0,-0)

      #agreed: Mindshare should start attracting larger requests and develop a process. These requests are judged using Council provided Fedora Project strategy as guidance. (+9,0,-0)

      #agreed: Mindshare should target at least 100 $150 events in FY20 (+9,0,-0)

      #agreed Unspent budget allocated to the $150 event program will be pulled into the Council budget at the end of each fiscal quarter beginning with FY20 (+10,0,-0)

      #topic Localization

      #action bex to begin a conversation in the translation community about a platform that meets the needs of the translation workflow

      #topic Logo

      #info Adam Miller will not get a tattoo with our current logo because people will ask him why he has a Facebook tattoo

      #info The design team is working on getting a proposal to the community to vote soon so we can select the new one by January 31 in final form for production

      #topic Infrastructure

      #agreed The Council supports greater efficiency in the infrastructure to allow more to be done, even when this means that we move away from self-hosted or self-maintained infrastructure. (+9,0,-0)

      #agreed The Fedora Project wants to advance free and open source software and as a pragmatic matter we recognize that some infrastructure needs may be best served by using closed source or non-free tools today. Therefore the Council is willing to accept closed source or non-free tools in Fedora’s infrastructure where free and open source tools are not viable or not available. (+9,0,-0)

      #action contyk, FESCo to work with Infra to examine current applications and determine: 1. which applications can be moved out of the datacenter immediately or in the short term, 2. Which applications have industry-standard open source or proprietary alternatives that we could move to.

      #topic Communication

      #agreed Fedora will offer a central place for teams and SIGs to be discoverable, do project management, etc. Having a landing page will be a requirement for all teams and SIGs in Fedora. (+10, 0, -0)

      #agreed The Council will ask the Infrastructure team to evaluate providing the central place as Taiga versus Gitlab based on requirements provided by Council. (+10, 0, -0)

      #action mattdm to write requirements doc for these two things above.

      #agreed The Fedora Council embraces fragmentation in our communication platforms — this is a reality we can’t fight. The Central Place will provide a way for anyone to find the communication tools used by any group. (+10, 0, -0)

      #topic Ticket #198

      #agreed Document proposed in ticket #198 is accepted for delivery to Legal for drafting updates to the Trademark policy. (+10, 0, -0)

      #topic Code of Conduct enforcement

      #agreed The FCAIC is empowered to take action on Code of Conduct reports with an additional +1 from another core Council member or the Diversity & Inclusion Advisor and report back to Council. (+10, 0, -0)

      #topic Ask Fedora and getting help

      #agreed Council authorizes the hosting of a separate Discourse instance to replace ask.fedoraproject.org to be funded out of Fedora community budget. (+9, 0, 0)


      The post Fedora Council December 2018 Hackfest Report appeared first on Fedora Community Blog.