Fedora People

GUADEC 2017 Notes

Posted by Petr Kovar on August 17, 2017 05:08 PM

With GUADEC 2017 and the unconference days over, I wanted to share a few conference and post-conference notes with a broader audience.

First of all, as others have reported, at this year’s GUADEC it was great to see an actual increase in the number of attendees compared to previous years. This shows that 20 years on, the community as a whole is still healthy and doing well.

<figure class="wp-caption aligncenter" id="attachment_405" style="width: 660px"><figcaption class="wp-caption-text">At the conference venue.</figcaption></figure>

While the Manchester weather was quite challenging, the conference was well-organized and I believe we all had a lot of fun both at the conference venue and at social events, especially at the awesome GNOME 20th Birthday Party. Kudos to all who made this happen!

<figure class="wp-caption aligncenter" id="attachment_406" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>

As I reported at the GNOME Foundation AGM, the docs team has been a bit quieter recently than in the past, and we would like to reverse this trend going forward.

<figure class="wp-caption aligncenter" id="attachment_411" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>
  • We held a shared docs and translation session for newcomers and regulars alike on the first two days of the post-GUADEC unconference. I was happy to see new faces showing up, and to have a chance to work a bit with long-time contributors. Special thanks go to Kat for managing the docs-feedback mailing list queue, and to Andre for a much needed docs bug triage.

    <figure class="wp-caption aligncenter" id="attachment_413" style="width: 660px"><figcaption class="wp-caption-text">Busy working on docs and translations at the unconference venue.</figcaption></figure>

  • Shaun worked on a new publishing system for help.gnome.org that could replace the current library-web scripts, which require release tarballs to get the content updated. The new platform would be a Pintail-based website with (almost) live content updates.
  • Localization-wise, there was some discussion around language packs, L10n data installation, and initial-setup, spearheaded by Jens Petersen. In gnome-getting-started-docs we continue to replace size-heavy tutorial video files with lightweight SVG files, but there is still a lot of other locale data that should ideally be installed on the user’s machine automatically once the user’s locale preference is known, which is not what happens today. Support for that will likely require more input from PackageKit folks as well as from downstream installer developers.
  • The docs team also announced a change of leadership, with Kat passing the team leadership to me at GUADEC.
  • In other news, I announced a docs string freeze pilot that we plan to run post-GNOME 3.26.0 to allow translators more time to complete user docs translations. Details were posted to the gnome-doc-list and gnome-i18n mailing lists. Depending on the community feedback we receive, we may run the program again in the next development cycle.
  • The docs team also had to cancel the planned Open Help Conference Docs Sprint due to most core members being unavailable around that time. We’ll try to find a better time for a docs team meetup later this year or in early 2018. Let me know if you want to attend; docs sprints are open to everybody interested in GNOME documentation, upstream or downstream.
<figure class="wp-caption aligncenter" id="attachment_412" style="width: 660px"><figcaption class="wp-caption-text">At the closing session.</figcaption></figure>

Last but not least, I’d like to say thank you to the GNOME Foundation and the Travel Committee for their continued support and for sponsoring me again this year.

PHP version 7.0.23RC1 and 7.1.9RC1

Posted by Remi Collet on August 17, 2017 12:12 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.1.9RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 26 or the remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

RPMs of PHP version 7.0.23RC1 are available as an SCL in the remi-test repository and as base packages in the remi-test repository for Fedora 25 or the remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

PHP version 7.2 is in its development phase; version 7.2.0beta3 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.9RC1 is also available in Fedora rawhide (for QA).

An RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

5 apps to install on your Fedora Workstation

Posted by Fedora Magazine on August 17, 2017 08:00 AM

A few weeks ago, Fedora 26 was released. Every release of Fedora brings new updates and new applications into the official software repositories. Whether you were already a Fedora user and upgraded or you are a first-time user, you might be looking for some cool apps to try out on your Fedora 26 Workstation. In this article, we’ll round up five apps that you might not have known were available in Fedora.

Try out a different browser

By default, Fedora includes the Firefox web browser. But starting with Fedora 25, Chromium (the open source version of Chrome) is also packaged in Fedora. You can learn how to install and start using Chromium below.

How to install Chromium in Fedora

<iframe class="wp-embedded-content" data-secret="tnjeTElMB9" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/install-chromium-fedora/embed/#?secret=tnjeTElMB9" title="“How to install Chromium in Fedora” — Fedora Magazine" width="600"></iframe>

Sort and categorize your music

Do you have a Fedora Workstation filled with local music files? When you open them in a music player, is the metadata missing or just plain wrong? MusicBrainz is the Wikipedia of music metadata, and you can take back control of your music by using Picard. Picard is a tool that works with the MusicBrainz database to pull in correct metadata to sort and organize your music. Learn how to get started with Picard in Fedora Workstation below.

Picard brings order to your music library

<iframe class="wp-embedded-content" data-secret="lljIa7PX0q" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/picard-brings-order-music-library/embed/#?secret=lljIa7PX0q" title="“Picard brings order to your music library” — Fedora Magazine" width="600"></iframe>

Get ready for the eclipse

August 21st is the big day for the total solar eclipse in North America. Want to get a head start by getting to know the sky beforehand? You can map out the sky using Stellarium, an open source planetarium application available in Fedora now. Learn how to install Stellarium before the skies go dark in this article.

Track the night sky with Stellarium on Fedora

<iframe class="wp-embedded-content" data-secret="YgCLgONqF0" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/stellarium-on-fedora/embed/#?secret=YgCLgONqF0" title="“Track the night sky with Stellarium on Fedora” — Fedora Magazine" width="600"></iframe>

Control your camera from Fedora

Have an old camera lying around? Or maybe you want to upgrade your webcam by using an existing camera? Entangle lets you take control of your camera, all from the comfort of your Fedora Workstation. You can even adjust aperture, shutter speed, ISO settings, and more. Check out how to get started with it in this article.

Tether a digital camera using Entangle

<iframe class="wp-embedded-content" data-secret="pUjGLOPXP7" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/tether-digital-camera-fedora/embed/#?secret=pUjGLOPXP7" title="“Tether a digital camera using Entangle” — Fedora Magazine" width="600"></iframe>

Share Fedora with a friend

One of the last things you might want to do with your Fedora Workstation is share it! With Fedora Media Writer, you can create a USB stick loaded with any Fedora edition or spin of your choice and share it with a friend. Learn how to start burning your own USB drives in the how-to article below.

How to make a Fedora USB stick

<iframe class="wp-embedded-content" data-secret="YKp7rYathj" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/make-fedora-usb-stick/embed/#?secret=YKp7rYathj" title="“How to make a Fedora USB stick” — Fedora Magazine" width="600"></iframe>

Creating heat maps using the new syslog-ng geoip2 parser

Posted by Peter Czanik on August 17, 2017 06:26 AM

The new geoip2 parser of syslog-ng 3.11 is not only faster than its predecessor, but also provides a lot more detailed geographical information about IP addresses. In addition to the usual country name and longitude/latitude information, it also provides the continent, time zone, postal code and even county name. Some of these are available in multiple languages. Learn how you can utilize this information by parsing logs from iptables using syslog-ng, storing them in Elasticsearch, and displaying the results in Kibana!

Before you begin

First of all, you need some iptables log messages. In my case, I used logs from my Turris Omnia router. You could use logs from another device running iptables. Alternatively, with a small effort, you can replace iptables with an Apache web server or any other application that saves IP addresses as part of its log message.

You will also need a syslog-ng version that includes the new geoip2 parser, first released as part of version 3.11.1.

As syslog-ng packages in Linux distributions do not include the Elasticsearch destination of syslog-ng, you either need to compile it yourself or use one of the unofficial packages, as listed at https://syslog-ng.org/3rd-party-binaries/.

Last but not least, you will also need Elasticsearch and Kibana installed. I used version 5.5.1 of the Elastic stack, but any other version should work just fine.

What is new in GeoIP

The geoip2 parser of syslog-ng uses the maxminddb library to look up geographical information. It is considerably faster than its predecessor and also provides a lot more detailed information.

As usual, the packaging of the maxminddb tools differs between Linux distributions. You need to make sure that a tool to download and update database files is installed, together with the mmdblookup utility. On most distributions you need to run the former at least once, as usually only the old-style databases are packaged. The latter helps you list what kind of information is available in the database.

Here is a shortened example:

[root@localhost-czp ~]# mmdblookup --file /usr/share/GeoIP/GeoLite2-City.mmdb --ip

  {
    "city": {
      "geoname_id": 3054643 <uint32>
      "names": {
        "de": "Budapest" <utf8_string>
        "en": "Budapest" <utf8_string>
        "es": "Budapest" <utf8_string>
        "fr": "Budapest" <utf8_string>
        "ja": "ブダペスト" <utf8_string>
        "pt-BR": "Budapeste" <utf8_string>
        "ru": "Будапешт" <utf8_string>
        "zh-CN": "布达佩斯" <utf8_string>
      }
    }
    "location": {
      "accuracy_radius": 100 <uint16>
      "latitude": 47.500000 <double>
      "longitude": 19.083300 <double>
      "time_zone": "Europe/Budapest" <utf8_string>
    }
  }

As you can see from the above command line, I use the freely available GeoLite2-City database. syslog-ng also supports the commercial variant, which is more precise and more up to date.

In my configuration example below, I chose to simply store all available geographical data, but normally that is a waste of resources. You can figure out the hierarchy of names based on the JSON output of mmdblookup.
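
To see how that hierarchy turns into flat name-value pairs, here is a small Python sketch of the flattening (the sample data and the flatten helper are mine for illustration; syslog-ng does the equivalent internally):

```python
# Illustrative only: how a nested GeoIP2 lookup result maps to the
# flat, dot-separated names used in the syslog-ng configuration below.

def flatten(node, prefix="geoip2"):
    """Flatten a nested dict into dotted name/value pairs."""
    pairs = {}
    for key, value in node.items():
        name = f"{prefix}.{key}"
        if isinstance(value, dict):
            pairs.update(flatten(value, name))
        else:
            pairs[name] = value
    return pairs

lookup = {
    "city": {"names": {"en": "Budapest"}},
    "location": {"latitude": 47.5, "longitude": 19.0833},
}

pairs = flatten(lookup)
print(pairs["geoip2.city.names.en"])      # Budapest
print(pairs["geoip2.location.latitude"])  # 47.5
```

This is why names like ${geoip2.location.latitude} show up in the configuration snippets below.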

Configure Elasticsearch

The installation and configuration of Elasticsearch and Kibana are beyond the scope of this blog. The only thing I want to note here is that before sending logs from syslog-ng to Elasticsearch, you have to configure mapping for geo information.

If you follow my configuration examples below, you can use the following mapping. I use “syslog-ng” as the index name.

   "mappings" : {
      "_default_" : {
         "properties" : {
            "geoip2" : {
               "properties" : {
                  "location2" : {
                     "type" : "geo_point"

Configure syslog-ng

Complete these steps to get your syslog-ng ready for creating heat maps:

1. First of all, you need some logs. In my test environment I receive iptables logs from my router over a TCP connection to port 514. These are filtered on the sender side, so no other logs are included. If you do not have filtered logs, in most cases you can filter for firewall logs based on the program name.

source s_tcp {
  tcp(ip("") port("514"));
};

2. Process log messages. The first step of processing is using the key-value parser. It creates name-value pairs from the content of the message. You can store all or part of these name-value pairs in a database and search them at a field level instead of the whole message. A prefix for the name is used to make sure that the names do not overlap.

parser p_kv { kv-parser(prefix("kv.")); };

The source IP address of the attacker is stored in the kv.SRC name-value pair.
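
As an illustration of what the key-value parser produces, here is a rough Python equivalent (the sample log line and the kv_parse helper are illustrative only; the real work is done by syslog-ng's kv-parser):

```python
# Illustrative sketch of what kv-parser(prefix("kv.")) does to an
# iptables log message: each KEY=VALUE token becomes a prefixed
# name-value pair.

def kv_parse(message, prefix="kv."):
    pairs = {}
    for token in message.split():
        if "=" in token:
            key, _, value = token.partition("=")
            pairs[prefix + key] = value
    return pairs

msg = "IN=eth2 OUT= SRC=192.0.2.10 DST=198.51.100.1 PROTO=TCP SPT=51812 DPT=22"
pairs = kv_parse(msg)
print(pairs["kv.SRC"])  # 192.0.2.10
```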

3. Let’s analyze the kv.SRC name-value pair further, using the geoip2 parser. As usual, we use a prefix to avoid any naming problems. Note that the location of the database might be different on your system.

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

4. The next step is necessary to ensure that location information is in the form expected by Elasticsearch. It looks slightly more complicated than for the first version of the GeoIP parser as there is more information available and information is now structured.

rewrite r_geoip2 {
    # store latitude and longitude as a single "lat,lon" geo_point value
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value( "geoip2.location2" ),
        condition( not "${geoip2.location.latitude}" == "" )
    );
};

5. In the Elasticsearch destination we assume that both the cluster and index names are “syslog-ng”. We set the flush-limit to a low value as we do not expect a high message rate. A low flush-limit makes sure that we see logs in Kibana in near real-time. By default, it is set to a much higher value, which is perfect for performance. Unfortunately, timeout is not implemented in the Java destinations so with the default setting and low message rate, you might need to wait an hour before anything shows up in Elasticsearch.

destination d_elastic {
  elasticsearch2 (
    cluster("syslog-ng")
    index("syslog-ng")
    type("test")  # mandatory document type; any name works
    flush-limit("1")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  );
};

6. Finally we need a log statement which connects all of these building blocks together:

log {
    source(s_tcp);
    parser(p_kv);
    parser(p_geoip2);
    rewrite(r_geoip2);
    destination(d_elastic);
};

Configuration to copy & paste

To make your life easier, I compiled these configuration snippets in one place for better copy & paste experience. You should append it to your syslog-ng.conf or place it in a separate .conf file under /etc/syslog-ng/conf.d/ if supported by your Linux distribution.

source s_tcp {
  tcp(ip("") port("514"));
};

parser p_kv { kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

rewrite r_geoip2 {
    set(
        "${geoip2.location.latitude},${geoip2.location.longitude}",
        value( "geoip2.location2" ),
        condition( not "${geoip2.location.latitude}" == "" )
    );
};

destination d_elastic {
  elasticsearch2 (
    cluster("syslog-ng")
    index("syslog-ng")
    type("test")
    flush-limit("1")
    template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
  );
};

log {
    source(s_tcp);
    parser(p_kv);
    parser(p_geoip2);
    rewrite(r_geoip2);
    destination(d_elastic);
};

Visualize your data

By now you have configured syslog-ng to parse iptables logs, added geographical information to them, and stored the results in Elasticsearch. The next step is to verify that logs arrive in Elasticsearch. You should see messages in Kibana where many field names start with “kv.” and “geoip2.”

Once you have verified that logs are arriving in Elasticsearch, you can start creating some visualizations. There are numerous tutorials on how to do this from Elastic and others.

You can see a world map below visualizing the IP addresses that attempt to connect to my router. You can easily create such a map just by clicking on the “geoip2.location2” field in the “Available fields” list in Kibana, and then clicking on the “Visualize” button when it appears below the field name.

<figure class="wp-caption aligncenter" id="attachment_2415" style="width: 600px">world map<figcaption class="wp-caption-text">Map of IP addresses from attempted connections.</figcaption></figure>

Even though I left out many details, this post is already quite lengthy, so instead I will point you to some further reading.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Creating heat maps using the new syslog-ng geoip2 parser appeared first on Balabit Blog.

LxQT Test Day: 2017-08-17

Posted by Alberto Rodriguez (A.K.A bt0) on August 17, 2017 02:29 AM

Thursday, 2017-08-17, is the LxQT Test Day! As part of this planned Change for Fedora 26, we need your help to test LxQT!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Original note here:

LxQT Test Day: 2017-08-17

<iframe class="wp-embedded-content" data-secret="oCw5tobFpO" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/lxqt-test-day-2017-08-17/embed/#?secret=oCw5tobFpO" title="“LxQT Test Day: 2017-08-17” — Fedora Community Blog" width="600"></iframe>


Posted by Casper on August 16, 2017 11:22 PM

For those who have a Freebox, I found a fun little trick. If, like me, you occasionally want to run quick diagnostics of all the network components (short outages and so on), displaying the Freebox’s uptime in a terminal with a single command is definitely interesting.

You probably know the address for displaying the full report:


So we can already display the full report in a terminal:

casper@falcon ~ % export FBX=http://mafreebox.free.fr/pub/fbx_info.txt
casper@falcon ~ % curl $FBX

It’s a start, but it still floods the terminal quite a bit; we can do better...

casper@falcon ~ % curl $FBX 2>/dev/null | grep "mise en route" | cut -d " " -f10,11,12,13
4 heures, 33 minutes

Well, I didn’t invent anything here, but I hope this tip will be useful to you some day. Don’t hesitate to leave a thumbs up, a comment, anything you like, and above all subscribe to be automatically notified when a new video comes out!

Going to retire Fedora's OmegaT package

Posted by Ismael Olea on August 16, 2017 10:00 PM

OmegaT logo

Well, the time has come and I must face my responsibility on this.

My first important package in Fedora was for OmegaT. AFAIK OmegaT is the best FLOSS computer-aided translation tool available. Over time, OmegaT has enjoyed very active development with a significant (to me) handicap: new releases add new features with new dependencies on Java libraries not available in Fedora. As you know, updating the package requires adding each of those libraries as a new package. But I can’t find the time for such an effort. That’s the reason the last Fedora version is 2.6.3 while the latest upstream versions are 3.6.0 / 4.1.2.

So, I give up. I want to retire the package from Fedora because I’m sure I will not be able to update it anymore.

I’ll wait a few days for someone to express interest in taking ownership. Otherwise, I’ll start the retirement process.

PS: OTOH I plan to publish OmegaT as a flatpak package via Flathub. Seems to me it would be a lot easier to maintain that way. I’m aware Flathub is out of the scope of Fedora :-/

PPS: I sent an announcement to the Fedora devel mailing list.

Recordando el planeta Chitón

Posted by Ismael Olea on August 16, 2017 10:00 PM

Planeta Chitón, tus amigos no te olvidan.

New badge: Badger Padawan !

Posted by Fedora Badges on August 16, 2017 04:03 PM
Badger Padawan: You attended a Fedora Badges workshop! May the badger be with you...

FreeNAS and check_mk

Posted by Jens Kuehnel on August 16, 2017 12:06 PM


I’m setting up two FreeNAS servers for backup and archiving, and I really like FreeNAS 11. Thank goodness I didn’t have time to update them to FreeNAS Corral. 🙂

But I’m using check_mk for monitoring, and I would like to use it to monitor FreeNAS as well. There is a check_mk agent for FreeBSD, so the only problem is running it.

I created this script to run as an Init/Shutdown Script (both pre-init and post-init). It creates everything you need; just define BASEDIR at the beginning and put the check_mk_agent for FreeBSD in that directory. Make sure both this script (check_mk_setup) and check_mk_agent are executable.

You also need to make sure inetd is running. I enable tftpd for that. Other services might work as well, but I have only tested it with tftpd.

# Append the check_mk entries if they are not already present,
# then signal inetd to reload its configuration.

if ! grep -q checkmk /conf/base/etc/inetd.conf; then
  echo "checkmk stream tcp nowait root $BASEDIR/check_mk_agent check_mk_agent" >> /conf/base/etc/inetd.conf
fi

if ! grep -q checkmk /conf/base/etc/services; then
  echo "checkmk 6556/tcp #check_mk" >> /conf/base/etc/services
fi

if ! grep -q checkmk /etc/services; then
  echo "checkmk 6556/tcp #check_mk" >> /etc/services
fi

killall -1 inetd

After the next reboot the system can be monitored by check_mk. It even survived the upgrade from FreeNAS 10 to 11.

LxQT Test Day: 2017-08-17

Posted by Fedora Community Blog on August 16, 2017 09:06 AM

Thursday, 2017-08-17, is the LxQT Test Day! As part of this planned Change for Fedora 26, we need your help to test LxQT!

Why test LxQT?

LXQt is the Qt port and the upcoming version of LXDE, the Lightweight Desktop Environment. It is the product of the merge between the LXDE-Qt and the Razor-qt projects: A lightweight, modular, blazing-fast and user-friendly desktop environment.

We hope to see whether it’s working well enough and catch any remaining issues.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles!

The post LxQT Test Day: 2017-08-17 appeared first on Fedora Community Blog.

IoT Security for Developers

Posted by Russel Doty on August 15, 2017 10:33 PM

Previous articles focused on how to securely design and configure a system based on existing hardware, software, IoT Devices, and networks. If you are developing IoT devices, software, and systems, there is a lot more you can do to develop secure systems.

The first thing is to manage and secure communications with IoT Devices. Your software needs to be able to discover, configure, manage and communicate with IoT devices. By considering security implications when designing and implementing these functions you can make the system much more robust. The basic guideline is don’t trust any device. Have checks to verify that a device is what it claims to be, to verify device integrity, and to validate communications with the devices.

Have a special process for discovering and registering devices, and restrict access to it. Do not automatically detect and register any device that pops up on the network! Have a mechanism for pairing devices with the gateway, such as a special pairing mode that must be invoked on both the device and the gateway, or a requirement to manually enter a device serial number or address into the gateway as part of the registration process. For industrial applications, adding devices is a deliberate process – this is not a good operation to fully automate!

A solid approach to gateway and device identity is to have a certificate provisioned onto the device at the factory, by the system integrator, or at a central facility. It is even better if this certificate is backed by a hardware root of trust that can’t be copied or spoofed.

Communications between the gateway and the device should be deliberately designed. Instead of a general network connection, which can be used for many purposes, consider using a specialized interface. Messaging interfaces are ideal for many IoT applications. Two of the most popular are MQTT (Message Queuing Telemetry Transport) and CoAP. In addition to their many other advantages, these messaging interfaces only carry IoT data, greatly reducing their usefulness as an attack vector.

Message based interfaces are also a good approach for connecting the IoT Gateway to backend systems. An enterprise message bus like AMQP is a powerful tool for handling asynchronous inputs from thousands of gateways, routing them, and feeding the data into backend systems. A messaging system makes the total system more reliable, more robust, and more efficient – and makes it much easier to implement large scale systems! Messaging interfaces are ideal for handling exceptions – they allow you to simply send the exception as a regular message and have it properly processed and routed by business logic on the backend.

Messaging systems are also ideal for handling unreliable networks and heavy system loads. A messaging system will queue up messages until the network is available. If a sudden burst of activity causes the network and backend systems to be overloaded the messaging system will automatically queue up the messages and then release them for processing as resources become available. Messaging systems allow you to ensure reliable message delivery, which is critical for many applications. Best of all, messaging systems are easy for a programmer to use and do the hard work of building a robust communications capability for you.
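
The store-and-forward behaviour described above can be sketched with a toy Python model (illustrative only; real deployments would rely on MQTT or AMQP brokers for this):

```python
# Toy model of store-and-forward messaging: messages are queued while
# the "network" is down and drained, in order, once it comes back.
# Illustrative only -- a real system would use MQTT, AMQP, etc.
from collections import deque

class MessageBuffer:
    def __init__(self):
        self.pending = deque()   # messages waiting for delivery
        self.delivered = []      # messages that reached the backend
        self.network_up = False

    def send(self, msg):
        self.pending.append(msg)  # always enqueue first
        self.flush()

    def flush(self):
        # Release queued messages only while the network is available.
        while self.network_up and self.pending:
            self.delivered.append(self.pending.popleft())

buf = MessageBuffer()
buf.send("temp=21.5")   # network down: stays queued
buf.send("temp=21.7")
buf.network_up = True   # network restored
buf.flush()             # queued messages drain in order
print(buf.delivered)    # ['temp=21.5', 'temp=21.7']
```

The same shape handles load spikes: the queue absorbs the burst and the backend drains it as resources allow.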

No matter what type of interface you are using, it is critical to sanitize your inputs. Never just pass through information from a device – instead, check it to make sure that it is properly formatted, that it makes sense, that it does not contain a malicious payload, and that the data has not been corrupted. The overall integrity of an IoT system is greatly enhanced by ensuring the quality of the data it operates on. Perhaps the best example of this is Little Bobby Tables from XKCD (xkcd.com):

Importance of sanitizing your input.

On a more serious level, poor input sanitization is responsible for many security issues. Programmers should assume that users can’t be trusted and that every interaction is a potential attack.
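
A minimal Python sqlite3 sketch of the Bobby Tables lesson: parameterized queries keep hostile input from being interpreted as SQL (the table and data here are made up for illustration):

```python
# Minimal example of input sanitization for SQL: a parameterized
# query stores a hostile "student name" as plain data instead of
# executing it. Table and payload are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

hostile = "Robert'); DROP TABLE students;--"

# Unsafe code would concatenate the payload into the SQL string.
# The ? placeholder passes it as a bound value instead.
conn.execute("INSERT INTO students (name) VALUES (?)", (hostile,))

rows = conn.execute("SELECT name FROM students").fetchall()
print(rows[0][0])  # the payload is stored as an ordinary string
```

The same principle applies to any device input: treat it as data, never as code.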

Bodhi 2.9.1 released

Posted by Bodhi on August 15, 2017 09:22 PM

2.9.1 is a security release for CVE-2017-1002152.

Release contributors

Thanks to Marcel for reporting the issue. Randy Barlow wrote the fix.

Fedora Council Summer 2017 Election Results

Posted by Till Maas on August 15, 2017 09:06 PM


The results for the Fedora Council Summer 2017 Election are published. Congratulations to Justin W. Flory for winning! He is very committed and I am looking forward to his efforts to improve communication in Fedora.

Also, I would like to thank everyone who voted for me. Thank you very much for the trust you put in me! Since the FESCo election was restarted, you have to vote again if you voted last week. On a related note, my candidate interview is now available on the Community Blog. Please let me know if you have any questions or remarks.

A proposal: Ambassadors and Fedora strategy

Posted by Fedora Community Blog on August 15, 2017 05:06 PM

Fedora is big. We are a huge community of people with diverse interests. We have different ideas for what we want to build, and we want different things in return from our collective effort. At the same time, we are one project with shared goals and limited resources. We are more effective in this competitive world when we agree on common goals and work towards those, rather than everyone going in the direction each person thinks is best individually.¹

The Fedora Council is tasked with taking community input and shaping this shared strategy. As part of this, we’ve written a new mission statement and have a draft overview page presenting it. We’ve said for a while that we want the work of Fedora Ambassadors to align with this mission directly. We’re getting feedback, though, that this is easier to say than to put into practice, which is understandable because, by nature, mission statements are high-level.

So, I have a proposal. As part of the Fedora Council’s charter, we have Fedora Objectives:

On an ongoing basis, including sessions at Flock and in public online meetings, the Council will identify two to four key community objectives with a timeframe of approximately eighteen months, and appoint Objective Leads for each goal. […]

Each objective will be documented with measurable goals, and the objective lead is responsible for coordinating efforts to reach those goals, evaluating and reporting on progress, and working regularly with all relevant groups in Fedora to ensure that progress is made.

I propose that from now forward, all events and spending by Ambassadors should be directly related to the target audience of a Fedora Edition or to a current Objective.²

Each Edition has a Product Requirements Document which describes the specific use-cases it is meant to address and gives a target audience for each — Atomic Host, Server, and Workstation. We should not aim scattershot at general audiences and hope some aspect of Fedora resonates. Instead, we should go to events centered around these specific groups of people and demonstrate the solutions we have for their real-world problems.

Unlike the mission, Objectives are scoped to a 12-18 month timeframe, and are concrete and immediately actionable. Each has an Objective Lead who is a subject-matter expert on the topic and who can be a resource for identifying related conferences and outreach opportunities. And, by definition, these Objectives will be aligned with the mission and broader project goals.

You might be, at this point, saying “But wait! I personally don’t care about any of the Editions or Modularity or Continuous Integration! Am I left out, now?”

Actually, not at all. We do have many different interests, and there is room for 2-4 concurrent Objectives. Anyone in the community can put together a proposal, and if we collectively agree that it’s important, anyone can be the Objective Lead. So, if many Ambassadors feel there’s something Fedora should be doing that isn’t covered currently, there is a straightforward path — form an Objective around it.

An Objective is a statement of a goal that is achievable in a year or year and a half, along with a plan to measure the results. Objectives could be technical advances, but they wouldn’t have to be. Examples³ might include:

Fedora for Students:

  • We increase Fedora’s popularity among university students through Install Days and new Fedora User Groups.
    • Measurable Result: We will have 100 install days at Universities in the next 12 months, with Fedora installed on 10,000 new systems.
    • Measurable Result: We will have 10 new Fedora User Groups with regular attendance in the next 12 months.

Fedora Python Classroom (For the Win):

  • We get Fedora’s Python Classroom Lab into classrooms worldwide.
    • Measurable Result: 10 professors or teachers new to Python Classroom using it in the next 12 months.
    • Measurable Result: 10,000 views on YouTube tutorials based around Python Classroom.

Release Parties (for New Contributors):

  • We will raise awareness of Fedora by holding well-publicized release-day parties committed to attracting and onboarding new contributors.
    • Measurable Result: 10 parties held at locations across the globe with consistent branding and collective marketing.
    • Measurable Result: 10 new Fedora accounts from each party.
    • Measurable Result: 10 new active contributors at the end of 12 months.

Leading an Objective is work and a real commitment, but I don’t think that’s a problem for this proposal. In fact, it’s a strength — if there isn’t enough community interest to support an Objective, it’s probably not something we should be focusing hundreds of other people on, either.⁴

I suggest that Ambassadors as an organization focus on covering our Objectives and the Editions every year, worldwide. Let’s discuss this idea, and if we generally agree, I would like FAmSCo to adopt this as policy going forward. I’m posting this to the Fedora Community Blog, to the Fedora Ambassador’s Mailing List, and to the Fedora Council Discussion List. Since the Ambassador’s list isn’t open to the public, let’s use the Council list as the primary place for this conversation — thanks!

— Matthew Miller, Fedora Project Leader






  1. That doesn’t mean we all have to do the same thing, or even completely agree. Recommended reading: this great site on consensus-based decision-making: http://www.consensusdecisionmaking.org/
  2. Although this location may change soon, the current list is at https://fedoraproject.org/wiki/Objectives. Currently, Modularity Prototype (objective, docs) is the only active Objective, but we are also considering a proposal for Fedora Atomic Continuous Integration (objective, docs).
  3.  Thanks to Langdon for suggesting non-technical Objective ideas. I’ve given one example focused on growing a certain user audience, one on promoting a particular solution Fedora contributors have built, and one on growing the Fedora contributor community itself. If you are particularly inspired by any of these, I’d be happy to work on fleshing out a full Objective proposal.
  4. None of this means that people are blocked from anything constructive they want to work on, even if it’s not something we collectively identify as a focus. We will have more success creating and sustaining momentum with a directed official effort, but as always in open source, I expect individual people to put effort towards what they personally find interesting — that’s as it should be!

The post A proposal: Ambassadors and Fedora strategy appeared first on Fedora Community Blog.

Google - all features and options

Posted by mythcat on August 15, 2017 04:48 PM
Not all Google options are available in all countries.
You should choose options depending on your country and their availability.
This will spare us unsuccessful attempts and queries to Google.
Here are all the Google options available now.

My first Keynote at CONECIT 2017 in Tingo Maria

Posted by Julita Inca Chiroque on August 15, 2017 04:33 PM

Yesterday, I opened the keynote session at CONECIT 2017 with a talk lasting an hour and a half. I presented some of the experiences I have had with HPC (High Performance Computing) at universities and during ISC 2016, to show what is going on in the world regarding HPC, not only in architecture but also in programming. The video is coming soon 🙂 It was a large audience, gathering more than 1000 students and professionals in Computer Science and all the Engineering Schools in Peru. People participated in the questions I asked, and they seemed very interested in the topic. I want to thank all the people who helped me backstage; this is not only my effort, this is a community effort! Thanks Leyla Marcelo and Toto Cabezas, part of GNOME Lima! ❤ Thanks so much CONECIT 2017 – Tingo Maria 😀

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: CONECIT, CONECIT 2017, CONECIT TGI, CONECIT Tingo Maria, fedora, GNO, GNOME, High Performance, HPC, HPC in the jungle, Julita Inca, Julita Inca Chiroque, KeyNote, Selva Peru

Episode 59 - The VPN Episode

Posted by Open Source Security Podcast on August 15, 2017 03:14 PM
Josh and Kurt talk about VPNs and the upcoming eclipse.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="http://html5-player.libsyn.com/embed/episode/id/5644794/height/90/width/640/theme/custom/autonext/no/thumbnail/yes/autoplay/no/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="640"></iframe>

Show Notes

The workshop on Fedora Hubs at Flock 2017 will be awesome

Posted by Aurélien Bompard on August 15, 2017 03:08 PM

TL;DR: come to the Hubs workshop at Flock! 🙂

This is a shameless plug, I admit.

In a couple weeks, a fair number of people from the Fedora community will gather near Boston for the annual Flock conference. We’ll be able to update each other and work together face-to-face, which does not happen so often in Free Software.

For some months I’ve been working on the Fedora Hubs project, a web interface to make communication and collaboration easier for Fedora contributors. It really has the potential to change the game for a lot of us who still find some internal processes a bit tedious, and to greatly help new contributors.

The Fedora Hubs page is a personalized user or group page composed of many widgets, each of which can inform you, remind you, or help you tackle some part of your life as a contributor to the Fedora Project. And it updates in real time.

I’ll be giving a workshop on Wednesday 30th at 2:00PM to introduce developers to Hubs widgets. In half an hour, I’ll show you how to make a basic widget that will already be directly useful to you if you’re a packager. Then you’ll be able to join us in the following hackfest and contribute to Hubs. Maybe you have a great idea for a widget that would simplify your workflow. If so, that will be the perfect time to design and/or write it.

You need to know Python, and be familiar with basic web infrastructure technologies: HTML and CSS, requests and responses, etc. No Javascript knowledge needed at that point, but if you want to make a complex widget you’ll probably need to know how to write some JS (jQuery or React). The Hubs team will be around to help and guide you.

The script of the workshop is here: https://docs.pagure.org/fedora-hubs-widget-workshop/. Feel free to test it out and tell me if something goes wrong in your environment. You can also play with our devel Hubs instance, which will probably give you some ideas for the hackfest.

Remember folks: Hubs is a great tool, it will (hopefully) be central to contributors’ workflows throughout the Fedora project, and it’s the perfect time to design and write the widgets that will be useful for everyone. I hope to see you there! 🙂

ANNOUNCE: virt-viewer 6.0 release

Posted by Daniel Berrange on August 15, 2017 02:20 PM

I am happy to announce a new bugfix release of virt-viewer 6.0 (gpg), including experimental Windows installers for Win x86 MSI (gpg) and Win x64 MSI (gpg). The virsh and virt-viewer binaries in the Windows builds should now successfully connect to libvirtd, following fixes to libvirt’s mingw port.

Signatures are created with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R)

All historical releases are available from:


Changes in this release include:

  • Mention use of ssh-agent in man page
  • Display connection issue warnings in main window
  • Switch to GTask API
  • Add support changing CD ISO with oVirt foreign menu
  • Update various outdated links in README
  • Avoid printing password in debug logs
  • Pass hostname to authentication dialog
  • Fix example URLs in man page
  • Add args to virt-viewer to specify whether to resolve VM based on ID, UUID or name
  • Fix misc runtime warnings
  • Improve support for extracting listening info from XML
  • Enable connecting to SPICE over UNIX socket
  • Fix warnings with newer GCCs
  • Allow controlling zoom level with keypad
  • Don’t close app during seamless migration
  • Don’t show toolbar in kiosk mode
  • Re-show auth dialog in kiosk mode
  • Don’t show error when cancelling auth
  • Change default screenshot name to ‘Screenshot.png’
  • Report errors when saving screenshot
  • Fix build with latest glib-mkenums

Thanks to everyone who contributed towards this release.

ANNOUNCE: libosinfo 1.1.0 release

Posted by Daniel Berrange on August 15, 2017 11:09 AM

I am happy to announce a new release of libosinfo version 1.1.0 is now available, signed with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R). All historical releases are available from the project download page.

Changes in this release include:

  • Force UTF-8 locale for new glib-mkenums
  • Avoid python warnings in example program
  • Misc test suite updates
  • Fix typo in error messages
  • Remove ISO header string padding
  • Disable bogus gcc warning about unsafe loop optimizations
  • Remove reference to fedorahosted.org
  • Don’t hardcode /usr/bin/perl, use /usr/bin/env
  • Support eject-after-install parameter in OsinfoMedia
  • Fix misc warnings in docs
  • Fix error propagation when loading DB
  • Add usb.ids / pci.ids locations for FreeBSD
  • Don’t include private headers in gir/vapi generation

Thanks to everyone who contributed towards this release.

Introducing InfluxDB: Time-series database stack

Posted by Justin W. Flory on August 15, 2017 08:30 AM
Introducing InfluxDB: Time-series database stack

Article originally published on Opensource.com.

The needs and demands of infrastructure environments change every year. With time, systems become more complex and involved. But when infrastructure grows and becomes more complex, that growth is meaningless if we don’t understand it and what’s happening in our environment. This is why monitoring tools and software are often used in these environments, so operators and administrators can see problems and fix them in real time. But what if we want to predict problems before they happen? Collecting metrics and data about our environment gives us a window into how our infrastructure is performing and lets us make predictions based on data. When we know and understand what’s happening, we can prevent problems before they happen.

But how do we collect and store this data? For example, if we want to collect data on the CPU usage of 100 machines every ten seconds, we’re generating a lot of data. On top of that, what if each machine is running fifteen containers? What if you want to generate data about each of those individual containers too? What about by the process? This is where time-series data becomes helpful. Time-series databases store time-series data. But what does that mean? We’ll explain all of this and more and introduce you to InfluxDB, an open source time-series database. By the end of this article, you will understand…

  • What time-series data / databases are
  • Quick introduction to InfluxDB and the TICK stack
  • How to install InfluxDB and other tools

Introducing time-series concepts


Example of a table, or how an RDBMS like MySQL stores data. Image from DevShed.

If you’re familiar with relational database management software (RDBMS), like MySQL, then tables, columns, and primary keys are familiar terms. Everything is like a spreadsheet, with columns and rows. Some data might be unique, other parts might be the same as in other rows. RDBMSes like MySQL are widely used and are great for reliable transactions that follow ACID (Atomicity, Consistency, Isolation, Durability) compliance.

With relational database software, you’re usually working with data you could model in a table. You might update certain data by overwriting and replacing it. But what if you’re collecting data on something that generates a lot of it, and you want to watch it change over time? Take a self-driving car. The car is constantly collecting information about its environment. It takes this data and analyzes changes over time to behave correctly. The amount of data might be tens of gigabytes an hour. While you could use a relational database to collect this data, that’s not what it’s built for. When it comes to scaling and usability of the data you’re collecting, an RDBMS isn’t the best tool for the job.

Why time-series is a good fit

And this is where time-series data makes sense. Let’s say you’re collecting data about a city’s traffic, temperatures from farming equipment, or the production rate of an assembly line. Instead of going into a table with rows and columns, imagine pushing multiple rows of data that are uniquely sorted by a timestamp. This visual might help it make more sense.


Imagine rows and rows of data, uniquely sorted by timestamps. Image from Timescale.

Having the data in this format makes it easier to track and watch change over time. As data accumulates, you can see how something behaved in the past, how it’s behaving now, and how it might behave in the future. Your options for making smarter data decisions expand!

Curious how the data is stored and formatted? It depends on the time-series database (TSDB) you use. InfluxDB stores the data in the Line Protocol format. Queries return the data in JSON.


How InfluxDB stores time-series data in Line Protocol. Image from Roberto Gaudenzi.

If you’re still confused or trying to understand time-series data or why you would want to use it over another solution, you can read an excellent, in-depth explanation from Timescale’s blog or InfluxData’s blog.
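To make the shape of line protocol concrete, here is a tiny Python sketch. The `to_line_protocol` helper is hypothetical (it is not part of any InfluxDB client library) and it skips the escaping and string-quoting rules real clients apply:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    # Line protocol layout: measurement,tag_set field_set timestamp
    # Tags and fields are comma-separated key=value pairs; the timestamp
    # is in nanoseconds.
    tag_str = "".join(",%s=%s" % (k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    return "%s%s %s %d" % (measurement, tag_str, field_str, timestamp_ns)

print(to_line_protocol("disk_stats", {"host": "server01"},
                       {"diskspace_used": 42.5}, 1434055562000000000))
# disk_stats,host=server01 diskspace_used=42.5 1434055562000000000
```

One point per line of text: that simplicity is a large part of why ingestion can be so fast.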

InfluxDB: A time-series database

InfluxDB is open source time-series database software developed by InfluxData. It’s written in Go (a compiled language), which means you can start using it without installing any dependencies. It supports multiple data ingestion protocols, such as Telegraf (also from InfluxData), Graphite, collectd, and OpenTSDB. This leaves you with flexible options for how you want to collect data and where you’re pulling it from. It’s also one of the fastest-growing time-series databases available. You can find the source code for InfluxDB on GitHub.

This article will focus on three tools in InfluxData’s TICK stack for how you can build a time-series database and begin collecting and processing data.

TICK stack

InfluxData creates a platform based on four open source projects that work and play well with each other for time-series data. When used together, you can collect, store, process, and view the data easily. The four pieces of the platform are known as the TICK stack. This stands for…

  • Telegraf: Plugin-driven server agent for collecting / reporting metrics
  • InfluxDB: Scalable data store for metrics, events, and real-time analytics
  • Chronograf: Monitoring / visualization UI for TICK stack (not covered in this article)
  • Kapacitor: Framework for processing, monitoring, and alerting on time-series data

These tools work and integrate well with the other pieces by design. However, it’s also easy to substitute one piece out for another tool of your choice. For this article, we’ll explore three parts of the TICK stack: InfluxDB, Telegraf, and Kapacitor.


Diagram of how the different components of the TICK stack connect with each other. From influxdata.com.


InfluxDB

As mentioned before, InfluxDB is the time-series database (TSDB) of the TICK stack. Data collected from your environment is stored in InfluxDB. A few things make InfluxDB stand out from other time-series databases.

Emphasis on performance

InfluxDB is designed with performance as one of the top priorities. This allows you to use data quickly and easily, even under heavy loads. To do this, InfluxDB focuses on quickly ingesting the data and using compression to keep it manageable. To query and write data, it uses an HTTP(S) API.
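As a sketch of that API, here is how writing and querying look with the 1.x HTTP endpoints, assuming a local instance and an existing database named `mydb` (both names are illustrative):

```shell
# Write one point in line protocol to database "mydb"
# (create it first with: CREATE DATABASE mydb).
curl -i -XPOST 'http://localhost:8086/write?db=mydb' \
  --data-binary 'disk_stats,host=server01 diskspace_used=42.5'

# Query it back; the result is returned as JSON.
curl -G 'http://localhost:8086/query' \
  --data-urlencode 'db=mydb' \
  --data-urlencode 'q=SELECT "diskspace_used" FROM "disk_stats"'
```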

The performance numbers are noteworthy given the amount of data InfluxDB is capable of handling. It can ingest up to a million data points per second, with timestamp precision down to the nanosecond.

SQL-like queries

If you’re familiar with SQL-like syntax, querying data from InfluxDB will feel familiar. It uses its own SQL-like syntax, InfluxQL, for queries. As an example, imagine you’re collecting data on used disk space on a machine. If you wanted to see that data, you could write a query that might look like this.

SELECT mean(diskspace_used) AS mean_disk_used
FROM disk_stats
WHERE time >= now() - 90d
GROUP BY time(10d)

If you’re familiar with SQL syntax, this won’t feel too different. The above statement pulls the mean values of used disk space over a three-month period and groups them into ten-day intervals.

Downsampling / data retention

When working with large amounts of data, storing it becomes a concern. Over time, it can accumulate to huge sizes. With InfluxDB, you can downsample data into less precise but smaller metrics that you can store for longer periods of time. Data retention policies enable you to do this.

For example, pretend you have sensors collecting data on the amount of RAM in use across a number of machines. You might collect metrics on the memory used by multiple users, the system, cached memory, and more. While it might make sense to hang on to that data for thirty days to watch what’s happening, after thirty days you might not need it to be that precise. Instead, you might only want the ratio of total memory to memory in use. Using data retention policies, you can tell InfluxDB to hang on to the precise data for all the different usages for thirty days. After thirty days, you can average the data into something less precise, and hold on to that for six months, forever, or however long you like. This strikes a balance between keeping historical data and reducing disk usage.
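A sketch of what this looks like in InfluxQL, using 1.x syntax; the database name `mydb`, the `mem` measurement, and the policy names are all illustrative:

```sql
-- Keep raw data for 30 days (the default policy for "mydb").
CREATE RETENTION POLICY "raw_30d" ON "mydb" DURATION 30d REPLICATION 1 DEFAULT

-- Keep roughly six months of downsampled data.
CREATE RETENTION POLICY "downsampled_6mo" ON "mydb" DURATION 26w REPLICATION 1

-- Continuously average memory usage into hourly points in the longer-lived policy.
CREATE CONTINUOUS QUERY "cq_mem_1h" ON "mydb" BEGIN
  SELECT mean("used") AS "mean_used"
  INTO "mydb"."downsampled_6mo"."mem"
  FROM "mem"
  GROUP BY time(1h)
END
```

Raw points expire after 30 days; the hourly averages written by the continuous query live on for 26 weeks.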


Telegraf

If InfluxDB is where all of your data is going, you need a way to collect and gather that data first. Telegraf is a metric collection daemon that gathers metrics from system components, IoT sensors, and more. It’s open source and written completely in Go. Like InfluxDB, Telegraf is written by the InfluxData team and is built to work with InfluxDB. It also includes support for different databases, such as MySQL / MariaDB, MongoDB, Redis, and more. You can read more about it on InfluxData’s website.

Telegraf is modular and heavily based on plugins. This means that Telegraf is either lean and minimal or as full and complex as you need it. Out of the box, it supports over a hundred plugins for various input sources. This includes Apache, Ceph, Docker, IPTables, Kubernetes, NGINX, and Varnish, just to name a few. You can see all the plugins, including processing and output plugins in their README.

Even if you’re not using InfluxDB as a data store, you may find Telegraf useful as a way to collect this data and information about your systems or sensors.
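A minimal telegraf.conf sketch gives a feel for the plugin model; the plugin names below are real, while the interval and database name are illustrative choices:

```toml
# Collect metrics every 10 seconds.
[agent]
  interval = "10s"

# Input plugins: per-core CPU stats and disk usage.
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.disk]]

# Output plugin: write everything to a local InfluxDB database.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```

Adding another data source is typically just another `[[inputs.*]]` table.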


Kapacitor

Now we have a way to collect and store our data. But what about doing things with it? Kapacitor is the piece of the stack that lets you process and work with the data in a few different ways. It supports both stream and batch data. Stream data means you can actively work with and shape the data in real time, even before it reaches your data store. Batch data means you retroactively perform actions on samples, or batches, of the data.

One of the biggest pluses for Kapacitor is that it enables you to have real-time alerts for events happening in your environment. CPU usage overloading or temperatures too high? You can set up several different alert systems, including but not limited to email, triggering a command, Slack, HipChat, OpsGenie, and many more. You can see the full list in the documentation.
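A sketch of what such an alert looks like in Kapacitor's TICKscript language; the `cpu` measurement and the Slack destination are assumptions (Slack credentials would live in kapacitor.conf):

```
// Alert when idle CPU drops below 10% on any host in the stream.
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10.0)
        .message('CPU almost saturated on {{ index .Tags "host" }}')
        .slack()
```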

Like the previous tools, Kapacitor is also open source and you can read more about the project in their README.

Installing the TICK stack

Packages are available for nearly every distribution. You can install these packages from the command line. Use the instructions for your distribution.


Fedora

sudo dnf install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm

CentOS 7 / RHEL 7

sudo yum install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm

Ubuntu / Debian

wget https://dl.influxdata.com/influxdb/releases/influxdb_1.3.1_amd64.deb \
https://dl.influxdata.com/telegraf/releases/telegraf_1.3.4-1_amd64.deb \
https://dl.influxdata.com/kapacitor/releases/kapacitor_1.3.1_amd64.deb
sudo dpkg -i influxdb_1.3.1_amd64.deb telegraf_1.3.4-1_amd64.deb kapacitor_1.3.1_amd64.deb

Other distributions

For help with other distributions, see the Downloads page.

See the data, be the data

Now that you have the tools installed, you can start experimenting with them. There’s plenty of upstream documentation on all three projects. You can find the docs here:

Additionally, for more help, you can visit the InfluxData community forums. Happy hacking!

The post Introducing InfluxDB: Time-series database stack appeared first on Justin W. Flory's Blog.

Calling all UX peeps

Posted by Suzanne Hillman (Outreachy) on August 15, 2017 01:27 AM

Yesterday I mentioned a discussion I was involved with on Facebook in which someone on the board of UXPA Boston suggested that I could organize a program for UX newbies and career changers.

I’m really pleased by this idea, and very glad she suggested it. However, before I bring my ideas to the board and get advice and help, I want to have slightly more clue than I currently have.

So, research!

The best way I can think of to get more clue is to talk to people in the UX space. I’d like to talk to other people who are new, people who do the hiring, and people who are working in UX with other UX team members.

UX Job Seekers

Based on my instincts and some of the suggestions on the FB discussion, I suspect people trying to get into UX full-time struggle with:

  1. Getting experience
  2. How to best structure their portfolio and resume
  3. Becoming known to companies

Some off-the-cuff ideas of ways to help with these:

  1. Internships, co-ops, programs like Outreachy, Google Summer of Code, or Akamai's Technical Academy, mentorship, apprenticeship, small multi-person design projects, and UX hackathons
  2. Finding mentors, having get-togethers to review portfolios and resumes (among each other), and developing sustainable ways to get feedback from hiring managers
  3. Things that I listed in option #1, company visits, and informational interviews

UX Hiring Managers

I have a lot of interesting ideas above, but I would need to know more about what hiring managers are looking for to understand what would be most useful.

For example, in an ideal world, what do hiring managers want to see from candidates? What would be most useful to determine if they want to take a chance on someone? What do they want to see them do, have done, or be interested in doing? What do they _not_ want to see? What do they struggle with figuring out, but very much want in their employees?

People currently on UX teams

Of course, not only do I need to know what hiring managers look for, but I’d like to better understand what people look for in their co-workers.

Such as, what do UXers find most useful when working with other UXers? What do they especially dislike? How well do their hiring practices seem to tease these out? What do you most appreciate in your co-workers?

How can you help?

If you are in UX, or trying to get into UX, talk to me! Comment or email me!

Reading the system journal from the last boot with systemd

Posted by Jean-Baptiste Holcroft on August 14, 2017 10:00 PM

Writing a bug report is a good thing, but getting at the system logs is not always obvious…

Thanks to this article on systemd, I understood how to find the log entries associated with each boot of the machine. Since I have been struggling without this for several years, I …
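For reference, the journalctl invocations this boils down to (standard systemd options, not quoted from the linked article):

```shell
# List the boots journald knows about (index, boot ID, time range).
journalctl --list-boots

# Show the journal from the previous boot. This requires persistent
# journald storage (Storage=persistent in /etc/systemd/journald.conf).
journalctl -b -1

# Only messages of priority "err" and worse from the previous boot.
journalctl -b -1 -p err
```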

Fedora Classroom Session 4

Posted by Fedora Magazine on August 14, 2017 08:43 PM

The Fedora Classroom sessions continues this week. You can find the general schedule for sessions on the wiki. You can also find resources and recordings from previous sessions there.

Here are details about this week’s session on Friday, August 18 at 1300 UTC.


Eduard Lucena is an IT Engineer and an Ambassador from the LATAM region. He started working with the community by publishing a simple article in the Magazine. Right now he actively works in the Marketing group and aims to be a FAmSCO member for the Fedora 26 release. He works in the telecommunication industry and uses the Fedora Cinnamon Spin as his main desktop, both at work and home. He isn’t a mentor, but tries to on-board people into the project by teaching them how to join the community in any area. His motto is: “Not everything is about the code.”

Topic: Vim 101

Like many classic utilities developed during UNIX’s early years, vi has a reputation for being hard to navigate. Bram Moolenaar’s enhanced and optimized clone, Vim (“vi Improved”), is the default editor in almost all UNIX-like systems. The world has come a long way since Vim was written. Even though system resources have grown, many still stick with the Vim editor, including Fedora.

This hands-on session will teach you about the different Vim versions packaged in Fedora. Then, we’ll go deeper into how to use this powerful tool. We’ll also teach you how not to flounder trying to close the editor!
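As a teaser, the survival commands the session will demystify include these standard Vim commands, entered from normal mode (press Esc first if you are in insert mode):

```
i         " enter insert mode to start typing
:w        " write (save) the file
:q        " quit
:wq       " write and quit
:q!       " quit, discarding unsaved changes
```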

Joining the session

Since this is a hands-on session, you’ll want to have a Linux installation to follow it properly. Preferably you’ll have Vim installed with full features. If you don’t have it, don’t worry — you’ll learn how to install it and what the differences are. No prior knowledge of the Vim editor is required.

This session will be held via IRC. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people that work in the Fedora Project.

Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

Downloading all the 78rpm rips at the Internet Archive

Posted by Richard W.M. Jones on August 14, 2017 08:43 PM

I’m a bit of a fan of 1930s popular music on gramophone records, so much so that I own an original early-30s gramophone player and an extensive collection of discs. So the announcement that the Internet Archive had released a collection of 29,000 records was pretty amazing.

[Edit: If you want a light introduction to this, I recommend this double CD]

I wanted to download it … all!

But apart from this gnomic explanation it isn’t obvious how, so I had to work it out. Here’s how I did it …

Firstly you do need to start with the Advanced Search form. Using the second form on that page, in the query box put collection:georgeblood, select the identifier field (only), set the format to CSV. Set the limit to 30000 (there are about 25000+ records), and download the huge CSV:

$ ls -l search.csv
-rw-rw-r--. 1 rjones rjones 2186375 Aug 14 21:03 search.csv
$ wc -l search.csv
25992 search.csv
$ head -5 search.csv

A bit of URL exploration found a fairly straightforward way to turn those identifiers into directory listings. For example:


What I want to do is pick the first MP3 file in the directory and download it. I’m not fussy about how to do that, and Python has both a CSV library and an HTML fetching library. This turns the CSV file of links into a list of MP3 URLs. You could easily adapt this to download FLAC files instead.


import csv
import re
import urllib2
import urlparse
from BeautifulSoup import BeautifulSoup

with open('search.csv', 'rb') as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in csvreader:
        if row[0] == "identifier":
            # Skip the CSV header row.
            continue
        url = "https://archive.org/download/%s/" % row[0]
        page = urllib2.urlopen(url).read()
        soup = BeautifulSoup(page)
        links = soup.findAll('a', attrs={'href': re.compile("\.mp3$")})
        # Only want the first link in the page.
        link = links[0]
        link = link.get('href', None)
        link = urlparse.urljoin(url, link)
        print link

When you run this it converts each identifier into a download URL:

Edit: Amusingly WordPress turns the next pre section with MP3 URLs into music players. I recommend listening to them!

$ ./download.py | head -10
https://archive.org/download/78_jeannine-i-dream-of-you-lilac-time_bar-harbor-society-orch.-irving-kaufman-shilkr_gbia0010841b/Jeannine%20I%20Dream%20Of%20You%20%22Lilac%20%20-%20Bar%20Harbor%20Society%20Orch..mp3
https://archive.org/download/78_a-prisoners-adieu_jerry-irby-modern-mountaineers_gbia0000549b/A%20Prisoner%27s%20Adieu%20-%20Jerry%20Irby%20-%20Modern%20Mountaineers.mp3
https://archive.org/download/78_if-i-had-the-heart-of-a-clown_bobby-wayne-joe-reisman-rollins-nelson-kane_gbia0004921b/If%20I%20Had%20The%20Heart%20of%20A%20Clown%20-%20Bobby%20Wayne.mp3
https://archive.org/download/78_how-many-times-can-i-fall-in-love_patty-andrews-and-tommy-dorsey-victor-young-an_gbia0013066b/How%20Many%20Times%20%28Can%20I%20Fal%20-%20Patty%20Andrews%20And%20Tommy%20Dorsey.mp3
https://archive.org/download/78_ill-forget-you_alan-dean-ball-burns-joe-lipman_gbia0002540a/I%27ll%20Forget%20You%20-%20Alan%20Dean%20-%20Ball%20-%20Burns.mp3
https://archive.org/download/78_it-aint-gonna-rain-no-mo-ya-no-va-a-llover_international-novelty-orchestra-wend_gbia0014114a/It%20Ain%27t%20Gonna%20Rain%20No%20M%20-%20International%20Novelty%20Orchestra.mp3
https://archive.org/download/78_i-still-keep-dreaming_leroy-holmes-and-his-orchestra-sourwine-johnny-corva_gbia0004815b/I%20Still%20Keep%20Dreaming%20-%20Leroy%20Holmes%20and%20his%20Orchestra.mp3
https://archive.org/download/78_it-aint-nobodys-bizness_lulu-belle--scotty-browne-sampsel-markowitz_gbia0010017a/It%20Ain%27t%20Nobody%27s%20Bizness%20-%20Lulu%20Belle%20%26%20Scotty.mp3
<audio class="wp-audio-shortcode" controls="controls" id="audio-7432-19" preload="none" style="width: 100%;"><source src="https://archive.org/download/78_i-still-get-a-thrill-thinking-of-you_art-lund-johnny-thompson-coots-davis_gbia0002767a/I%20Still%20Get%20A%20Thrill%20%28Thinking%20Of%20You%29%20-%20Art%20Lund.mp3?_=19" type="audio/mpeg">https://archive.org/download/78_i-still-get-a-thrill-thinking-of-you_art-lund-johnny-thompson-coots-davis_gbia0002767a/I%20Still%20Get%20A%20Thrill%20%28Thinking%20Of%20You%29%20-%20Art%20Lund.mp3</audio>
<audio class="wp-audio-shortcode" controls="controls" id="audio-7432-20" preload="none" style="width: 100%;"><source src="https://archive.org/download/78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a/In%20The%20Gloaming%20-%20Art%20Hickman%27s%20Orchestra.mp3?_=20" type="audio/mpeg">https://archive.org/download/78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a/In%20The%20Gloaming%20-%20Art%20Hickman%27s%20Orchestra.mp3</audio>

And after that you can download as many 78s as you can handle 🙂 by doing:

$ ./download.py > downloads
$ wget -nc -i downloads


I only downloaded about 5% of the tracks, but it looks as if downloading everything would come to roughly 100 GB. Also, most of these tracks are still in copyright (thanks to insane copyright terms), so they may not be suitable for sampling on your next gramophone-rap record.
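For the curious, the URL-building part of a script like download.py can be sketched in Python. This is a hypothetical helper, not the actual script; the /download/&lt;identifier&gt;/&lt;filename&gt; URL layout is an assumption inferred from the track links embedded above:

```python
from urllib.parse import quote

def item_download_url(identifier: str, filename: str) -> str:
    """Build an archive.org download URL for one file in an item.
    The /download/<identifier>/<filename> layout matches the links
    embedded above; treat it as an assumption, not an official API."""
    return f"https://archive.org/download/{identifier}/{quote(filename)}"

# Hypothetical example, mirroring one of the tracks above:
url = item_download_url(
    "78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a",
    "In The Gloaming - Art Hickman's Orchestra.mp3",
)
print(url)
```

A real download.py would print one such URL per line, which is exactly the list format `wget -nc -i downloads` expects.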

Update #2

Don’t forget to donate to the Internet Archive. I gave them $50 to continue their excellent work.

Servidor LAMP en Fedora 26

Posted by Ivan Fernandez Cid on August 14, 2017 08:12 PM
Setting up a LAMP server on Fedora is a fairly simple task; below I describe how to do it. Install the web server (Apache httpd) the easy way (run all of these commands as root): dnf groupinstall "Web Server". If this shows version errors (workstation, nonproduct), use: dnf groupinstall "Web Server" --skip-broken. Then the MariaDB (MySQL) server: dnf
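The excerpt above is cut off mid-sentence, so as a sketch only, a minimal command sequence of the kind it describes might look like this. Package and service names are the usual Fedora ones, but treat the exact set as an assumption; run as root:

```shell
# Web server group (add --skip-broken if version errors appear)
dnf -y groupinstall "Web Server"

# MariaDB (the MySQL-compatible server) and PHP support
dnf -y install mariadb-server php

# Start both services now and enable them at boot
systemctl enable --now httpd mariadb
```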

RHEL 7.4 multimedia packages and Skype repository removal

Posted by Simone Caronni on August 14, 2017 06:33 PM

The upgrade path from Red Hat Enterprise Linux 7.3 to 7.4 is a bit of a pain if you have the multimedia repository configured. This is because I’m rebuilding a few components for an upgraded libwebp package, and because a lot of stuff has been rebased to the versions that are in Fedora. Judging by the logs, most of the downloads come from CentOS systems, so I decided to hold back some of the updates required for the various package rebases for Red Hat Enterprise Linux 7.4. Until CentOS also releases version 7.4, I can’t make everyone happy, and some things (like the GStreamer plugin updates) will be stuck at their 7.3 versions. Hopefully the new CentOS release will come quickly enough.

Also, I decided to stop rebuilding the base packages against a newer libwebp version. This really had very few benefits and caused a lot of pain due to the huge number of packages involved in both the x86_64 and i686 variants. The packages affected weigh in at around 3 GB.

In RHEL 7.4 there are additional WebKit variants that would also require a rebuild. So, as of today, to update the packages from the EPEL 7 multimedia repository you should run this command:

rpm -e --nodeps GraphicsMagick && yum distro-sync && yum -y install GraphicsMagick

Hopefully you will get output similar to this:

Dependencies Resolved

 Package                         Arch        Version                  Repository         Size
 compat-ffmpeg-libs              x86_64      1:2.8.12-2.el7           epel-multimedia   5.6 M
 ffmpeg                          x86_64      1:3.3.3-2.el7            epel-multimedia   1.5 M
 ffmpeg-libs                     i686        1:3.3.3-2.el7            epel-multimedia   6.1 M
 ffmpeg-libs                     x86_64      1:3.3.3-2.el7            epel-multimedia   6.3 M
 gstreamer1-plugins-bad          x86_64      1:1.4.5-5.el7            epel-multimedia   1.8 M
 libavdevice                     x86_64      1:3.3.3-2.el7            epel-multimedia    63 k
 leptonica                       i686        1.72-2.el7               epel-multimedia   881 k
 leptonica                       x86_64      1.72-2.el7               epel              928 k
 libwebp                         i686        0.3.0-3.el7              base              169 k
 libwebp                         x86_64      0.3.0-3.el7              base              170 k
 lz4                             x86_64      1.7.3-1.el7              epel               82 k
 python-pillow                   x86_64      2.0.0-19.gitd1c6db8.el7  base              438 k
 webkitgtk                       x86_64      2.4.9-1.el7              epel               12 M
 webkitgtk3                      x86_64      2.4.9-6.el7              base               11 M
Installing for dependencies:
 libwebp0.6                      i686        0.6.0-1.el7              epel-multimedia   255 k
 libwebp0.6                      x86_64      0.6.0-1.el7              epel-multimedia   250 k

Transaction Summary
Install               ( 2 Dependent packages)
Upgrade    6 Packages
Downgrade  8 Packages

Total download size: 47 M
Is this ok [y/d/N]:

Basically, libwebp should again come from the main CentOS/RHEL channels and the libwebp0.6 package should come from the multimedia repository. All the packages that were rebuilt for the previous libwebp 0.5 update should be synced back to their proper versions.

If you don’t get this output but still get some dependency errors, you have to do some debugging. For example, ffmpeg-libs.i686 requires libssh.i686, but the version of libssh in CentOS extras is different from the one in EPEL (it really depends on which packages you have installed and which repositories are enabled), so I’m providing here the same version that is in CentOS extras, in both variants.

Update 16th August 2017

If you get many qt5 errors during the transactions, keep in mind that RHEL 7.4 rebased a lot of packages, and everyone else (including EPEL) is catching up. As of today, if you have the following errors (trimmed down) in a Yum transaction:

Error: Package: gvfs-1.30.4-3.el7.x86_64 (rhel-x86_64-server-7)
Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: GraphicsMagick-1.3.26-3.el7.x86_64 (@epel-multimedia)
Error: Package: kf5-kdeclarative-5.36.0-1.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Transaction check error:
  file /usr/lib64/gstreamer-1.0/libgstopus.so from install of gstreamer1-plugins-bad-1:1.4.5-5.el7.x86_64 conflicts with file from package gstreamer1-plugins-base-1.10.4-1.el7.x86_64

You can do the following. For this error:

Error: Package: GraphicsMagick-1.3.26-3.el7.x86_64 (@epel-multimedia)


rpm -e --nodeps GraphicsMagick && yum -y install GraphicsMagick

All of the Qt 5/KDE stuff:

Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: kf5-kdeclarative-5.36.0-1.el7.x86_64 (epel)

are in the EPEL testing updates, so:

yum --enablerepo=epel-testing update


Error: Package: gvfs-1.30.4-3.el7.x86_64 (rhel-x86_64-server-7)

Transaction check error:
  file /usr/lib64/gstreamer-1.0/libgstopus.so from install of gstreamer1-plugins-bad-1:1.4.5-5.el7.x86_64 conflicts with file from package gstreamer1-plugins-base-1.10.4-1.el7.x86_64

are some of the packages that are rebased in RHEL 7.4. I’ve created a temporary repository for those, it will disappear once CentOS 7.4 is released as the packages will be integrated in the main multimedia repository. You can install it through:

yum-config-manager \

With the above repository it is possible to install all the other multimedia packages.

Skype repository removal

Skype 4.3 is 32-bit only; it is now obsolete and has been superseded by a package that actually lists proper dependencies. It is also one of the packages that required one of the above WebKit rebuilds in i686 form for RHEL/CentOS 7 x86_64.

If you have it installed, just remove it with:

yum remove webkitgtk.i686

The repository has been deleted; to install the new Skype provided version, just head to the following official link.

Summer 2017 Red Hat Intern Expo

Posted by Mary Shakshober on August 14, 2017 03:34 PM

Now wrapping up summer #2 as a Red Hat intern, I found the 2017 Intern Expo to be a relatively familiar environment. This year, the event for the Boston/Westford interns was held in the Westford office on August 17th, in the same “classic middle school science fair” manner as 2016. This year, though, I came prepared with visuals, visuals, and yes, more visuals (I’m a graphic designer; it’s in my blood)! I created a site from scratch, which I had been working on in small bits and pieces throughout the summer, consisting of tutorials for getting involved in the Fedora Design-Team and Fedora-Badges groups, Fedora style basics, and a library of my entire summer of work. My original hope was to build the site using Fedora Bootstrap, but because of time constraints the static-HTML-to-Bootstrap conversion didn’t happen. Because I don’t have hosting for this site and cannot attach zip folders here, I’ve attached screenshots of the site!

My setup overall was my website running on my laptop as well as printouts of more of my print media designs for easy viewing. It was great to see a few familiar faces, to show fellow Red Hatters my adventures through Fedora designing, and to see what other interns have been up to throughout the course of the summer.

Tootaloo summer 2017 *insert a royalty wave here* 🙂


FESCo Elections: Interview with Till Maas (till)

Posted by Fedora Community Blog on August 14, 2017 01:07 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

Fedora Engineering Steering Council badge

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Tuesday, August 8th and closes promptly at 23:59:59 UTC on Monday, August 14th. Please read the responses from candidates and make your choices carefully. Feel free to ask questions to the candidates here (preferred) or elsewhere!

Interview with Till Maas (till)

  • Fedora Account: till
  • IRC: tyll (found in #fedora-releng #fedora #fedora-devel #fedora-admin #fedora-apps  #fedora-social #fedora-de  #epel )
  • Fedora User Wiki Page


What is your background in engineering?

Linux has been my favourite operating system since 1999, when I got my first PC as a pupil. I started with SuSE 6.0 back then, switched to Gentoo and tried Ubuntu. In 2005 I tried Fedora Core 4. Thanks to the welcoming Fedora community I quickly became a contributor, starting as a packager. Nowadays I am a sponsor and provenpackager, help release engineering with cleanup tasks and occasionally patch something in Fedora infrastructure. My Open Hub profile contains an overview of most of my FLOSS contributions in general: https://www.openhub.net/accounts/tillmaas

Formally, I hold the degree of Diplom-Informatiker (equivalent to a Master of Science in Computer Science) from RWTH Aachen University, Germany. In my day job I work as a penetration tester.

Why do you want to be a member of FESCo?

I would like to use my skills, knowledge and experience to help Fedora continue to excel as a great FLOSS project.

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

The modularity initiative and the introduction of flatpaks into Fedora introduce new challenges in ensuring that our users get timely security updates. As a penetration tester I have a strong security background and as a packager I know the struggles in preparing upstream releases as consumable Fedora packages.

What are three personal qualities that you feel would benefit FESCo if you are elected?

  • I like to learn new technologies and therefore become quickly familiar with them. This will help me to quickly understand change requests and their implications.
  • I have an eye for detail and often see or find connections, implications and issues that others miss. Therefore I will make well-founded decisions.
  • I am constantly trying to improve and therefore am open to change and see mistakes as an opportunity to learn. As a leading Linux distribution it is important for Fedora to introduce new technologies.

What is your strongest point as a candidate? What is your weakest point?

I am a long-time Fedora contributor and have contributed to several groups and projects in Fedora, so I have good insight into many details. Since I contribute to Fedora in my free time, time might be an issue.

Currently, how do you contribute to Fedora? How does that contribution benefit the community?

I am a packager and help with release engineering and infrastructure projects. My focus is primarily on making Fedora more secure and making it easier to contribute to Fedora. On a non-technical level, I represent Fedora as an Ambassador at conferences.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

In my opinion it is important to make contributing to Fedora as easy as possible. The tools to contribute to Fedora should be as straightforward as possible. This is, for example, the reason I wrote fedora-easy-karma: it streamlined the process of submitting feedback about package updates. The less time we spend on our tools, the more time we have to focus on the quality of the products we deliver.

Do you believe the Modularity objective is important to Fedora’s success? Is there anything you wish to bring to the modularity efforts?

Yes, I believe the Modularity objective is a great framework for Fedora to try new paths and add more value to the individual products. For me it is important to keep security and usability in mind.

What is the air-speed velocity of an unladen swallow?

It depends on the bikeshed it is flying over – what color is it?

Closing words

Thank you for your time reading this. Please do not forget to vote!

The post FESCo Elections: Interview with Till Maas (till) appeared first on Fedora Community Blog.

Slice of Cake #18

Posted by Brian "bex" Exelbierd on August 14, 2017 10:00 AM

A slice of cake

In the last week as FCAIC I:

  • So much Flock with the fantastic help of Kristyna, Jen, Stephen and the entire team.
  • Docs work continues onward. We should begin the staging this week.

À la mode

  • Finally moved my homedir and parts of my setup to a new F26 laptop. So far my compose key is broken and my masochistic insistence on only doing setup via Ansible is slowing me down :).

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Benchmarking small file performance on distributed filesystems

Posted by Jonathan Dieter on August 14, 2017 07:41 AM

The actual benches

As I mentioned in my last post, I’ve spent the last couple of weeks doing benchmarks on the GlusterFS, CephFS and LizardFS distributed filesystems, focusing on small file performance. I also ran the same tests on NFSv4 to use as a baseline, since most Linux users looking at a distributed filesystem will be moving from NFS.

The benchmark I used was compilebench, which was designed to emulate real-life disk usage by creating a kernel tree, simulating a compile of the tree, reading all the files in the tree, and finally deleting the tree. I chose this benchmark because it does a lot of work with small files, very similar to what most file access looks like in our school. I did modify the benchmark to only do one read rather than the default of three to match the single creation, compilation simulation and deletion performed on each client.

The benchmarks were run on three i7 servers with 32GB of RAM, connected using a gigabit switch, running CentOS 7. GlusterFS is version 3.8.14, CephFS is version 10.2.9, and LizardFS is version 3.11.2. For GlusterFS, CephFS and LizardFS, the three servers operated as distributed data servers with three replicas per file. I first had one server connect to the distributed filesystem and run the benchmark, giving us the single-client performance. Then, to emulate 30 clients, each server made ten connections to the distributed filesystem and ten copies of the benchmark were run simultaneously on each server.

For the NFS server, I had to do things differently because there are apparently some major problems with connecting NFS clients to an NFS server on the same system. For this one, I set up a fourth server that operated solely as an NFS server.

All of the data was stored on XFS partitions on SSDs for speed. After running the benchmarks with one distributed filesystem, it was shut down and its data deleted, so each distributed filesystem had the same disk space available to it.

The NFS server was set up to export its shares async (also for speed). The LizardFS clients used the recommended mount options, while the other clients just used the defaults (I couldn’t find any recommended mount options for GlusterFS or CephFS). CephFS was mounted using the kernel module rather than the FUSE filesystem.

So, first up, let’s look at single-client performance (click for the full-size chart):

Initial creation didn’t really have any surprises, though I was really impressed with CephFS’s performance. It came really close to matching the performance of the NFS server. Compile simulation also didn’t have many surprises, though CephFS seemed to start hitting performance problems here. LizardFS initially surprised me in the read benchmark, though I realized later that the LizardFS client will prioritize a local server if the requested data is on it. I have no idea why NFS was so slow, though. I was expecting NFS reads to be the fastest. LizardFS also did really well with deletions, which didn’t surprise me too much. LizardFS was designed to make metadata operations very fast. GlusterFS, which did well through the first three benchmarks, ran into trouble with deletions, taking almost ten times longer than LizardFS.

Next, let’s look at multiple-client performance. With these tests, I ran 30 clients simultaneously, and, for the first three tests, summed up their speeds to give me the total speed that the server was giving the clients. CephFS ran into problems during its test, claiming that it had run out of disk space, even though (at least as far as I could see) it was only using about a quarter of the space on the partition. I went ahead and included the numbers generated before the crash, but I would take them with a grain of salt.
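The aggregation step described above is simply a sum of per-client throughputs; a trivial sketch (the client numbers here are hypothetical, not the measured results):

```python
def aggregate_speed(per_client_mb_s):
    """Sum per-client throughput (MB/s) to get the total speed the
    server delivered, as done for the create/compile/read tests."""
    return sum(per_client_mb_s)

# Hypothetical 30-client run: each client averaged 1.5 MB/s.
print(aggregate_speed([1.5] * 30))  # → 45.0
```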

Once again, initial creation didn’t have any major surprises, though NFS did really well, giving much better aggregate performance than it did in the earlier single-client test. LizardFS also bettered its single-client speed, while GlusterFS and CephFS both were slower creating files for 30 clients at the same time.

LizardFS started to do very well with the compile benchmark, with an aggregate speed over double that of the other filesystems. LizardFS flew with the read benchmark, though I suspect some of that is due to the client preferring the local data server. GlusterFS managed to beat NFS, while CephFS started running into major trouble.

The delete benchmark seemed to be a continuation of the single-client delete benchmark with LizardFS leading the way, NFS just under five times slower, and GlusterFS over 25 times slower. The CephFS benchmarks had all failed by this point, so there’s no data for it.

I would be happy to re-run these tests if someone has suggestions on optimizations especially for GlusterFS and CephFS.

Installing FreeIPA with an Active Directory subordinate CA

Posted by Fraser Tweedale on August 14, 2017 06:04 AM

FreeIPA is often installed in enterprise environments for managing Unix and Linux hosts and services. Most commonly, enterprises use Microsoft Active Directory for managing users, Windows workstations and Windows servers. Often, Active Directory is deployed with Active Directory Certificate Services (AD CS) which provides a CA and certificate management capabilities. Likewise, FreeIPA includes the Dogtag CA, and when deploying FreeIPA in an enterprise using AD CS, it is often desired to make the FreeIPA CA a subordinate CA of the AD CS CA.

In this blog post I’ll explain what is required to issue an AD sub-CA, and how to do it with FreeIPA, including a step-by-step guide to configuring AD CS.

AD CS certificate template overview

AD CS has a concept of certificate templates, which define the characteristics an issued certificate shall have. The same concept exists in Dogtag and FreeIPA except that in those projects we call them certificate profiles, and the mechanism to select which template/profile to use when issuing a certificate is different.

In AD CS, the template to use is indicated by an X.509 extension in the certificate signing request (CSR). The template specifier can be one of two extensions. The first, older extension has its own OID and allows you to specify a template by name:

CertificateTemplateName ::= SEQUENCE {
   Name            BMPString
}

(Note that some documents specify UTF8String instead of BMPString. BMPString works and is used in practice. I am not actually sure if UTF8String even works.)

The second, Version 2 template specifier extension has its own OID and allows you to specify a template by OID and version:

CertificateTemplate ::= SEQUENCE {
    templateID              EncodedObjectID,
    templateMajorVersion    TemplateVersion,
    templateMinorVersion    TemplateVersion OPTIONAL
}

TemplateVersion ::= INTEGER (0..4294967295)

Note that some documents also show templateMajorVersion as optional, but it is actually required.

When submitting a CSR for signing, AD CS looks for these extensions in the request, and uses the extension data to select the template to use.
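As a concrete illustration, the name-based specifier is just a DER-encoded SEQUENCE wrapping a BMPString, per the ASN.1 above. Here is a minimal, dependency-free Python sketch of that encoding; the "SubCA" name is the default sub-CA template name, and the hand-rolled DER (short-form lengths only) is my own illustration rather than code from FreeIPA:

```python
def der_len(n: int) -> bytes:
    # Short-form DER length is enough for template names under 128 bytes.
    assert n < 128
    return bytes([n])

def template_name_extension_value(name: str) -> bytes:
    """DER-encode CertificateTemplateName ::= SEQUENCE { Name BMPString }.
    BMPString is big-endian UTF-16 (tag 0x1E); SEQUENCE has tag 0x30."""
    bmp = name.encode("utf-16-be")
    bmpstring = bytes([0x1E]) + der_len(len(bmp)) + bmp
    return bytes([0x30]) + der_len(len(bmpstring)) + bmpstring

print(template_name_extension_value("SubCA").hex())
# → 300c1e0a00530075006200430041
```

The resulting octets would form the extension value in the CSR; AD CS decodes them to look up the named template.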

External CA installation in FreeIPA

FreeIPA supports installation with an externally signed CA certificate, via ipa-server-install --external-ca or (for existing CA-less installations) ipa-ca-install --external-ca. The installation takes several steps. First, a key is generated and a CSR is produced:

$ ipa-ca-install --external-ca

Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-ca-install as:
/sbin/ipa-ca-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate

The installation program exits while the administrator submits the CSR to the external CA. After they receive the signed CA certificate, the administrator resumes the installation, giving the installation program the CA certificate and a chain of one or more certificates up to the root CA:

$ ipa-ca-install --external-cert-file ca.crt --external-cert-file ipa.crt
Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/29]: configuring certificate server instance
  [29/29]: configuring certmonger renewal for lightweight CAs
Done configuring certificate server (pki-tomcatd).

Recall, however, that if the external CA is AD CS, a CSR must bear one of the certificate template specifier extensions. There is an additional installation program option to add the template specifier:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs

This adds a name-based template specifier to the CSR, with the name SubCA (this is the name of the default sub-CA template in AD CS).

Specifying an alternative AD CS template

Everything discussed so far is already part of FreeIPA. Until now, however, there has been no way to specify a different template to use with AD CS.

I have been working on a feature that allows an alternative AD CS template to be specified. Both kinds of template specifier extension are supported, via the new --external-ca-profile installation program option:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs \

(Note: huge OIDs like the above are commonly used by Active Directory for installation-specific objects.)

To specify a template by name, the --external-ca-profile value should be:


To specify a template by OID, the OID and major version must be given, and optionally the minor version too:


Like --external-ca and --external-ca-type, the new --external-ca-profile option is available with both ipa-server-install and ipa-ca-install.

With this feature, it is now possible to specify an alternative or custom certificate template when using AD CS to sign the FreeIPA CA certificate. The feature has not yet been merged, but there is an open pull request. I have also made a COPR build for anyone interested in testing the feature.

The remainder of this post is a short guide to configuring Active Directory Certificate Services, defining a custom CA profile, and submitting a CSR to issue a certificate.

Appendix A: installing and configuring AD CS

Assuming an existing installation of Active Directory, AD CS installation and configuration will take 10 to 15 minutes. Open Server Manager, invoke the Add Roles and Features Wizard and select the AD CS Certification Authority role:


Proceed, and wait for the installation to complete…


After installation has finished, you will see AD CS in the Server Manager sidebar, and upon selecting it you will see a notification that Configuration required for Active Directory Certificate Services.


Click More…, and up will come the All Servers Task Details dialog showing that the Post-deployment Configuration action is pending. Click the action to continue:


Now comes the AD CS Configuration assistant, which contains several steps. Proceed past the Specify credentials to configure role services step.

In the Select Role Services to configure step, select Certification Authority then continue:


In the Specify the setup type of the CA step, choose Enterprise CA then continue:


The Specify the type of the CA step lets you choose whether the AD CS CA will be a root CA or chained to an external CA (just like FreeIPA lets you create a root or a subordinate CA!). Installing AD CS as a Subordinate CA is outside the scope of this guide. Choose Root CA and continue:


The next step lets you Specify the type of the private key. You can use an existing private key or Create a new private key, then continue.

The Specify the cryptographic options step lets you specify the Key length and hash algorithm for the signature. Choose a key length of at least 2048 bits, and the SHA-256 digest:


Next, Specify the name of the CA. This sets the Subject Distinguished Name of the CA. Accept defaults and continue.

The next step is to Specify the validity period. CA certificates (especially root CAs) typically need a long validity period. Choose a value like 5 Years, then continue:


Accept defaults for the Specify the database locations step.

Finally, you will reach the Confirmation step, which summarises the chosen configuration options. Review the settings then Configure:


The configuration will take a few moments, then the Results will be displayed:


AD CS is now configured and you can begin issuing certificates.

Appendix B: creating a custom sub-CA certificate template

In this section we look at how to create a new certificate template for sub-CAs by duplicating an existing template, then modifying it.

To manage certificate templates, from Server Manager right-click the server and open the Certification Authority program:


In the sidebar tree view, right-click Certificate Templates then select Manage.


The Certificate Templates Console will open. The default profile for sub-CAs has the Template Display Name Subordinate Certification Authority. Right-click this template and choose Duplicate Template.


The new template is created and the Properties of New Template dialog appears, allowing the administrator to customise the template. You can set a new Template display name, Template name and so on:


You can also change various aspects of certificate issuance including which extensions will appear on the issued certificate, and the values of those extensions. In the following screenshot, we see a new Certificate Policies OID being defined for addition to certificates issued via this template:


Also under Extensions, you can discover the OID for this template by looking at the Certificate Template Information extension description.

Finally, having defined the new certificate template, we have to activate it for use with the AD CA. Back in the Certification Authority management window, right-click Certificate Templates and select Certificate Template to Issue:


This will pop up the Enable Certificate Templates dialog, containing a list of templates available for use with the CA. Select the new template and click OK. The new certificate template is now ready for use.

Appendix C: issuing a certificate

In this section we look at how to use AD CS to issue a certificate. It is assumed that the CSR to be signed exists and Active Directory can access it.

In the Certification Authority window, in the sidebar right-click the CA and select All Tasks >> Submit new request…:


This will bring up a file chooser dialog. Find the CSR and Open it:


Assuming all went well (including the CSR indicating a known certificate template), the certificate is immediately issued and the Save Certificate dialog appears, asking where to save the issued certificate.

radv on SI and CIK GPU - update

Posted by Dave Airlie on August 14, 2017 03:16 AM
I recently acquired an r7 360 (BONAIRE) and spent some time getting radv stable and passing the same set of conformance tests that VI and Polaris pass.

The main missing thing was 10-bit integer format clamping to work around a bug in the SI/CIK fragment shader output hardware, where it truncates instead of clamping. The other missing piece was code for handling f16->f32 conversions according to the Vulkan spec, which I'd previously fixed for VI.

I also looked at a trace from amdgpu-pro and noticed it was using a ds_swizzle for the derivative calculations, which avoided accessing LDS memory. I wrote support to use this path for radv/radeonsi, since LLVM has supported the intrinsic for a while now.

With these fixed CIK is pretty much in the same place as VI/Polaris.

I then plugged in my SI (Tahiti), and got lots of GPU hangs and crashes. I fixed a number of SI-specific bugs (tiling and MSAA handling, stencil tiling). However, even with those fixed I was getting random hangs, and a bunch of people on a bugzilla had noticed the same thing. I eventually discovered that adding a shader pipeline and cache flush at the end of every command buffer fixed the hangs (this took a few days to narrow down exactly). We aren't 100% sure why this is required on SI only; it may be a kernel bug or a command processor bug, but it does mean radv on SI can now run games without hanging.

There are still a few CTS tests outstanding on SI only, and I'll probably get to them eventually, however I also got an RX Vega and once I get a newer BIOS for it from AMD I shall be spending some time fixing the radv support for it.

Patternfly User Dropdown

Posted by Suzanne Hillman (Outreachy) on August 13, 2017 07:51 PM

I’m back with more about Patternfly’s navigation bar user dropdown.

More from developer

I’ve done a brief, remote, contextual interview with the developer who originally asked for this to be researched. With this, I confirmed a few things about what his concerns are:

  • Accessing items within a dropdown takes more time and more clicks than without one
  • Dropdowns can be extra slow to interact with when on a slow network connection, especially when animations are involved
  • It’s easier to remember where to go to get to menu items that are top-level rather than under a dropdown
  • Frequent use items need to be easily accessed and easily discovered

Discussion with patternfly UX researcher

I’ve started a conversation with the UX researcher at Patternfly, Sara Chizari. In large part, I wanted additional perspectives on the problem. I was also hoping to learn if there is existing research on this topic that I’d missed.

My inclination is that the major goal of this research is two-fold:

  1. In the specific case of the developer I’m working with, what are the best guidelines for when to use (or avoid) navigation bar dropdowns?
  2. In general, we need guidelines for the use of dropdowns.

I expect that these will also change with the display screen size: limited space constrains what can be at the top level.

How do we figure this out?

I’m not yet certain of the best way to go about figuring this out, which is part of what I’m discussing with Sara.

In this particular case, the dropdown is not expected to contain high-use items. While useful, I wouldn’t expect things like ‘settings’ and ‘log out’ to come up during the course of everyday use of an application or webpage. It’s difficult to be sure what other categories of items people are likely to want to use here, but the real-world examples I have are suggestive:


It looks like basically everyone includes settings and sign (or log) out. Many also include help. Of these, I would expect that sign out would be highest use, especially for those folks who access the applications on computers that are not their own.

Because these won’t be high use items, I’m not yet sure how best to create tasks for people to do during a usability session. I don’t think I want to overemphasize actions that they might not otherwise do, as it’ll make it somewhat difficult to identify the highest use items. At the same time, I need to have people try different prototypes of the menu and menu area to see how they turn out in practice.

What do I think so far?

My instinct suggests that we will specifically want to test the usability of a few different things:

  • Dropdown of 3 or fewer items vs not being in a dropdown.
  • Logout, settings, and/or help being inside or outside of a dropdown.
  • Mobile vs tablet vs computer monitor

These feel like they will address the ‘dropdown vs no dropdown’ item-count cutoff point on various screen sizes, as well as the specific menu items that I believe see the most frequent use.

I may want to identify the most used items in those dropdowns, before I go into more specific testing as per the above list. I’m not yet certain of the best way to approach this, however.

Now what?

Patternfly dropdown

Sara will be doing some literature research this coming week, and will then be busy until mid-Sept on her own projects. I’m hoping to figure out the kinds of things to be testing with the aim of starting usability sessions in September.

UX Newbies and career changers group?

In the meantime, due to conversations with the local UXPA group on Facebook, I’ve started investigating both problems and potential solutions facing UX newbies and career changers within the Boston area. The major goal here will be to figure out what types of things are interfering with getting new people into UX jobs, coming up with concrete things to do about them, and figuring out how to make those things available to people locally. I’d love additional perspectives and ideas, since I am only one of many folks trying to get into UX, and will definitely not have thought of all the obstacles (or possible solutions!).

But that's not my job!

Posted by Josh Bressers on August 13, 2017 07:45 PM
This week I've been thinking about how security people and non security people interact. Various conversations I have often end up with someone suggesting everyone needs some sort of security responsibility. My suspicion is this will never work.

First some background to think about. In any organization there are certain responsibilities everyone has. Without using security as our specific example just yet, let's consider how a typical building functions. You have people who are tasked with keeping the electricity working, the plumbing, the heating and cooling. Some people keep the building clean, some take care of the elevators. Some work in the building to accomplish some other task. If the company that inhabits the building is a bank you can imagine the huge number of tasks that take place inside.

Now here's where I want our analogy to start. If I work in a building and I see a leaking faucet, I probably would report it. If I didn't, it's likely someone else would see it. It's quite possible that I'm one of the electricians and, while accessing some hard to reach place, I notice a leaking pipe. It's not my job to fix it; I could tell the plumbers, but they're not very nice to me, so who cares. The last time I told them about a leaking pipe they blamed me for breaking it, so I don't really have an incentive here. If I do nothing, it really won't affect me. If I tell someone, at best it doesn't affect me, but in reality I probably will get some level of blame or scrutiny.

This almost certainly makes sense to most of us. I wonder if there are organizations where reporting things like this comes with an incentive. A leaking water pipe could end up causing millions in damage before it's found. Nowhere I've ever worked ever really had an incentive to report things like this. If it's not your job, you don't really have to care, so nobody ever really cared.

Now let's think about phishing in a modern enterprise. You see everything from blaming the user who clicked the link, to laughing at them for being stupid, to even maybe firing someone for losing the company a ton of money. If a user clicks a phishing link, and suspects a problem, they have very little incentive to be proactive. It's not their job. I bet the number of clicked phish links we find out about is much much lower than the total number clicked.

I also hear security folks talking about educating the users on how all this works. Users should know how to spot phishing links! While this won't work for a variety of reasons, at the end of the day, it's not their job so why do we think they should know how to do this? Even more important, why do we think they should care?

The thing I keep wondering is: should this be the job of everyone or just the job of the security people? I think the quick reaction is "everyone" but my suspicion is it's not. Electricity is a great example. How many stories have you heard of office workers being electrocuted in the office? The number is really low because we've made electricity extremely safe. If we put this in the context of modern security we have a system where the office is covered in bare wires. Imagine wires hanging from the ceiling, some draped on the floor. The bathroom has sparking wires next to the sink. We lost three interns last week, those stupid interns! They should have known which wires weren't safe to accidentally touch. It's up to everyone in the office to know which wires are safe and which are dangerous!

This is of course madness, but it's modern day security. Instead of fixing the wires, we just imagine we can train everyone up on how to spot the dangerous ones.

FESCo Elections: Interview with Dominik Mierzejewski (rathann)

Posted by Fedora Community Blog on August 12, 2017 11:48 PM

Fedora Engineering Steering Council badge

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Tuesday, August 8th and closes promptly at 23:59:59 UTC on Monday, August 14th. Please read the responses from candidates and make your choices carefully. Feel free to ask questions to the candidates here (preferred) or elsewhere!

Interview with Dominik Mierzejewski (rathann)

  • Fedora Account: rathann
  • IRC: Rathann (found in #fedora-devel, #fedora-pl, #fedora-science, #ffmpeg-devel, #mplayerdev, #rpmfusion)
  • Fedora User Wiki Page


What is your background in engineering?

I have a Master’s degree in software engineering from the Faculty of Electronics and Information Technology, Warsaw University of Technology. Throughout my professional career, I worked as a system administrator of various Unix flavours for over 12 years. I also did a bit of programming (C, C++, Python, SQL). These days, I am a senior Linux engineer at Citi, where I’m responsible for standards development and internal OS platforms development, integration and certification. In my previous jobs, I worked as a sysadmin at the supercomputing centre of the University of Warsaw (ICM) as well as a programmer at the TOTEM experiment at CERN.

My open source contributions outside Fedora include patches, translations and detailed bug reports to major projects like the Linux kernel, FFmpeg or MPlayer, as well as a number of smaller ones like MDAnalysis. My experience is quite diverse and dates back to 2002.

Why do you want to be a member of FESCo?

I think a community as diverse as Fedora needs equally diverse representation. I was deeply honoured to be elected by my fellow Fedora contributors to serve on the Committee a year ago. This past year was my first year as a FESCo member. I learned a lot and gained even more appreciation for the work done by my colleagues and predecessors. I want to follow the path I took a year ago and continue to serve the Fedora community with my experience, knowledge and passion. Having served my first term, I see FESCo not as a governing body, but rather as enablers of others’ work and as a sounding board for ideas from the community. As a member of FESCo, I’ll be able to continue to listen to the many voices of the Fedora community and make sure they are heard.

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

The IT landscape is changing constantly and Fedora must adapt in order to stay relevant and increase its user and contributor base. Without a doubt, containers are the current game-changing technology. Everyone, including the largest companies, is going to use them in one form or another. The main challenge is how best to embrace this technology while following Fedora’s core values.

Since the beginning of this year, I’ve been involved in making containers part of my employer’s technology stack and I’m very happy to see how various groups in Fedora are doing the same and in very innovative ways. The Atomic Host and Flatpak are both great examples, even if they’re not perfect yet.

The Modularity initiative is another great idea that would ensure a stable OS base while turning some package groups into exchangeable building blocks which may follow a different release schedule.

With these new technologies being tested and implemented in Fedora, I want to ensure that they do not repeat past mistakes or alienate the existing users and contributors, but instead become best-of-breed and examples for other distributions to follow.

I’m confident that my knowledge and experience combined with that of the other FESCo members will help me provide the best advice to Fedora contributors moving forward.

What are three personal qualities that you feel would benefit FESCo if you are elected?

My diverse engineering experience tells me to always consider various solutions before settling on the final one. I’m a strong opponent of continuing to do things in a certain way just because “we’ve always done this before”, while being keenly aware that there are always reasons behind traditions. However, such reasons need to be reevaluated periodically because they often become irrelevant in time.

I’m an avid believer in diplomacy, though I don’t shy away from telling the truth straight. For the last six years, I’ve worked in an international, multi-cultural environment, which made me sensitive to the differences in attitudes, cultures and values. I’m a good listener and I don’t take offence easily. In conflict situations, I can usually get the parties to reach common ground without causing hostilities and I already have a number of successes in this field among the open-source communities.

I can perform miracles immediately. Wonders take a bit longer. Just kidding, of course, but I do seem to have a knack for solving difficult and unique problems that others often can't find an answer to.

What is your strongest point as a candidate? What is your weakest point?

I started contributing to open-source over 15 years ago and I don’t foresee stopping anytime soon. It’s my passion. I gained a lot of experience during this time and I’m happy to be able to use it for the benefit of the Fedora community.

I still have a lot to learn, but then again, don’t we all? In this day and age it’s impossible to know everything. Luckily, we have a lot of smart and knowledgeable folks around Fedora and I feel privileged to be able to learn from them.

Currently, how do you contribute to Fedora? How does that contribution benefit the community?

Over the last 10+ years, I’ve been maintaining a growing number of packages (over 80 today), mostly related to either science or multi-media, but also picking up various orphans that I saw as useful and worth saving. Among these, I count my contributions to RPMFusion, which I treat as an integral part of the Fedora ecosystem. My roles at Fedora include being a provenpackager, a sponsor, an ambassador, and a member of the Fedora Packaging Committee. Additionally, I’ve been serving on FESCo for the past year.

Whenever I can, I lend a helping hand as provenpackager, encourage people to join Fedora as users and contributors and spread my knowledge both internally at my company and while attending open-source conferences. I give talks and lead workshops, as well as talk about Fedora and open-source in general.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

The primary focus should remain on attracting and retaining developers, providing them with a “just working” development environment. Providing a secure and reasonably stable server platform should be another focus area. I think Fedora is doing well in both of these fields, but we can always do better. Modularity is a step in the right direction, as was the migration to Pagure, even if some rough edges remain. We need better tools and more automation to limit the amount of manual tasks while developing on Fedora (or Fedora itself) as well as when setting up Fedora servers.

If a past member of FESCo, identify a negative factor you noticed while serving in FESCo. How would you propose to improve on that for the next cycle?

I noticed that we often lacked quorum during our weekly meetings, which was in part my own fault. I think it might be difficult to find one meeting time that is suitable to all, given that some members live in different time zones, so adopting what FPC did might be a good option, i.e. alternating between two meeting times every week. Another factor is communication with Change owners. Often, we weren’t able to obtain answers to key questions about the proposed Changes. I will endeavour (and encourage my colleagues) to reach out to Change owners directly and ask them to answer our questions on the devel mailing list so that they may be easily discussed by Fedora community as well.

Do you believe the Modularity objective is important to Fedora’s success? Is there anything you wish to bring to the modularity efforts?

I believe so, for the most part. The devil is, as always, in the details. Switching from traditional releases to a curated set of modules will be challenging and will take time. I do welcome the automated rebuilds wholeheartedly, though. Koschei was a step in the right direction, but didn’t go far enough.

What is the air-speed velocity of an unladen swallow?

So, knowledge of Monty Python references is one of the key attributes of FESCo members, now? Having a sense of humour does help a lot, that’s for sure. Anyway, African, or European?

Closing words

Thank you for reading so far. I encourage every Fedora contributor to vote and wish every Nominee success in the Elections.

The post FESCo Elections: Interview with Dominik Mierzejewski (rathann) appeared first on Fedora Community Blog.

Confirming Fedora and GNOME presence in INFOSOFT 2017

Posted by Julita Inca Chiroque on August 12, 2017 05:20 PM

INFOSOFT is a tech event organized by AAII (the association of informatics and engineering alumni) of the Pontificia Universidad Católica del Perú (PUCP).

This year it will take place in September and, as you can see on the official website, it will be a free event with prior registration open until September 1st. It is open to everyone! Thanks to my trip and a little training at GUADEC, I will be able to talk about GTK, Python and WebKit, using the Web Browser example.

I will also talk about tools for writing papers, as well as free, high-quality academic programs such as LaTeX and Octave.

Here you can see the schedule of my workshops. I think it is important to show Linux, in my case with Fedora and GNOME, at tech conferences where, most of the time, other big companies have a strong presence.

Here is the list with some of the speakers. Thanks so much to the organizers for including GNOME and Fedora in INFOSOFT 2017! I am excited to be part of this edition of INFOSOFT in the 100th year of PUCP! <3

Filed under: FEDORA, GNOME Tagged: event, fedora, GNOME, Infosoft, INFOSOFT 2017:, Julita Inca, Julita Inca Chiroque, Lima, linux, Perú, PUCP, PUCP 100 years

Code coverage from Nightmare.js tests

Posted by Alexander Todorov on August 12, 2017 03:11 PM

In this article I'm going to walk you through the steps required to collect code coverage when running an end-to-end test suite against a React.js application.

The application under test looks like this

<!doctype html>
<html lang="en-us" class="layout-pf layout-pf-fixed">
    <!-- js dependencies skipped -->
    <div id="main"></div>
    <script src="./dist/main.js?0ca4cedf3884d3943762"></script>

It is served as an index.html file and a main.js file which intercepts all interactions from the user and sends requests to the backend API when needed.

There is an existing unit-test suite which loads the individual components and tests them in isolation. Apparently people do this!

There is also an end-to-end test suite which does the majority of the testing. It fires up a browser instance and interacts with the application. Everything runs inside Docker containers providing a full-blown production-like environment. They look like this

test('should switch to Edit Recipe page - recipe creation success', (done) => {
  const nightmare = new Nightmare();
  nightmare
    .wait(page => document.querySelector(page.dialogRootElement).style.display === 'block'
      , createRecipePage)
    .insert(createRecipePage.inputName, createRecipePage.varRecName)
    .insert(createRecipePage.inputDescription, createRecipePage.varRecDesc)
    .end() // remove this!
    .then((element) => {
      // here goes coverage collection helper
      done(); // remove this!
    });
}, timeout);

The browser interaction is handled by Nightmare.js (sort of like Selenium) and the test runner is Jest.

Code instrumentation

The first thing we need is to instrument the application code to provide coverage statistics. This is done via babel-plugin-istanbul. Because unit tests are executed a bit differently, we want to enable conditional instrumentation: for unit tests we use jest --coverage, which enables istanbul on the fly, and having the code already instrumented breaks this. So I have the following in webpack.config.js

if (process.argv.includes('--with-coverage')) {
  babelConfig.plugins.push('istanbul'); // project-specific: add istanbul to the Babel plugins list
}

and then build my application with node run build --with-coverage.

You can execute node run start --with-coverage, open the JavaScript console in your browser and inspect the window.__coverage__ variable. If this is defined then the application is instrumented correctly.

Fetching coverage information from within the tests

Remember that main.js from the beginning of this post? It lives inside index.html, which means everything gets downloaded to the client side and executed there. When running the end-to-end test suite, that client is the browser instance controlled via Nightmare. You have to pass window.__coverage__ from the browser scope back to nodejs scope via nightmare.evaluate()! I opted to directly save the coverage data on the file system and make it available to coverage reporting tools later!

My coverage collecting snippet looks like this

  .evaluate(() => window.__coverage__) // this executes in browser scope
  .end() // terminate the Electron (browser) process
  .then((cov) => {
    // this executes in Node scope
    // handle the data passed back to us from browser scope
    const strCoverage = JSON.stringify(cov);
    const hash = require('crypto').createHmac('sha256', '')
      .update(strCoverage).digest('hex');
    const fileName = `/tmp/coverage-${hash}.json`;
    require('fs').writeFileSync(fileName, strCoverage);

    done(); // the callback from the test
  })
  .catch(err => console.log(err));

Nightmare returns window.__coverage__ from browser scope back to nodejs scope and we save it under /tmp using a hash value of the coverage data as the file name.

Side note: I do have about 40% less coverage files than number of test cases. This means some test scenarios exercise the same code paths. Storing the individual coverage reports under a hashed file name makes this very easy to see!

Note that in my coverage handling code I also call .end(), which terminates the browser instance, and execute the done() callback which is passed as a parameter to the test above! This is important because it means we had to update the way tests were written. In particular, the Nightmare method sequence must not call .end() and done() anywhere except in the coverage handling code. The coverage helper must be the last code executed inside the body of the last .then() method. This is usually after all assertions (expectations) have been met!

Now this coverage helper needs to be part of every single test case, so I wanted it to be a one-line function, easy to copy&paste! All my attempts to move this code inside a module have been futile. I can get the module loaded, but it kept failing with "Unhandled promise rejection (rejection id: 1): cov_23rlop1885 is not defined".

In the end I resorted to this simple hack


Shout-out to Krasimir Tsonev who joined me on a two days pairing session to figure this stuff out. Too bad we couldn't quite figure it out. If you do please send me a pull request!

Reporting the results

All of these coverage-*.json files are directly consumable by nyc - the coverage reporting tool that comes with the Istanbul suite! I mounted .nyc_output/ directly under /tmp inside my Docker container so I could

nyc report
nyc report --reporter=lcov | codecov

We can also modify the unit-test command to jest --coverage --coverageReporters json --coverageDirectory .nyc_output so it produces a coverage-final.json file for nyc. Use this if you want to combine the coverage reports from both test suites.

Because I'm using Travis CI the two test suites are executed independently and there is no easy way to share information between them. Instead I've switched from Coveralls to CodeCov which is smart enough to merge coverage submissions coming from multiple jobs on the same git commits. You can compare the commit submitting only unit-test results with the one submitting coverage from both test suites.

All of the above steps are put into practice in PR #136 if you want to check them out!

Thanks for reading and happy testing!

All systems go

Posted by Fedora Infrastructure Status on August 12, 2017 12:24 AM
New status good: Everything seems to be working. for services: Fedora Infrastructure Cloud, COPR Build System

How to sync *.ssh keys and set permissions

Posted by Robbi Nespu on August 12, 2017 12:00 AM

Assalamualaikum wbt (peace be upon you) and greetings everyone! I’ve had the same ssh keys for years; I just rsync them to a new system when I get one. The problem is I always seem to end up with mucked-up permissions after moving them around, and never seem to remember how the permissions should be set. So let me put a note here as a future reference.

$ chmod 700 ~/.ssh
$ chmod 600 ~/.ssh/authorized_keys
$ chmod 644 ~/.ssh/config
$ chmod 600 ~/.ssh/id_dsa
$ chmod 644 ~/.ssh/id_dsa.pub
$ chmod 644 ~/.ssh/known_hosts

It should be okay and working!

Update: another great tip from mcepl is to use chmod 644 on the ~/.ssh/config and ~/.ssh/known_hosts files. In addition, athmane said that if your SELinux is in enforcing mode, just execute restorecon -Rv ~/.ssh from a terminal.
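To avoid redoing this by hand after every rsync, the commands above can be wrapped in a tiny script; the script name and the key-file globs are my own assumptions, based on the usual naming (id_* for private keys, *.pub for public keys):

```shell
#!/bin/sh
# fix-ssh-perms.sh -- reapply the permissions above to a .ssh directory.
# Assumes the usual naming: id_* are private keys, *.pub are public keys.
SSH_DIR="${1:-$HOME/.ssh}"

chmod 700 "$SSH_DIR"
# Private material first: keys and authorized_keys get 600
chmod 600 "$SSH_DIR"/id_* "$SSH_DIR"/authorized_keys 2>/dev/null
# Shareable files get 644 (this also corrects id_*.pub caught by the line above)
chmod 644 "$SSH_DIR"/*.pub "$SSH_DIR"/config "$SSH_DIR"/known_hosts 2>/dev/null
```

Run it after an rsync, passing a different directory as the first argument if needed; on SELinux systems, follow it with restorecon -Rv ~/.ssh as mentioned above.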

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on August 11, 2017 09:09 PM
New status scheduled: Scheduled maintenance in progress for services: COPR Build System, Fedora Infrastructure Cloud

What is Querki?

Posted by Suzanne Hillman (Outreachy) on August 11, 2017 06:05 PM

I’ve met with Mark, the owner and developer of Querki, a couple of times now. I shall now do my best to summarize my understanding thereof, with the purpose of identifying any obvious gaps in my knowledge and any clarifications that may be needed.

Querki is intended to support and encourage collaboration about and sharing of information within communities. The longer blurb on the Querki help page is:

It is a way for you to keep track of your information, the way you want it, not the way some distant programmer or corporate executive thinks you should. You should be able to share that information with exactly who you want, and it should be easy to work together on it. You should be able to use it from your computer or your smartphone, having it all there when you need it.


Everyone has a lot of information to keep track of, much of which they would also like to be able to share and discuss with others. Querki offers a customizable interface in which to manage, display, discuss, share, and explore small to medium data sets with small to medium-sized groups.

An existing example of this is from the Cook’s Guild in the local SCA chapter: they have recipes from specific time periods, and they figured out reconstructions of those recipes so that they can be made nowadays.


As you can see in the screenshot above, the recipes are categorized by type of food, period of food, and culture. Clicking on any of those — also known as tags — will bring you to a list of relevant recipes.

Many of Querki’s useful abilities are currently only possible using the Querki programming language (QL, said as ‘cool’) — such as finding a recipe for 14th century French pancakes in the above cook’s guild space. In the future, the plan is to make common tasks easy to do without the use of QL.

Basic Usage


To view a Querki space, one only needs a link to said space. Precisely what a space will look like varies depending on the desires of the owner of that space.

One of the topics that Mark and I are currently discussing is the idea of a basic default structure for a space. This would hopefully mean that those who don’t want to spend a lot of time structuring their space will still have usable spaces for people to access, discuss, and interact with the data. For those who want to affect the structure, that can be done when one has the time and inclination, smoothly allowing the transition from a basic Querki space structure to whatever modifications are desired.


A Querki space is meant to be a place for information to be stored and shared. To do this, however, one needs to tell Querki how you want your information to be structured. A model is how you tell Querki the structure you would like for your information.

For example, a model for a CD might include properties for the album title, author, song lists, genre, and publication date, as well as an auto-generated name of the model. Properties can be added when the model is first created, as well as after the fact.

Properties have types. Types include the tag type, the text type, the photo type, and the views type, among others. Properties can themselves have properties, as with tags. Tags both name a thing and may point to a related model. A tag may have a description, visible when the tag name is clicked, or may simply show a list of things with that tag.

Views are ways to display models. The current default view shows a list of things with that model associated with it. There is also the possibility of a ‘print view’, which will tell Querki how to print the model.

A model has instances: rather than the generic properties the model defines, an instance contains information specific to itself. In our CD model example, you might have the CD Zoostation by U2 as an instance of the CD model.

In addition to models and their associated pieces, Querki has pages. These are unstructured, and may be understood as a report from a database, or a wiki page.

What else?

Major goals of Querki

  • Allow for integration with existing social networks in order to help people connect with and invite people they know to work together on Querki.
  • Get Querki to the point of general availability (it’s currently in beta) and having people interested in paying for it. It’s not yet clear what this entails. More investigation required.

Different skill levels of users

Currently, there is an idea of an ‘easy’, ‘medium’, and ‘hard’ interface. These largely describe the degree to which one needs to be able to program to get the interface to do what one wants.

  • The “easy” interface is meant to allow people to use a published template (aka ‘an app’) from an existing Querki space to structure their own, and to use someone else’s space.
  • The “medium” interface allows more customization, but doesn’t present the more complicated/confusing programming options to the user.
  • The “hard” interface is meant for hard-core programmers, allowing the use of every tool available in Querki. This allows the building of templates (apps) and lots of power user commands through the underlying programming language QL (pronounced “COOL”).

It is not currently very clear to users what their options are for using QL.


Search is very basic right now, with searches being within a Querki space, on plain text strings. The goal in the future is to include the ability to search across spaces as well as objects, including tags.

There are currently icons for editing a page, refreshing a page (with your changes?), and publishing a page (for those spaces which do not want changes to happen immediately during editing). Are these reasonable things to have as icons? Do they need text also/instead?

Mobile is very important! Consuming a page should be possible even on small phone screens. Editing should also be possible on a mobile phone. Designing a page on a phone isn’t likely, due to the small screen, but is planned for tablets.

Data manipulation/query building covers the need to do some basic filtering and sorting of the information in an instance. We need to figure out the most common queries of this type, and see how many can be abstracted away from the underlying programming language for use by anyone/everyone.

Specific pages in need of (design?) work: front page, help, contextual help, model design page/advanced editor. The programming UI needs help (see the design page), and likely needs a simple IDE.


Querki spaces are mostly publicly visible, which should help when the time comes to improve the login page/start page.


Tag names cannot currently be the same as the name of the model associated with them, because tags point to a related model rather than containing it. This may need to be made invisible to users to avoid confusion.

Fedora August 2017 election change

Posted by Fedora Magazine on August 11, 2017 03:40 PM

UPDATE (2017-Aug-14): The Fedora Engineering Steering Committee (FESCo) voting also had to be rescheduled due to a candidate listing error. It will run during the same time as the FAMSCo voting period listed below. You can read more information here.

As seen earlier this week, the Fedora community holds elections in several groups. One group that elects seats this month is the Fedora Ambassador Steering Committee (FAMSCo).

The FAMSCo election started along with others this week. However, due to a technical error, the voting system prevented some eligible people from voting. Contributors have now fixed this issue. Fedora Program Manager Jan Kurik announced the issue and the fix on the Ambassadors’ mailing list.

What does this mean?

Of course the project wants to ensure the election is open and fair for all. Therefore, the FAMSCo election will restart next week. The new voting period begins on Tuesday, August 15 at 0000 UTC (click the link for local time). It ends on Monday, August 21 at 2359 UTC.

Votes from the original FAMSCo election do not count. Only the new votes are valid. So if you voted in the original election, you must vote again to be counted.

What about other elections?

As mentioned in the announcement, the Fedora Council and FESCo elections continue unaffected. The announcement here contains additional details.

Upgrading Fedora 25 to Fedora 26

Posted by Radka Janek on August 11, 2017 02:37 PM

Upgrading a Linux workstation to a new version? Everything is going to break!!! O_O
…oh don’t worry, it will be fine with Fedora =)

So here I am, upgrading my Fedora to 26 a month late again. I did not want to risk screwing it up just before my talks. Now that I’m finally free of anything stressful and have just regular work on my plate, I can do it!


It took a little over half an hour and nearly 2 GB of downloaded data to get it upgraded. No issues, just one surprise when I started WeeChat: there were two buffer lists and some random errors printed in the middle of the chat screen.

Surprise issues with WeeChat


Locale issue

From the random errors, I can see that it’s something to do with the locale.

locale -a | grep en quickly shows that I don’t have en_SE in the list. That was the locale of my choice, to get the right date and time format for Europe while keeping everything in English. I changed it to en_DK, which is what I’m looking for. Surprisingly, this has little to do with Denmark: some country code simply has to be there, and EU could not be used for Europe.
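A minimal sketch of the check and the switch (assuming the en_DK locale data is installed; localectl is part of systemd):

```shell
# List the installed English locales to see what is available:
locale -a | grep -i '^en' | head

# Switch the system-wide locale to English with European date/time formats:
sudo localectl set-locale LANG=en_DK.UTF-8
```

After the switch, log out and back in (or reboot) for the new locale to take effect everywhere.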

The first problem solved.

Weechat issue

WeeChat showing two almost identical lists of buffers? It turns out that a new way to display the list of buffers was introduced, so now I have both of them active: buffers and buflist.

A quick search for how to disable one of them, and I typed it in:

/set buflist.look.enabled false - Second problem solved.


Go and upgrade your system if you haven’t done so yet. =)

Fedora 26 and the Console Bunneh


How to upgrade from Fedora 25 Atomic Host to 26

Posted by Fedora Magazine on August 11, 2017 12:28 PM

In July the Atomic Working Group put out the first and second releases of Fedora 26 Atomic Host. This article shows you how to prepare an existing Fedora 25 Atomic Host system for Fedora 26 and do the upgrade.

If you really don’t want to upgrade to Fedora 26, see the later section: Fedora 25 Atomic Host Life Support.

Preparing for Upgrade

Before you perform an update to Fedora 26 Atomic Host, check the filesystem to verify that at least a few GiB of free space exists in the root filesystem. The update to Fedora 26 may retrieve more than 1GiB of new content (not shared with Fedora 25) and thus needs plenty of free space.

Luckily, upstream OSTree has implemented filesystem checks to ensure an upgrade stops before it fills up the filesystem.

The example here is a Vagrant box. First, check the free space available:

[vagrant@host ~]$ sudo df -kh /
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root  3.0G  1.4G  1.6G  47% /

Only 1.6G free means the root filesystem probably needs to be expanded to make sure there is plenty of space. Check the volume group and logical volumes by running the following commands:

[vagrant@host ~]$ sudo vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  atomicos   1   2   0 wz--n- 40.70g 22.60g
[vagrant@host ~]$ sudo lvs
  LV          VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- 15.09g             0.13   0.10                            
  root        atomicos -wi-ao----  2.93g

The volume group on the system in question has 22.60g free and the atomicos/root logical volume is 2.93g in size. Increase the size of the root logical volume by 3 GiB:

[vagrant@host ~]$ sudo lvresize --size=+3g --resizefs atomicos/root
  Size of logical volume atomicos/root changed from 2.93 GiB (750 extents) to 5.93 GiB (1518 extents).
  Logical volume atomicos/root successfully resized.
meta-data=/dev/mapper/atomicos-root isize=512    agcount=4, agsize=192000 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1 spinodes=0 rmapbt=0
         =                       reflink=0
data     =                       bsize=4096   blocks=768000, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal               bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 768000 to 1554432
[vagrant@host ~]$ sudo lvs
  LV          VG       Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  docker-pool atomicos twi-a-t--- 15.09g             0.13   0.10                            
  root        atomicos -wi-ao----  5.93g

The lvresize command above also resized the filesystem all in one shot. To confirm, check the filesystem usage:

[vagrant@host ~]$ sudo df -kh /
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/atomicos-root  6.0G  1.4G  4.6G  24% /


Now the system should be ready for upgrade. If you do this on a production system, you may need to prepare services for downtime.

If you use an orchestration platform, there are a few things to note. If you use Kubernetes, refer to the later section on Kubernetes: Upgrading Systems with Kubernetes. If you use OpenShift Origin (i.e. set up by the openshift-ansible installer), the upgrade should not need any preparation.

Currently the system is on Fedora 25 Atomic Host using the fedora-atomic/25/x86_64/docker-host ref.

[vagrant@host ~]$ rpm-ostree status
State: idle
● fedora-atomic:fedora-atomic/25/x86_64/docker-host
                Version: 25.154 (2017-07-04 01:38:10)
                 Commit: ce555fa89da934e6eef23764fb40e8333234b8b60b6f688222247c958e5ebd5b

To do the upgrade, the location of the Fedora 26 repository needs to be added as a new remote (like a git remote) so that ostree knows about it:

[vagrant@host ~]$ sudo ostree remote add --set=gpgkeypath=/etc/pki/rpm-gpg/RPM-GPG-KEY-fedora-26-primary fedora-atomic-26 https://kojipkgs.fedoraproject.org/atomic/26

This command adds a new remote named fedora-atomic-26 with the remote URL https://kojipkgs.fedoraproject.org/atomic/26. It also sets the gpgkeypath variable in the remote’s configuration, which tells OSTree to verify commit signatures when downloading from the remote. Signature verification is new in Fedora 26 Atomic Host.

Now that the system has the fedora-atomic-26 remote the upgrade can be performed:

[vagrant@host ~]$ sudo rpm-ostree rebase fedora-atomic-26:fedora/26/x86_64/atomic-host

Receiving metadata objects: 0/(estimating) -/s 0 bytes
Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
  Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

Receiving delta parts: 0/27 5.3 MB/s 26.7 MB/355.4 MB
Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
  Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

27 delta parts, 9 loose fetched; 347079 KiB transferred in 105 seconds                                                                                                                                            
Copying /etc changes: 22 modified, 0 removed, 58 added
Transaction complete; bootconfig swap: yes deployment count change: 1
  GeoIP 1.6.11-1.fc25 -> 1.6.11-1.fc26
  GeoIP-GeoLite-data 2017.04-1.fc25 -> 2017.06-1.fc26
  NetworkManager 1:1.4.4-5.fc25 -> 1:1.8.2-1.fc26
Run "systemctl reboot" to start a reboot
[vagrant@host ~]$ sudo reboot
Connection to closed by remote host.
Connection to closed.

After reboot the status looks like:

$ vagrant ssh
[vagrant@host ~]$ rpm-ostree status
State: idle
● fedora-atomic-26:fedora/26/x86_64/atomic-host
                Version: 26.91 (2017-07-23 03:12:08)
                 Commit: 0715ce81064c30d34ed52ef811a3ad5e5d6a34da980bf35b19312489b32d9b83
           GPGSignature: 1 signature
                         Signature made Sun 23 Jul 2017 03:13:09 AM UTC using RSA key ID 812A6B4B64DAB85D
                         Good signature from "Fedora 26 Primary <fedora-26-primary@fedoraproject.org>"

                Version: 25.154 (2017-07-04 01:38:10)
                 Commit: ce555fa89da934e6eef23764fb40e8333234b8b60b6f688222247c958e5ebd5b
[vagrant@host ~]$ cat /etc/fedora-release
Fedora release 26 (Twenty Six)

The system is now on Fedora 26 Atomic Host. If this were a production system now would be a good time to check services, most likely running in containers, to see if they still work. If a service didn’t come up as expected, you can use the rollback command: sudo rpm-ostree rollback.

To track updated commands for upgrading Atomic Host between releases, visit this wiki page.

Upgrading Systems with Kubernetes

Fedora 25 Atomic Host ships with Kubernetes v1.5.3, and Fedora 26 Atomic Host ships with Kubernetes v1.6.7. Before you upgrade systems participating in an existing Kubernetes cluster from 25 to 26, you must make a few configuration changes.

Node Servers

In Kubernetes 1.6, the --config argument is no longer valid. If systems exist that have the KUBELET_ARGS variable in /etc/kubernetes/kubelet that point to the manifests directory using the --config argument, you must change the argument name to --pod-manifest-path. Also in KUBELET_ARGS, add an additional argument: --cgroup-driver=systemd.

For example, if the /etc/kubernetes/kubelet file started with the following:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --config=/etc/kubernetes/manifests --cluster-dns= --cluster-domain=cluster.local"

Then change it to:

KUBELET_ARGS="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig --pod-manifest-path=/etc/kubernetes/manifests --cluster-dns= --cluster-domain=cluster.local --cgroup-driver=systemd"

Master Servers

Staying With etcd2

From Kubernetes 1.5 to 1.6 upstream shifted from using version 2 of the etcd API to version 3. The Kubernetes documentation instructs users to add two arguments to the KUBE_API_ARGS variable in the /etc/kubernetes/apiserver file:

--storage-backend=etcd2 --storage-media-type=application/json

This ensures that Kubernetes continues to find any pods, services or other objects stored in etcd once the upgrade has been completed.

Moving To etcd3

You can migrate etcd data to the v3 API later. First, stop the etcd and kube-apiserver services. Then, assuming the data is stored in /var/lib/etcd, run the following command to migrate to etcd3:

# ETCDCTL_API=3 etcdctl --endpoints https://YOUR-ETCD-IP:2379 migrate --data-dir=/var/lib/etcd

After the data migration, remove the --storage-backend=etcd2 and --storage-media-type=application/json arguments from the /etc/kubernetes/apiserver file and then restart etcd and kube-apiserver services.

Fedora 25 Atomic Host Life Support

The Atomic WG decided to keep updating the fedora-atomic/25/x86_64/docker-host ref, with a new update created each day when Bodhi runs within Fedora. However, it is recommended you upgrade systems to Fedora 26, because future testing and development focus on Fedora 26 Atomic Host. Fedora 25 OSTrees won’t be explicitly tested.


The transition to Fedora 26 Atomic Host should be a smooth process. If you have issues or want to be involved in the future direction of Atomic Host, please join us in IRC (#atomic on freenode) or on the atomic-devel mailing list.

Attended GUADEC 2017

Posted by Jiri Eischmann on August 11, 2017 12:27 PM

Although I was still recovering from bronchitis and the English weather was not helping much, I really enjoyed this year’s GUADEC. The last three GUADECs suffered a bit from lower attendance, so it was great to see the conference bouncing back, with attendance getting close to 300 again.

What I value the most about GUADEC are the hallway conversations. A concrete outcome is that we’re currently working with Endless on getting LibreOffice onto Flathub. In the process, we’d like to improve the LibreOffice flatpak so that it becomes a full replacement for the traditional packaged version: having Java available, having spell-checking dictionaries available, and so on.

I also spent quite a lot of time with the Engagement team, because they’re trying to build local GNOME communities and improve their budgeting. This is something I spent several years working on in the Fedora Project, where we have built robust systems for it. The GNOME community can draw inspiration from them, or even reuse them. That’s why I’d like to be at least somewhat active in the Engagement team, to help bring those things to life.

Blocking USB devices while the screen is locked

Posted by Daniel Kopeček on August 11, 2017 08:00 AM

Since the 0.7.0 release, it is possible to influence how an already running usbguard-daemon instance handles newly inserted USB devices. The behaviour is defined by the value of the InsertedDevicePolicy runtime parameter and the default choice is to apply the policy rules to figure out whether to authorize the device or not.

The parameter can be read and modified via the usbguard CLI:

$ sudo usbguard get-parameter InsertedDevicePolicy

To change the policy to block, use:

$ sudo usbguard set-parameter InsertedDevicePolicy block

Now try to insert a USB device and it won’t be authorized even if there’s a rule in your policy that says otherwise. Devices connected before the parameter value change aren’t affected and remain in the same state.

Please note that for the examples below to work, you need to allow your desktop user to modify the USBGuard runtime parameters. This can be done either with USBGuard IPC access control or by giving sudo permissions to run usbguard set-parameter without authentication.

The following command will allow user joe to read and modify the runtime parameters via USBGuard IPC:

$ sudo usbguard add-user joe --parameters ALL

Note that the command sets the ACL exactly to what is specified on the command line, rather than appending to any existing ACL settings for the user.

Blocking new USB device while the screen is locked

Method #1: Screen locker wrapper script

If you are using a custom screen locker like i3lock, you’ll need to create a wrapper script that takes care of setting the InsertedDevicePolicy parameter, something like this:



#!/bin/sh
# "apply-policy" is the usbguard default for InsertedDevicePolicy.
POLICY_LOCKED="block"
POLICY_UNLOCKED="apply-policy"

# Restore the unlocked policy when the locker exits, even on error.
revert() {
  usbguard set-parameter InsertedDevicePolicy $POLICY_UNLOCKED
}
trap revert EXIT

usbguard set-parameter InsertedDevicePolicy $POLICY_LOCKED
i3lock -n

Now adjust your screen locker shortcuts and settings to point to this wrapper script instead of the original locker command, and that’s it.

Method #2: D-Bus screen (un)lock signals

If you are using a desktop environment which has built-in screen locking support, then it probably signals the “screen (un)locked” state via D-Bus. In that case you need to create a script to watch for these signals and set the InsertedDevicePolicy parameter appropriately. The script should be running in your session (refer to your desktop environment’s documentation on how to automatically start the script after you log in).

Example script:



#!/bin/sh
POLICY_LOCKED="block"
POLICY_UNLOCKED="apply-policy"
# The interface emitting the (un)lock signal; adjust for your desktop environment.
DBUS_INTERFACE="org.freedesktop.ScreenSaver"

dbus-monitor --session "type='signal',interface='"$DBUS_INTERFACE"'" |
  while read x; do
    case "$x" in
      *"boolean true"*) usbguard set-parameter InsertedDevicePolicy $POLICY_LOCKED ;;
      *"boolean false"*) usbguard set-parameter InsertedDevicePolicy $POLICY_UNLOCKED ;;
    esac
  done

Net Neutrality and the Situation in Greece

Posted by Nikos Roussos on August 11, 2017 02:57 AM

Net Neutrality: the principle that all internet service providers (ISPs), and the government authorities that regulate the internet, must treat and manage all data equally, without discriminating or charging differently depending on the user, the content, the website, the application, or the service.

There are many reasons why violating this principle is not a very good idea. The relevant Wikipedia article is exhaustive, and it would be pointless to try to reproduce it here. For the purposes of this post, let's stay for now with the unfair-competition angle. If a provider favors specific services, either by prioritizing the traffic coming from/to them or by charging less for it, what chance and reach can competing and alternative services have?

Internationally, there have been several attempts to bypass and violate this principle, mainly, as one can easily guess, by the "big" market players, since they are the ones who potentially benefit the most. If providers prioritize traffic related to Facebook, what chances of success does an alternative platform have? This example is real: Facebook Zero provides free access to Facebook (including in Greece). Unfortunately, the Wikimedia Foundation has made the same mistake with Wikipedia Zero. Facebook even took it a step further, creating the Free Basics program, which provides free access to a number of sites. Mitchell Baker, chairwoman of Mozilla, explains quite well why so-called "Zero Rating" does not help spread the internet in developing countries.

In the US, the issue took on major proportions two years ago (and continues to occupy the competent authorities), as providers push for lifting net neutrality so they can charge services for "faster" lines. The providers base this position on a spirit of economic liberalism and on the freedom to adopt whatever business model they want for the services they offer, even if that leads to a slower internet for the majority of users, or to the creation (or entrenchment) of monopolies in various areas of internet services. The issue in the US is so important, and at such a critical stage, that even John Oliver has devoted considerable time to it.

Below is a chart of Netflix speeds through the provider Comcast during the negotiations between the two companies over providing higher speeds to Netflix. It is fairly easy to see that the "dip" in speed coincided with the negotiations, as a means of pressure on Netflix.


This is also why rules exist to prevent content providers from also being network providers: so they cannot favor their own content services over the competition. This rule is not observed in Greece either. All major internet providers also offer internet TV services, and even in the non-internet television landscape, the network provider (Digea) is practically controlled by specific channels (content providers).

The occasion for this post was a recent Vodafone advertisement I happened to see for the PASS program, which offers reduced charges for some very well-known services (indicatively: Facebook, Instagram, Twitter, Snapchat, LinkedIn, Pinterest). This is not the first time the principle of net neutrality has been violated in Greece; Diomidis Spinellis has written a relevant article. Zero Facebook had been adopted by all three mobile providers, and Cosmote seems to be the only one still keeping it in certain plans. Cosmote also gave Spotify preferential treatment for about two years (the partnership has since ended). But the Vodafone example is, I think, the loudest and most blatant violation of net neutrality we have had so far.

IANAL: I'm not a lawyer.

There are legal experts with more knowledge and greater specialization than me on the legal side of this story, so I won't try to claim whether, on top of everything else, there is also a question of legality with these programs. I'll provide a few references and hope that there are lawyers out there who can dig into the matter in more depth.

Quoting from the EETT article:

the equal management by providers of all internet traffic, without discrimination or charges depending on the user, the content, the website, the platform, the application, the type of equipment, or the means of communication.

To my eyes, the excerpt above seems to contradict these practices of the Greek internet providers. Will EETT intervene in this particular case?

Yesterday I posted the tweet below, but so far there has been no comment from Vodafone.


Comments and remarks on Twitter, Diaspora, and Facebook.

Fedora Classroom: Command Line 101: report

Posted by Ankur Sinha "FranciscoD" on August 10, 2017 10:03 PM

We've gotten the Fedora classroom sessions going again. After two really good ones, I taught the third one today: a beginners' session on the command line. Unlike the previous ones, which used video platforms, I decided that IRC was best suited to this session, even more so because I wanted it to be hands-on. It went off pretty well. Here are a few notes. Links to the logs are at the bottom of this post.

Some metrics

  • Length: 2 hours (was planned to be an hour, but we quickly realised that it wouldn't be enough!)
  • Attendees: 29 - a few of us had FAS usernames too (so we shared cookies!)
  • About 800 sentences were spoken (I spoke about half of these, of course)


I quite enjoyed it, but then I enjoy tinkering with the command line anyway. A few folks stuck around for the full two hours, which does indicate that they found the session somewhat useful. I'd put up a gist here with a tentative agenda. We didn't manage to get more than halfway through it, though. We:

  • did a quick introduction to what a shell is
  • learned how to get help using local information - using the man pages
  • quickly saw the difference between absolute and relative paths, and also learned about .. and .
  • went on to look at some more basic commands/built-ins and their switches/flags/options: ls, apropos, clear, cd, pwd, which, alias, rm, tree, mkdir, wget, rmdir, rm, fpaste, wc, head, tail, more, less, cat, tac, grep, sort, uniq
  • used these commands to download a copy of "The tragedy of Julius Caesar" from Project Gutenberg, and then extracted some information from it. For example, we obtained how many times Caesar was mentioned in the text. For a more advanced task we also obtained how many times Caesar, Brutus, Cassius, and Casca were each mentioned using a single set of commands. This required the use of grep, sort, uniq, wc in different combinations using input-output redirection (pipes in this case). At no point did we use a text editor, and we stuck to using local man pages.
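The counting exercise from the session can be sketched as below. The sample lines here stand in for the full play (which we fetched from Project Gutenberg with wget during the session), so the numbers are illustrative:

```shell
# A few sample lines stand in for the full text of the play:
cat > caesar.txt <<'EOF'
Caesar shall forth: the things that threaten'd me
Brutus, I do observe you now of late
Cassius, be not deceived
Caesar is turn'd to hear
EOF

# How many lines mention Caesar? (Prints 2 for this sample.)
grep -c 'Caesar' caesar.txt

# Count mentions of several characters in one go, using pipes:
grep -oE 'Caesar|Brutus|Cassius|Casca' caesar.txt | sort | uniq -c
```

The last pipeline prints each character's name prefixed by its number of mentions; it is the same grep/sort/uniq combination we built up step by step during the session.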


I hope that this rather quick session gave the participants some idea of how the shell can be used for lots of tasks. I also hoped to show them that there's a lot of information available on the system itself that a user can refer to.

I learned a few things myself. I learned that an hour is too short for a proper online session, for example. My supposition that demonstrating commands using tasks would make the session more appealing seems to have been correct too. Only, maybe next time I'll pick a more contemporary text?

For the next session, I'll try and cover slightly more advanced topics, such as tests, loops, maybe even a bit of awk. We shall see.

Feedback is always welcome

If you had attended the session, or have gone through the logs and have some feedback, please get in touch. You can use the Fedora classroom channels:

You can even comment on this blog post, and of course, you can give me feedback privately. I'm also looking to make a list of tasks that I can use in future sessions - tasks that would be useful, fun, and that would also require some command line tricks - such that they would demonstrate the power of the command line. So, if you have your pet command line tricks/aliases, please do get in touch.

I'm FranciscoD on quite a few Fedora IRC channels, and I can be reached via e-mail on my Fedora address at ankursinha AT fedoraproject DOT org. All suggestions, comments, and criticism are most welcome.

More instructors needed!

The classroom sessions are going rather well, but we still need more help. We need more people helping with logistics, and of course, if we are to continue these sessions every week, we need more instructors! If there's anything at all you think is worth a classroom session, please get in touch with the team on the Fedora classroom mailing list. A log of all past sessions - whether on IRC or on a video platform are maintained on the wiki page here for everyone to peruse at their convenience.