Fedora People

Choosing YAML for a Configuration File

Posted by David Cantrell on June 06, 2020 05:14 PM

Recently I have been working to clean up the configuration file syntax and parsing in rpminspect. Several months back there were suggestions on fedora-devel to improve things with the configuration files. The ideas were good improvements, so I added them to my to do list and am now at a point where I can work on making those changes. The main ideas:

  • Move the configuration files out of /etc and into /usr/share. Have these be the defaults.
  • Let local overrides exist in /etc.
  • Allow for multiple rpminspect-data vendor packages to be concurrently installed.

In addition to the above, I was planning on implementing support for a local configuration file to be sourced last. Sort of like having pylintrc in a Python project to drive pylint. I wanted the ability to have rpminspect read a final configuration file for local package configuration. My thinking is that package maintainers could put a per-package rpminspect configuration file in the dist-git repo.

Picking A Parser

Before doing this rearrangement, I was looking at the syntax of the configuration file. It has evolved over the past year as new features have been added. The configuration file follows an INI-style layout, the familiar ‘key = value’ syntax. This is a long-established practice for configuration files of many kinds, and INI syntax is well understood and easy to follow. I have been using libiniparser in rpminspect to handle reading the file.

This works, but it has presented a challenge for two types of settings I need to represent in the configuration file. The first is a simple list. INI syntax does not really allow for this in a well-defined way. I get around the limitation by having my lists be space-delimited strings which I then tokenize in the source code. Not ideal, because the obvious limitation is that I have now made it difficult to have a list member with a space in it. The second data type is a hash table. I want to capture user-defined key=value settings for a particular category. I get around this by making the setting be the section name (e.g., ‘[products]’) and, within that section, reading every key and value and adding them to my hash table. It’s not entirely clear in the configuration file and the syntax could lead to confusion. So cleaning all of this up has been on the to do list.
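To make the two workarounds concrete, here is a hypothetical fragment in the old INI style (the key names are invented for illustration, not the real rpminspect settings):

```ini
[settings]
; a "list" is just a space-delimited string, tokenized later in C;
; a member containing a space cannot be expressed
badwords = foo bar baz

; a user-defined hash table hides behind a section heading:
; every key under [products] becomes an entry in the table
[products]
fedora-31 = ^.*\.fc31$
fedora-32 = ^.*\.fc32$
```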

What to do? The program has existed in the wild for over a year so the existing format is now established. I need to either honor the existing format or make a flag-day style change and migrate everything. The latter is possible since the configuration data for rpminspect is nearly exclusive to the vendor data packages. If I had already established the per-package configuration file functionality, this would be a harder change.

Looking at options, here’s how I broke down things:

  • Continue using the INI style format, possibly switching libraries. libconfini offers a bit more on top of the INI format, but still does not get me all the way there. There are other libraries and I could extend one of the existing ones. I would want any extensions to go upstream and that may or may not happen.
  • Investigate new formats and switch everything over to something else.
  • Define a new format and implement a lexer and parser in rpminspect.

I spent a lot of time looking at different INI libraries available. They all more or less provide the same type of functionality, which left me with limited or no list or hash table options. I then looked at defining a new configuration file format based on what I was already doing and implementing a parser in yacc. While this is possible, I was not really interested in going down this path because I didn’t want to run into situations where the config file format was limiting a feature for some reason and then get stuck. Basically, I don’t want to be in the business of defining a config file format. Lastly, I moved on to looking at different existing options for configuration file formats. Here’s what I looked at:

  • JSON - Already in use for the license database (inherited from another project). Already using the json-c library. The syntax is frustrating, which would make it a pain for a configuration file. A brief survey of applications shows that JSON is not really used in this capacity.
  • XML - I have used XML for configuration data and libxml provides a reasonable API for this. But it suffers from the same problem JSON has in that it’s a pain to edit and maintain by hand.
  • YAML - My experience with YAML is limited and what YAML files I have seen, I do not like. The files I’ve seen tend to be very brief and cryptic and offer no real clue as to what is a setting and what is a value. Short files that might look like this:
  - process: yes
    when: now
    how: you_know

  - yes: process
    now: when
    you_know: how

What is the significance of the hyphen? What are possible options? What am I even looking at? This file is not really helpful at discovering what you can do with a program, which is one thing I expect out of configuration files.

  • TOML - This looked exactly like what I was wanting. Looks like INI style but adds more types and lists and things like that. The downside here is the lack of available libraries. I found libtoml on github which may or may not completely implement the specification and it’s made no releases. I consider this specification evolving and may look at it in a few months.

There are other things to consider for the configuration file format. Who are the target users? In the case of rpminspect it would be developers and package maintainers. The program runs in a CI capacity in Fedora. Of the formats above, YAML has been established for a number of scenarios, many driven by the use of Ansible. What about my concerns with YAML? I decided to look into things a bit more.

I found that YAML does allow comments, so that’s a huge win. And indentation can be more than the nearly unreadable 2 spaces that I see commonly used. Sections are denoted by indentation and hyphens are used for list members. Key=value pairs are of the form ‘key: value’. I rewrote rpminspect.conf as a YAML file and looked at the result. I kept comments and used 4 space indentation. The result was very readable to me so I decided to use this format. The libyaml library provides an entirely usable API for working with YAML data streams.
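As a sketch, hypothetical settings expressed in YAML with comments and 4-space indentation (the keys are invented for illustration, not the actual rpminspect.yaml contents):

```yaml
# lists are explicit, and members may contain spaces
badwords:
    - foo
    - bar baz

# user-defined key/value pairs map naturally to a hash table
products:
    fedora-31: ^.*\.fc31$
    fedora-32: ^.*\.fc32$
```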

Making The Change

Because of the parser change, I decided to rename the configuration file to rpminspect.yaml. This both reflects the specification used and keeps it distinct from the existing configuration file format. I bumped the major version of rpminspect to ‘1’ as well as on the data packages to account for this change. The profile configuration files will also end with ‘.yaml’.

I rearranged the rpminspect.yaml file as well and broke up what used to be the [settings] section. I give each inspection its own block in the configuration file for more clarity. Some sections do not tie to a specific inspection but are for the entire run of the program. I may move those into a larger section of their own, but I am not sure yet.

The file parsing happens in lib/init.c so that was where the bulk of the changes went. And moving to YAML meant a lot of this code could be deleted. That is always satisfying even though it’s code that I wrote in the first place.

The project also drops a dependency on the libiniparser library, so I updated the documentation and the meson.build files. With all of these changes in place, I built the program and ran the test suite. I fixed up various things until the test suite passed and pushed the commits. The first big part of this change was now complete.

These changes have been pushed to the master branch and current Copr builds now use YAML configuration files for both the main configuration file and profiles. The next steps are adjusting things to allow for concurrent data package installation and honoring an rpminspect.yaml file in the current directory.

I like the new configuration file layout. libyaml is easy to use and I like having fewer runtime dependencies. I do feel there will come a time when we talk about using YAML for these types of files the way we talk about XML for config files now. There is not a lot I can do about that, though, so we will stick with YAML for now.

Fedora program update: 2020-23

Posted by Fedora Community Blog on June 05, 2020 09:16 PM

Here’s your report of what has happened in Fedora this week. Elections voting is open through 11 June. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update Announcements Orphaned packages seeking maintainers […]

The post Fedora program update: 2020-23 appeared first on Fedora Community Blog.

Looking forward to a recyclable, open sourced, wearable, contact tracing device

Posted by Harish Pillay 9v1hp on June 05, 2020 04:00 PM

So, the other shoe is about to drop. Gov is planning on providing everyone a wearable contact tracing device to counter the limitations of apps that run in mobile phones (from the Apple ecosystem) – like the TraceTogether (the downstream of OpenTrace).

It also seems that a “newer” version of TT will now ask for NRIC (National Registration Identity Card) in addition to mobile phone number at registration.

Not a good thing, really, NOT A GOOD THING.

Why? One can generate any NRIC number.

This report claims that the hardware device will be like TT/OT and only do contact tracing. What the device would do is implement the BlueTrace protocol. This device can take on the form factor of a watch, a pen, or a key fob. It should be easy to design and build. And once built, put the designs, schematics etc. under an Open Hardware License and publish them on Thingiverse (or anywhere else). There are plenty of examples of wearables there. No need to reinvent the wheel.

And once the designs are published, say under the CERN Open Hardware Licence or one of the licenses endorsed by the Open Source Hardware Association, we need to spread the ideas far and wide and get greater (re)use.

Why does this matter? First, we need to build trust in these devices. This is the same effort as the open sourcing of TraceTogether in April, which helped significantly to raise the level of trust in the application. There are challenges in adoption of the app because of battery and application run issues in the Apple mobile phone ecosystem. We need a usage population of about 65% of the local population in Singapore for it to be useful. TT is apparently at about 1.5 million downloads, but there is no way to know if it is actually running.

If there is a separate device that runs the same BlueTrace protocol, it will interoperate with devices running TT (or OT), so we have a good chance of highly reliable contact tracing.


I can already see that perhaps a year or two from now, there will be millions of these devices thrown away, adding to the enormous waste – batteries etc. The device has to be designed with recycling as a default. This is 2020 and we must, by default, build devices that can be recycled trivially.

We don’t have to wait for G to do the design, build, and distribution. The local open source community can step up and do this. We can design something that can then be sliced and diced as needed into different form factors.

If you are keen to work on this, please leave a comment or send me email at h dot pillay at ieee dot org. I will be calling for an online meeting of interested developers, designers, engineers soon.

Contribute at the Fedora CoreOS Test Day

Posted by Fedora Magazine on June 05, 2020 04:00 PM

The Fedora CoreOS team released the first Fedora CoreOS testing release based on Fedora 32. They expect that this release will promote to the stable channel in two weeks, on the usual schedule. As a result, the Fedora CoreOS and QA teams have organized a test day on Monday, June 08, 2020. Refer to the wiki page for links to the test cases and materials you’ll need to participate. Read below for details.

How does a test day work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

Happy testing, and we hope to see you on test day.

Contribute at the Fedora CoreOS Test Day

Posted by Fedora Community Blog on June 05, 2020 07:00 AM

The Fedora CoreOS team has released the first Fedora CoreOS testing release based on Fedora 32. They expect that this release will promote to the stable channel in two weeks, on the usual schedule. As a result, the Fedora CoreOS and QA teams have organized a test day on Monday, June 08, 2020. Refer to […]

The post Contribute at the Fedora CoreOS Test Day appeared first on Fedora Community Blog.

Working around Linux capabilities problems for syslog-ng

Posted by Peter Czanik on June 04, 2020 11:57 AM

No, SELinux is not the cause of all permission troubles on Linux. For example, syslog-ng makes use of the capabilities system on Linux to drop as many privileges as possible, as early as possible. But it might cause problems in some corner cases, as even when running as root, syslog-ng cannot read files owned by a different user. Learn from this blog how you can figure out if you have a SELinux or capabilities problem and how to fix it if you do.


Yes, SELinux is the primary suspect when something does not work as expected on your RHEL or CentOS system. It can cause any kind of mysterious file permission problems and can even prevent network connections when the configuration contains an unusual port number. To verify if a problem is caused by SELinux, check the audit logs on your system, normally in /var/log/audit/audit.log. If it is SELinux preventing syslog-ng from running as expected, you will see one or more related messages in that file. Check my earlier blog at https://www.syslog-ng.com/community/b/blog/posts/using-syslog-ng-with-selinux-in-enforcing-mode for more information about how to resolve these problems.

No, turning off SELinux does not solve your problems; it merely treats the symptoms. Unless you are just quickly testing something, you should take the time to create the additional rules for SELinux. Even though SELinux comes from the NSA, it actually enhances the security of your systems. Just search for “vulnerabilities stopped by SELinux” on Google or your favorite search engine.


Spotting a Linux capabilities problem is not as easy as for SELinux, since there are no audit logs mentioning them. Right now, I am only aware of a file permission problem. Even when syslog-ng is running as root, it cannot read files owned by a different user.

This problem was reported to me as something related to TLS. So – using the TLS guide I created many years ago – I configured and tested an encrypted connection between syslog-ng instances. Then I started to play with file ownership and permissions, and it turned out that the problem is not limited to certificates, but is more generic. At that moment, Linux capabilities came to mind, and a minute later I had a working solution for the problem.

Why might you have files with different owners? A typical source of the problem is when you compile syslog-ng as a regular user and then try to run it as root to gain additional privileges, like opening network ports under 1024. Another case is when syslog-ng is running as root, but certificates or configuration are managed by scripts running as a regular user. In either case, Linux capabilities support enabled in syslog-ng prevents reading these files.

Most syslog-ng packages on Linux have capabilities support enabled. You can check it from the command line by running syslog-ng with the -V option:

[root@centos7 ~]# syslog-ng -V
syslog-ng 3 (3.27.1)
Config version: 3.22
Installer-Version: 3.27.1
Compile-Date: May  4 2020 08:17:51
Module-Directory: /usr/lib64/syslog-ng
Module-Path: /usr/lib64/syslog-ng
Include-Path: /usr/share/syslog-ng/include
Available-Modules: add-contextual-data,affile,afprog,afsocket,afstomp,afuser,appmodel,basicfuncs,cef,confgen,cryptofuncs,csvparser,dbparser,disk-buffer,examples,graphite,hook-commands,json-plugin,kvformat,linux-kmsg-format,map-value-pairs,pseudofile,sdjournal,stardate,syslogformat,system-source,tags-parser,tfgetent,timestamp,xml
Enable-Debug: off
Enable-GProf: off
Enable-Memtrace: off
Enable-IPv6: on
Enable-Spoof-Source: on
Enable-TCP-Wrapper: on
Enable-Linux-Caps: on
Enable-Systemd: on

The “Enable-Linux-Caps: on” line shows that capabilities support is enabled. This way, syslog-ng can drop most of its privileges on start.

Workaround / fix

Just like with SELinux, there are multiple ways of resolving this problem. One way is to disable capabilities support in syslog-ng completely. You can do this with the --no-caps command line option of syslog-ng. Even though there have been no known security problems in syslog-ng for a long time, I do not recommend doing this. Just like SELinux, capabilities can protect against unknown problems.

If you take a look at the syslog-ng manual page, you can see a nice, long list of capabilities. You can modify it to get file reading working by adding a single “e” (effective) to the cap_fowner parameter. The full command line would look like this:

syslog-ng --caps cap_sys_admin,cap_chown,cap_dac_override,cap_net_bind_service,cap_fowner=eip

Checking something quickly from the command line using --no-caps is definitely easier. For production environments, I would rather recommend using the longer form, as it enables just a single additional privilege instead of everything.

Depending on your Linux distribution, the configuration of services might be different. In CentOS 7, you can pass command line parameters to syslog-ng using the /etc/sysconfig/syslog-ng file and adding the following line to it:

SYSLOGNG_OPTS="--caps cap_sys_admin,cap_chown,cap_dac_override,cap_net_bind_service,cap_fowner=eip"

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

nbdkit C script plugins

Posted by Richard W.M. Jones on June 04, 2020 11:41 AM

Should you want to, you can now write your nbdkit plugins like scripts, chmod +x plugin.c and run them …

#if 0
exec nbdkit cc "$0" "$@"
#endif
#include <stdint.h>
#include <string.h>
#include <nbdkit-plugin.h>

char data[100*1024*1024];

static void *
my_open (int readonly)
{
  return NBDKIT_HANDLE_NOT_NEEDED;
}

static int64_t
my_get_size (void *handle)
{
  return (int64_t) sizeof (data);
}

static int
my_pread (void *handle, void *buf,
          uint32_t count, uint64_t offset,
          uint32_t flags)
{
  memcpy (buf, data+offset, count);
  return 0;
}

static int
my_pwrite (void *handle, const void *buf,
           uint32_t count, uint64_t offset,
           uint32_t flags)
{
  memcpy (data+offset, buf, count);
  return 0;
}

static struct nbdkit_plugin plugin = {
  .name              = "myplugin",
  .open              = my_open,
  .get_size          = my_get_size,
  .pread             = my_pread,
  .pwrite            = my_pwrite,
};
NBDKIT_REGISTER_PLUGIN(plugin)

$ chmod +x plugin.c
$ ./plugin.c

How to generate an EPUB file on Fedora

Posted by Fedora Magazine on June 04, 2020 08:00 AM

It is becoming more popular to read content on smartphones. Every phone comes with its own ebook reader. Believe it or not, it is very easy to create your own ebook files on Fedora.

This article shows two different methods to create an EPUB. The EPUB format is one of the most popular formats and is supported by many open-source applications.

Most people will ask “Why bother creating an EPUB file when PDFs are so easy to create?” The answer is: “Have you ever tried reading a sheet of paper when you can only see a small section at a time?” In order to read a PDF you have to keep zooming and moving around the document or scale it down to a small size to fit the screen. An EPUB file, on the other hand, is designed to fit many different screen types.

Method 1: ghostwriter and pandoc

This first method creates a quick ebook file. It uses a Markdown editor named ghostwriter and a command-line document conversion tool named pandoc.

You can either search for them and install them from the Software Center or you can install them from the terminal. If you are going to use the terminal to install them, run this command: sudo dnf install pandoc ghostwriter.

For those who are not aware of what Markdown is, here is a quick explanation. It is a simple markup language created a little over 15 years ago. It uses simple syntax to format plain text. Markdown files can then be converted to a whole slew of other document formats.

<figure class="aligncenter size-large"><figcaption>ghostwriter</figcaption></figure>

Now for the tools. ghostwriter is a cross-platform Markdown editor that is easy to use and does not get in the way. pandoc is a very handy document converting tool that can handle hundreds of different formats.

To create your ebook, open ghostwriter, and start writing your document. If you have used Markdown before, you may be used to making the title of your document Heading 1 by putting a pound sign in front of it. Like this: # My Man Jeeves. However, pandoc will not recognize that as the title and put a big UNTITLED at the top of your ebook. Instead put a % in front of your title. For example, % My Man Jeeves. Sections or chapters should be formatted as Heading 2, i.e. ## Leave It to Jeeves. If you have subsections, use Heading 3 (###).
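Putting those rules together, a minimal skeleton for pandoc (using the article's own example titles) might look like this:

```markdown
% My Man Jeeves

## Leave It to Jeeves

Chapter text goes here...

### A subsection

More text...
```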


Once your document is complete, click File -> Export (or press Ctrl + E). In the dialog box, select between several options for the Markdown converter. If this is the first time you have used ghostwriter, the Sundown converter will be picked by default. From the dialog box, select pandoc. Next click Export. Your EPUB file is now created.

<figure class="aligncenter size-large"><figcaption>ghostwriter export dialog box</figcaption></figure>

Note: If you get an error saying that there was an issue with pandoc, turn off Smart Typography and try again.

Method 2: calibre

If you want a more polished ebook, this is the method that you are looking for. It takes a few more steps, but it’s worth it.


First, install an application named calibre. calibre is not just an ebook reader, it is an ebook management system. You can either install it from the Software Center or from the terminal via sudo dnf install calibre.

In this method, you can either write your document in LibreOffice, ghostwriter, or another editor of your choice. Make sure that the title of the book is formatted as Heading 1, chapters as Heading 2, and sub-sections as Heading 3.

Next, export your document as an HTML file.

Now add the file to calibre. Open calibre and click “Add books“. It will take calibre a couple of seconds to add the file.


Once the file is imported, edit the file’s metadata by clicking on the “Edit metadata” button. Here you can fill out the title of the book and the author’s name. You can also upload a cover image (if you have one) or calibre will generate one for you.


Next, click the “Convert books” button. In the new dialog box, select the “Look & Feel” section and the “Layout” tab. Check the “Remove spacing between paragraphs” option. This will tighten up the contents as well as indent each paragraph.


Now, set up the table of contents. Select the “Table of Contents” section. There are three options to focus on: Level 1 TOC, Level 2 TOC, and Level 3 TOC. For each, click the wand at the end. In this new dialog box, select the HTML tag that applies to the table of contents entry. For example, select h1 for Level 1 TOC and so on.


Next, tell calibre to include the table of contents. Select the “EPUB output” section and check the “Insert Inline Table of Contents” option. To create the EPUB file, click “OK“.


Now you have a professional-looking ebook file.

Fedora CoreOS Test Day coming up on 2020-06-08

Posted by Adam Williamson on June 03, 2020 10:45 PM

Mark your calendars for next Monday, folks: 2020-06-08 will be the very first Fedora CoreOS test day! Fedora QA and the CoreOS team are collaborating to bring you this event. We'll be asking participants to test the bleeding-edge next stream of Fedora CoreOS, run some test cases, and also read over the documentation and give feedback.

All the details are on the Test Day page. You can join in on the day on Freenode IRC, we'll be using #fedora-coreos rather than #fedora-test-day for this event. Please come by and help out if you have the time!

Kiwi TCMS 8.4

Posted by Kiwi TCMS on June 03, 2020 08:06 PM

We're happy to announce Kiwi TCMS version 8.4!

IMPORTANT: this is a medium-sized release which includes minor security fixes, many improvements & bug-fixes and translations in several new languages. It is the second release to include contributions via our open source bounty program. You can explore everything at https://public.tenant.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  602dddcf41a7    646 MB
kiwitcms/kiwi       6.2     7870085ad415    957 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955 MB
kiwitcms/kiwi       6.1     b559123d25b0    970 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970 MB
kiwitcms/kiwi       5.3.1   a420465852be    976 MB

Changes since Kiwi TCMS 8.3


  • Update Django from 3.0.5 to 3.0.7 - security update for functionality not used by Kiwi TCMS
  • Update bleach from 3.1.4 to 3.1.5
  • Update django-grappelli from 2.14.1 to 2.14.2
  • Update django-simple-history from 2.9.0 to 2.10.0
  • Update markdown from 3.2.1 to 3.2.2
  • Update pygithub from 1.50 to 1.51
  • Update python-redmine from 2.2.1 to 2.3.0
  • Update patternfly from 3.59.4 to 3.59.5
  • Add manage.py set_domain command to change Kiwi TCMS domain. Fixes Issue #971 (Ivajlo Karabojkov)
  • GitHub bug details now works for private issues
  • Gitlab bug details now works for private issues
  • JIRA bug details now works for private issues
  • Redmine bug details now works for private issues
  • New feature: 1-click bug report for Bugzilla
  • New feature: 1-click bug report for Gitlab
  • New feature: 1-click bug report for JIRA
  • New feature: 1-click bug report for Redmine
  • Reverting to older historical version via Admin panel now redirects to object which was reverted. Fixes Issue #1074
  • Documentation updates


Starting from v8.4 all supported bug trackers now feature 1-click bug report integration! Here's an example of how they look for GitHub and JIRA:


Some external bug trackers like Bugzilla & JIRA provide more flexibility over which fields are required for a new bug report. The current functionality should work for vanilla installations and would fall back to manual bug reporting if it can't create a new bug automatically!


  • Force creation of missing permissions for m2m fields from the tcms.bugs app:
    • bugs.add_bug_tags
    • bugs.change_bug_tags
    • bugs.delete_bug_tags
    • bugs.view_bug_tags
    • bugs.add_bug_executions
    • bugs.change_bug_execution
    • bugs.delete_bug_execution
    • bugs.view_bug_executions


TCMS admins of existing installations will have to assign these by hand to users/groups who will be allowed to change tags on bugs!


  • Define the KIWI_DISABLE_BUGTRACKER=yes environment variable if you wish to disable the internal bug tracker. Closes Issue #1370

Bug fixes

  • Workaround missing MariaDB CHARSET/COLLATION support, see our docker-compose.yml. Fixes Issue #1700
  • Install missing /usr/bin/mysql in container
  • Warning message for unconfigured Kiwi TCMS domain does not show HTML tags in Admin anymore. Fixes Issue #964
  • Unescape the &amp; string when trying to open new windows after clicking the 'Report bug' button in TestExecution. Fixes Issue #1533
  • Try harder to restore the original navigation menu instead of leaving bogus menu items. Fixes Issue #991
  • Robot Framework plugin is now GA. Close Issue #984
  • Add LinkReference to TestExecution after creating bug via 1-click. The UI still needs to be refreshed which will be implemented together with the redesign of the TestRun page
  • Update documented signature for API method TestCase.add_component to match current behavior, see https://stackoverflow.com/questions/61648405/

Refactoring & testing

  • Migrate check-docs-source-in-git to GitHub workflows. Fixes Issue #1552 (@Prome88)
  • Migrate build-for-pypi to GitHub workflows. Fixes Issue #1554 (@lcmtwn)
  • Add tests for TestCaseAdmin (Mariyan Garvanski)
  • Add tests for BugAdmin. Fixes Issue #1596 (Mariyan Garvanski)
  • Omit utils/test from coverage reports. Fixes Issue #1631 (@cmbahadir)
  • Omit tcms/tests from coverage reports. Fixes Issue #1630 (@cmbahadir)
  • Add tests for tcms.core.forms.fields - Fixes Issue #1629 (@cmbahadir)
  • Add tests for TestExecution.update() for case_text_version field (Rosen Sasov)
  • Refactor bulk-update methods in TestRun page to use JSON-RPC. Fixes Issue #1063 (Rosen Sasov)
  • Start using _change_reason instead of changeReason field in django-simple-history
  • Remove unused StripURLField & Version.string_to_id()
  • Refactoring around TestCase and TestPlan cloning methods
  • Start testing with the internal bug tracker disabled
  • Start testing with all supported external bug trackers. Fixes Issue #1079
  • Start Codecov for coverage reports
  • Add tests for presence of mysql/psql binaries in container
  • Add APIPermissionsTestCase with example in TestVersionCreatePermissions
  • Move most test jobs away from Travis CI to GitHub workflows



Some of the translations in Chinese and German and all of the strings in Japanese and Korean have been contributed by a non-native speaker and are sub-optimal, see OpenCollective #18663. If you are a native in these languages and spot strings which don't sit well with you we kindly ask you to contribute a better translation via the built-in translation editor!

Kiwi TCMS Enterprise v8.4-mt

  • Based on Kiwi TCMS v8.4
  • Update social-auth-app-django from 3.1.0 to 3.4.0
  • Add django-python3-ldap add-on for LDAP logins

For more info see https://github.com/MrSenko/kiwitcms-enterprise/#v84-mt-03-june-2020

Vote for Kiwi TCMS

Our website has been nominated in the 2020 .eu Web Awards and we've promised to do everything in our power to greet future FOSDEM visitors with an open source billboard advertising at BRU airport. We need your help to do that!

How to upgrade

Backup first! If you are using Kiwi TCMS as a Docker container then:

cd path/containing/docker-compose/
docker-compose down
# !!! docker tag to keep older image version on the machine
docker pull kiwitcms/kiwi
docker pull centos/mariadb-103-centos7
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Refer to our documentation for more details!

Happy testing!

Firefox on Fedora finally gets VA-API on Wayland.

Posted by Martin Stransky on June 03, 2020 09:50 AM
<figure aria-describedby="caption-attachment-147" class="wp-caption alignnone" data-shortcode="caption" id="attachment_147" style="width: 639px">video1<figcaption class="wp-caption-text" id="caption-attachment-147">I used the Toy Story 3 trailer as a test video and saw it a thousand times during the VA-API debugging. I should definitely watch the movie one day.</figcaption></figure>

Yes, it’s finally here. One and a half years after Tom Callaway, Engineering Manager @ Red Hat, added the patch to Chromium, we also get hardware accelerated video playback for Firefox. It’s a shame it took so long, but I’m still learning.

The VA-API support in Firefox is a bit specific in that it works under Wayland only right now. There isn’t any technical reason for that; I just don’t have enough time to implement it for X11, so Bug 1619523 is waiting for brave hackers.

There are a lot of people who greatly contributed to the Firefox Wayland port. Jan Horak (Red Hat) did all the uneasy Wayland patch reviews I threw at him. Jonas Ådahl (Red Hat) has helped me with the Wayland backend since the first Wayland patch four years ago. Robert Mader faced various Mutter/Gtk compositor bugs, Kenny Levinsen implemented adaptive Wayland vsync handlers, and Jan Andre Ikenmeyer has tirelessly triaged new Wayland bugs and cleaned up Bugzilla. Sotaro Ikeda (Mozilla) reviewed almost all Wayland patches for the graphics subsystem, Jean-Yves Avenard (Mozilla) reviewed the VA-API video patches and Jeff Gilbert (Mozilla) took on my OpenGL Wayland patches.

The contributor list is not exhaustive; I mentioned only the most active ones who come to mind right now. There are a lot of people who contribute to Firefox/Wayland. You’re the best!

How to enable it in Fedora?

When you run a Gnome Wayland session on Fedora you get Firefox with the Wayland backend by default. Make sure you have the latest Firefox 77.0 for Fedora 32 / Fedora 31.

You also need working VA-API acceleration and the ffmpeg packages. They are provided by the RPM Fusion repository: enable it and install ffmpeg, libva and libva-utils.

Intel graphics card

There are two drivers for Intel cards, libva-intel-driver (which provides i965_drv_video.so) and libva-intel-hybrid-driver (iHD_drv_video.so). Firefox works with libva-intel-driver only; intel-media-driver is broken due to sandboxing issues (Bug 1619585). I strongly recommend avoiding it at all costs; don’t disable the media sandbox for it.

AMD graphics card

AMD open source drivers decode video with the radeonsi_drv_video.so library, which is provided by the mesa-dri-drivers package and comes with Fedora by default.

NVIDIA graphics cards

I have no idea how NVIDIA cards are supported because I don’t own any. Please refer to the Fedora VA-API page for details.

Test VA-API state

When you have the driver set up, it’s time to prove it. Run vainfo in a terminal and check which media formats are decoded in hardware.


Here’s the vainfo output from my laptop with integrated Intel UHD Graphics 630. It loads the i965_drv_video.so driver and decodes the H.264/VP8/VP9 video formats. I don’t expect much more from it – it seems to be working.

Configure Firefox

It’s time to whip up the lazy fox 🙂 At about:config set gfx.webrender.enabled and widget.wayland-dmabuf-vaapi.enabled to true. Restart the browser, go to about:support and make sure WebRender is enabled…
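If you prefer a file over flipping switches in about:config, the same prefs can go into a user.js in your Firefox profile directory (a sketch; the pref names are the ones mentioned above):

```
user_pref("gfx.webrender.enabled", true);
user_pref("widget.wayland-dmabuf-vaapi.enabled", true);
```

Firefox reads user.js on startup and applies these values on top of prefs.js.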


…and Window Protocol is Wayland/drm.


Right now you should be able to decode and play clips on your graphics card alone, without any CPU involvement.

Get more info from Firefox log

VA-API video playback may not work for various reasons: an incompatible video codec, a large video size, missing system libraries and so on. All those errors can be diagnosed via the Firefox media log. Run in a terminal

MOZ_LOG="PlatformDecoderModule:5" MOZ_ENABLE_WAYLAND=1 firefox

and you should see something like


“VA-API FFmpeg init successful” claims that VA-API is up and running, VP9 is the video format and the “Got one VAAPI frame output…” line confirms that frame decoding works.

VA-API and Youtube

Unfortunately Youtube tends to serve various video formats, from H.264 to AV1. The actual codec info is shown after right-clicking on a video, under the “Stats for nerds” option.

<figure aria-describedby="caption-attachment-146" class="wp-caption alignnone" data-shortcode="caption" id="attachment_146" style="width: 1095px">vaapi4<figcaption class="wp-caption-text" id="caption-attachment-146">Surprisingly “avc1” means H.264 video. You can also expect AV1 and VP8/VP9 there.</figcaption></figure>

The Youtube video codec can be changed with the enhanced-h264ify Firefox add-on, so disable all software-decoded formats there. And that’s it – if you’re running Fedora you should be set for now.

<figure aria-describedby="caption-attachment-148" class="wp-caption aligncenter" data-shortcode="caption" id="attachment_148" style="width: 639px">video2<figcaption class="wp-caption-text" id="caption-attachment-148">We’re done, bro!</figcaption></figure>

VA-API with stock Mozilla binaries

Stock Mozilla Firefox 77.0 is missing some important stability/performance VA-API fixes which landed in Firefox 78.0 and are backported to the Fedora Firefox package. You should grab the latest Nightly binaries or Developer/Beta versions and run them under Wayland as

MOZ_ENABLE_WAYLAND=1 ./firefox

Mozilla binaries perform VP8/VP9 decoding with the bundled libvpx library, which is missing the VA-API decode path. If your hardware supports it and you want to use VA-API for VP8/VP9 decoding, you need to disable the bundled libvpx and force external ffmpeg: go to about:config and set media.ffvpx.enabled to false. Fedora sets that by default when VA-API is enabled.


Onion location and Onion names in Tor Browser 9.5

Posted by Kushal Das on June 03, 2020 03:07 AM

Yesterday Tor Browser 9.5 was released. I am excited about this release because of some user-focused updates.

Onion-Location header

If your webserver provides one extra header, Onion-Location, the Tor Browser will ask the user if they want to visit the onion site itself. The user can even choose to visit every such onion site by default. See it in action here.

To enable this in Apache, you need a configuration line like the one below in your website’s configuration.

Onion location demo

Header set Onion-Location "http://your-onion-address.onion%{REQUEST_URI}s"

Remember to enable the headers module (mod_headers).

For nginx, add the following in your server configuration.

add_header Onion-Location http://<your-onion-address>.onion$request_uri;
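The Apache and nginx snippets above build the same value: the onion host plus the original request URI. A tiny Python illustration of that construction (the helper name and onion address are made up, not part of Tor Browser or either web server):

```python
def onion_location(onion_host: str, request_uri: str) -> str:
    # Mirror the server rules above: onion host + original request URI.
    return f"http://{onion_host}.onion{request_uri}"

# Made-up onion address, same shape as the nginx example:
header = onion_location("your-onion-address", "/blog/post/")
print(header)  # http://your-onion-address.onion/blog/post/
```

Tor Browser only honors the header when the page itself is served over HTTPS.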

URL which we can remember aka onion names

This is the first proof of concept, built along with the Freedom of the Press Foundation (yes, my team) and HTTPS Everywhere, to help people use simple names for onion addresses. For example, below you can see that I typed theintercept.securedrop.tor.onion into the browser, and that took us to The Intercept’s SecureDrop address.

Onion name

How to enable TLS 1.3 in Apache, Nginx and Cloudflare

Posted by Fedora fans on June 02, 2020 07:30 AM


Security is one of the most important topics, and there are various tools and methods for achieving it. One way to raise the security of your service is to use TLS, which stands for Transport Layer Security; its newest version is TLS 1.3.

TLS 1.3 is based on the previous version of the protocol, TLS 1.2, with the difference that TLS 1.3 offers better performance and stronger security.



The TLS protocol can be enabled on web servers, CDNs and load balancers. In this article we will enable TLS 1.3 on the Nginx and Apache web servers, as well as on Cloudflare, which is a CDN.

Enabling TLS 1.3 in Nginx:

TLS 1.3 is supported from Nginx 1.13 onward. If you are using an older version of Nginx, simply update your web server.

To enable TLS 1.3 in Nginx, open your web server's configuration file, nginx.conf. The default configuration in the SSL settings section may look like the following line:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

Now simply add TLSv1.3 at the end of that line, so that it becomes:

ssl_protocols TLSv1 TLSv1.1 TLSv1.2 TLSv1.3;

Note that the setting above allows TLS 1/1.1/1.2/1.3. If you want to make your service a bit more secure, allow only TLS 1.2/1.3:

ssl_protocols TLSv1.2 TLSv1.3;

After making these changes, restart the Nginx service.


Enabling TLS 1.3 in Apache:

The Apache web server supports TLS 1.3 from Apache HTTP 2.4.38 onward. If you are using an older version of Apache, simply update your web server.

Configuring TLS 1.3 is just like enabling TLS 1.2 or TLS 1.1. Open ssl.conf, or whichever file holds the SSL configuration on your server, find the SSLProtocol line and add +TLSv1.3 to the end of it. The example setting below allows both TLS 1.2 and TLS 1.3:

SSLProtocol -all +TLSv1.2 +TLSv1.3

After making the changes, restart the Apache service.


Enabling TLS 1.3 in Cloudflare:

Cloudflare was the first CDN to implement and support TLS 1.3. TLS 1.3 is available to all Cloudflare customers, on both free and paid accounts, and is enabled by default for all websites.

To check, enable or disable TLS 1.3 in Cloudflare, log in to your account, go to the SSL/TLS section and select the Edge Certificates tab. On this tab you will find a section for enabling TLS 1.3, as shown in the image below:

On the same tab there is another section where you can choose the lowest TLS protocol version to accept:


Checking the TLS version:

There are various methods and tools to check whether our service or website uses TLS 1.3; some of them are described below.

Method 1:

To check your website's TLS version, you can go to the following address:

Then simply enter your website's address in the indicated box:

An example of this is shown in the image below:

Method 2:

Another way to check the TLS version is the SSL Labs website. Simply go to the following address and enter your website's address, with HTTPS, in the indicated box:

When the scan is finished, look at the Protocols section. A screenshot is shown below:

Method 3:

To detect your website's TLS and SSL versions, you can go to the following address:

Then enter your website's address in the indicated box. A sample output of this method is shown in the image below:

Method 4:

Another way to detect the TLS version is the hardenize.com website; simply go to its address:

Then enter your website's address in the indicated box and check the TLS results. An example is shown in the image below:

Method 5:

The TLS version can also be detected with the Firefox browser. First launch Firefox, then from the Tools menu choose Web Developer and then Network, or simply press Ctrl+Shift+E. Now enter your website's address in Firefox. After selecting your site's main address in the Network panel, you can see the website's TLS version in the Security tab. A screenshot is shown below:


Method 6:

Another way to check the TLS version is the Google Chrome browser. First open Google Chrome, then open the browser's Developer Tools, for example by pressing F12. Select the Security tab, then enter your website's address in Chrome. Under Main origin, select your site's main address to see its TLS version. A screenshot of this method follows:

Method 7:

Another method is to use openssl. To test for TLS 1.3, you can run the following command:

$ openssl s_client -connect fedorafans.com:443 -tls1_3

Note that you should replace fedorafans.com with the address you want to check. If the command's output shows a certificate chain and a handshake, it means the website supports the TLS version given in the command.

Similarly, to test for TLS 1.2 and TLS 1.1 you can use the following commands:

$ openssl s_client -connect fedorafans.com:443 -tls1_2

$ openssl s_client -connect fedorafans.com:443 -tls1_1
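The same check can be approximated from Python's standard ssl module: pin a client context to TLS 1.3, and any handshake against a server without TLS 1.3 support will fail. This is only a sketch of the context setup; no connection is made here:

```python
import ssl

# Build a client context that will only negotiate TLS 1.3,
# mirroring `openssl s_client -tls1_3`.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_3
ctx.maximum_version = ssl.TLSVersion.TLSv1_3

# ctx.wrap_socket(sock, server_hostname="fedorafans.com") would now
# raise an SSLError against any server that cannot speak TLS 1.3.
```

The TLSVersion enum requires Python 3.7 or newer.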

I hope you found this article useful, and may your services always be secure.


F32-20200601 Updated Live isos Released

Posted by Ben Williams on June 01, 2020 10:41 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F32-20200601-Live ISOs, carrying the 5.6.14-300 kernel.

Welcome to Fedora 32.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation have about 840+ MB of updates.)

A huge thank you goes out to IRC nicks dowdle and Southern-Gentleman for testing these ISOs.

We would also like to thank Fedora QA for running the following tests on our ISOs:



As always our ISOs can be found at http://tinyurl.com/Live-respins.

Fedora Community Blog monthly summary: May 2020

Posted by Fedora Community Blog on June 01, 2020 07:37 PM

This is the first in what I hope to make a monthly series summarizing the past month on the Community Blog. Please leave a comment below to let me know what you think. Stats In May, we published 31 posts. The site had 4,964 visits from 2,392 unique viewers. Readers wrote 13 comments. 202 visits […]

The post Fedora Community Blog monthly summary: May 2020 appeared first on Fedora Community Blog.

[Howto] My own mail & groupware server, part 1: what, why, how?

Posted by Roland Wolters on June 01, 2020 01:00 PM
<figure class="alignright size-thumbnail"></figure>

Running your own mail and groupware server can be challenging. I recently had to re-create my own setup from the ground up, and this blog post is the first in a series describing all the steps.

Running mail servers on your own

Running your own mail server sounds tempting: having control over a central piece of your own communication does sound good, right?

But that can be quite challenging: the mail standards are not written for today's way of operating technology. Many of them are vague, too generic to be really helpful, or just not widely deployed in reality. Other things are not standardized at all, so you have to guess and test (maximum message sizes, anyone?). Also, even the biggest providers have their own interpretations of the standards and sometimes aggressively ignore them, which you are forced to accept.

Also there is the spam problem: there is still a lot of spam out there. And since this is an ongoing fight, mail server admins have to constantly adjust their systems to the newest tricks and requirements. Think of SPF, DKIM, DMARC and DANE here.
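For context, SPF and DMARC are just DNS TXT records; two illustrative examples (example.com and the policy values are placeholders, not a recommendation):

```
example.com.        IN TXT "v=spf1 mx -all"
_dmarc.example.com. IN TXT "v=DMARC1; p=quarantine; rua=mailto:postmaster@example.com"
```

DKIM adds a public signing key under a selector name (selector._domainkey.example.com), and DANE pins the server's TLS certificate via TLSA records.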

Last but not least, the market is more and more dominated by large corporations. If your email is tagged as spam by one of those, you often have no way to figure out what the problem is – or how to fix it. They simply will not talk to you if you are not of equal size (or otherwise important). In fact, if I take a pessimistic look into the future of email, it might happen that all small mail service providers die and we all have to use the big services.

Thus the question is if anyone should run their own mail server at all. Frankly, I would not recommend it if you are not really motivated to do so. So be warned.

However, if you do decide to do it on your own, you will learn a lot about the underlying technology, about how a core technology of “the internet” works, and about how companies work and behave. And you will have huge control over a central piece of today's communication: mail is still a cornerstone of today's communication, even if we all hate it.

My background

To better understand my motivation it helps to know where I come from: In my past job at credativ I was project manager for a team dealing with large mail clusters. Like, really large. The people in the team were and are awesome folks who *really* understand mail servers. If you ever need help running your own open source mail infrastructure, get them on board, I would vouch for them anytime.

And while I never reached and never will reach the level of understanding the people in my team had, I got my fair share of knowledge: about the technological components, the developments in the field, the challenges and so on. Out of this I decided at some point that it would be fun to run my own mail server (yeah, not the brightest day of my life, in hindsight…).

Thus at some point I set up my own domain and mail server. And right from the start I wanted more than a mail server: I wanted a groupware server. Calendars, address books and the like. I do not recall how it all started, or what the first setup looked like, but I know that there was a Zarafa instance once, in 2013. Also I used OpenLDAP for a while, munin was in there as well, even a trac service to host a git repository. Certificates were shipped via StartSSL. Yeah, good times.

In summer 2017 this changed: I moved Zarafa out of the picture, in came SOGo. Also, trac was replaced by Gitlab and that again by Gitea. The mail server was completely based on Postfix, Dovecot and the likes (Amavisd, Spamassassin, ClamAV). OpenLDAP was replaced by FreeIPA, StartSSL by letsencrypt. All this was set up via docker containers, for easier separation of services and for simpler management. Nginx was the reverse proxy. Besides the groupware components and the git server there was also an OwnCloud (later Nextcloud) instance. Some of the container images were upstream, some I built myself. There was even a secondary mail server for emergencies, though that one was always somewhat out of date in terms of configuration.

This all served me well for years. Well, more or less. It was never perfect and missed a lot of features. But most mail got through.

Why the restart?

If it all served me well, why did I have to re-create the setup? Well, a few days ago I had to run an update of the certificates (still done manually at that time). Since I had to bring down the reverse proxy for it, I decided to run a full update of the underlying OS and of the docker images, and to reboot the machine.

It went fine and came back up – but something was wrong. Postfix had problems accepting mails. The more I dug down, the deeper the rabbit hole got. Postfix simply didn't answer after the “DATA” part of the SMTP communication anymore. Somehow I got that fixed – but then Dovecot didn't accept the mails for unknown reasons, and bounces were created!

I debugged for hours. But every time I thought I had figured it out, another problem came up. At one point I realized that the underlying FreeIPA service had erratic restarts and I had no idea why.

After three or four days I still had no idea what was going on or why my system was behaving so badly. Even with a verified working configuration from backup, things randomly broke. My three major suspects were:

  • FreeIPA had a habit in the past to introduce new problems in new images – maybe this image was broken as well? But I wasn’t able to find overly obvious issues or reports.
  • Docker was updated from an outdated version to something newer – and Docker never was a friend of CentOS firewall rules. Maybe the recent update screwed up my delicate network setup?
  • Faulty RAM? Weird, hard-to-reproduce and changing errors in known-to-be-working setups can be a sign of faulty RAM. Maybe the hardware was done for.

I realized I had to make a decision: abandon my own mail hosting approaches (the more sensible option) – or get a new setup running fast.

Well – guess what I did?

Running your own mail server: there is a project for that!

I decided to re-create my setup. And this time I decided not to do it all by myself: over the years I noticed that I was not the only person with the crazy idea to run their own mail server in containers. Others started entire projects around this, with many contributors and additional tooling. I realized that I would lose little by using code from such existing projects, but would gain a lot: better tested code, more people to ask and discuss with if problems arise, more features added by others, etc.

Two projects caught my interest over time; I had followed them on GitHub for quite a while already: Mailu and mailcow. Indeed, my original plan was to migrate to one of them in the long term, like in 2021 or something, and maybe even hosted on Kubernetes or at least Podman. However, with the recent outage of my mail server I had to act quickly, and decided to go with a Docker based setup again.

Both projects mentioned above are basically built around Docker Compose, Postfix, Dovecot, RSpamd and some custom admin tooling to make things easier. If you look closer they both have their advantages and special features, so if you are thinking of running your own mail server I suggest you look into them yourself.

For me the final decision was to go with mailu: mailu does support Kubernetes and I wanted to be prepared for a kube based future.

What’s next?

So with all this background you already know what to expect from the next posts: how to bring up mailu as a mail server, how to add Nextcloud and Gitea to the picture, and a few other gimmicks.

This will all be tailored to my needs – but I will try to keep it all as close to the defaults as possible. First to keep it simple but also to make this content reusable for others. I do hope that this will help others to start using their own setups or fine tuning what they already have.

Image by Gerhard Gellinger from Pixabay


Use FastAPI to build web services in Python

Posted by Fedora Magazine on June 01, 2020 08:00 AM

FastAPI is a modern Python web framework that leverages the latest Python improvements in asyncio. In this article you will see how to set up a container-based development environment and implement a small web service with FastAPI.

Getting Started

The development environment can be set up using the Fedora container image. The following Dockerfile prepares the container image with FastAPI, Uvicorn and aiofiles.

FROM fedora:32
RUN dnf install -y python-pip \
    && dnf clean all \
    && pip install fastapi uvicorn aiofiles
CMD ["uvicorn", "main:app", "--reload"]

After saving this Dockerfile in your working directory, build the container image using podman.

$ podman build -t fastapi .
$ podman images
localhost/fastapi latest 01e974cabe8b 18 seconds ago 326 MB

Now let’s create a basic FastAPI program and run it using that container image.

from fastapi import FastAPI

app = FastAPI()

@app.get("/")
async def root():
    return {"message": "Hello Fedora Magazine!"}

Save that source code in a main.py file and then run the following command to execute it:

$ podman run --rm -v $PWD:/srv:z -p 8000:8000 --name fastapi -d fastapi
$ curl http://127.0.0.1:8000
{"message":"Hello Fedora Magazine!"}

You now have a running web service using FastAPI. Any changes to main.py will be automatically reloaded. For example, try changing the “Hello Fedora Magazine!” message.

To stop the application, run the following command.

$ podman stop fastapi

Building a small web service

To really see the benefits of FastAPI and the performance improvement it brings (see comparison with other Python web frameworks), let’s build an application that manipulates some I/O. You can use the output of the dnf history command as data for that application.

First, save the output of that command in a file.

$ dnf history | tail --lines=+3 > history.txt

The command is using tail to remove the headers of dnf history which are not needed by the application. Each dnf transaction can be represented with the following information:

  • id : number of the transaction (increments every time a new transaction is run)
  • command : the dnf command run during the transaction
  • date: the date and time the transaction happened

Next, modify the main.py file to add that data structure to the application.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DnfTransaction(BaseModel):
    id: int
    command: str
    date: str

FastAPI comes with the pydantic library, which allows you to easily build data classes and benefit from type annotations to validate your data.
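For instance, pydantic will coerce the id field (which the history.txt parsing yields as a string) into an int during validation, and reject rows that cannot be converted. A small illustration using the model defined above:

```python
from pydantic import BaseModel

class DnfTransaction(BaseModel):
    id: int
    command: str
    date: str

# Fields parsed out of a history.txt line arrive as strings;
# pydantic converts "103" into the int 103 during validation.
t = DnfTransaction(id="103", command="update", date="2020-05-25 08:35")
print(t.id)  # 103
```

Passing something non-numeric as id (e.g. id="abc") would raise a ValidationError instead of silently storing bad data.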

Now, continue building the application by adding a function that will read the data from the history.txt file.

import aiofiles

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class DnfTransaction(BaseModel):
    id: int
    command: str
    date: str

async def read_history():
    transactions = []
    async with aiofiles.open("history.txt") as f:
        async for line in f:
            transactions.append(DnfTransaction(
                id=line.split("|")[0].strip(" "),
                command=line.split("|")[1].strip(" "),
                date=line.split("|")[2].strip(" ")))
    return transactions

This function makes use of the aiofiles library which provides an asyncio API to manipulate files in Python. This means that opening and reading the file will not block other requests made to the server.

Finally, change the root function to return the data stored in the transactions list.

@app.get("/")
async def read_root():
    return await read_history()

To see the output of the application, run the following command

$ curl http://127.0.0.1:8000 | python -m json.tool
[
    {
        "id": 103,
        "command": "update",
        "date": "2020-05-25 08:35"
    },
    {
        "id": 102,
        "command": "update",
        "date": "2020-05-23 15:46"
    },
    {
        "id": 101,
        "command": "update",
        "date": "2020-05-22 11:32"
    }
]


FastAPI is gaining a lot of popularity in the Python web framework ecosystem because it offers a simple way to build web services using asyncio. You can find more information about FastAPI in the documentation.

The code of this article is available in this GitHub repository.

Photo by Jan Kubita on Unsplash.

Episode 199 – Special cases are special: DNS, Websockets, and CSV

Posted by Josh Bressers on June 01, 2020 12:01 AM

Josh and Kurt talk about a grab bag of topics. A DNS security flaw, port scanning your machine from a web browser, and CSV files running arbitrary code. All of these things end up being the result of corner cases. Letting a corner case be part of a default setup is always a mistake. Yes always, not even that one time.

<audio class="wp-audio-shortcode" controls="controls" id="audio-1641-1" preload="none" style="width: 100%;"><source src="https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_199_Special_cases_are_special_DNS_Websockets_and_CSV.mp3?_=1" type="audio/mpeg">https://traffic.libsyn.com/secure/opensourcesecuritypodcast/Episode_199_Special_cases_are_special_DNS_Websockets_and_CSV.mp3</audio>

Show Notes

Creating a multi-region Wireguard VPN in AWS

Posted by Jonathan Dieter on May 31, 2020 08:48 PM

On January 2nd, 2020, I started as the Head of Software Services at Spearline, an audio quality testing company based out of Skibbereen, Ireland. At Spearline, most of our infrastructure is in Amazon’s cloud, but we do have over a hundred callservers around the world. These servers are the ones that actually place the phone calls that we use to check the audio quality on the lines that we’re testing. One of my tasks is to improve security, and one way I’ve done that is to move our callservers behind a VPN that’s connected to our primary Amazon VPC.

Now, to give a bit of background, most of our work and all of our data processing happens in the eu-west-1 region, but we do actually have VPCs with one or two servers setup in most of the available AWS regions. These regions are connected with all other regions with a Peering Connection, which allows us to, for example, have a server in Singapore connect to one of our servers in Ireland using private IP addresses only.

The problem is that we have many callservers that aren’t in AWS, and, traditionally, these servers would have been whitelisted in our infrastructure based on their public IP address. This meant that we sometimes had unencrypted traffic passing between our callservers and the rest of our infrastructure, and that there was work to do when a callserver changed its public IP address. It looked like the best solution was to setup a Wireguard VPN server and have our callservers connect using Wireguard.

Since the VPN server was located in eu-west-1, this had the unfortunate side effect of dramatically increasing the latency between the callserver and servers in other regions. For example, we have a non-AWS callserver located in Singapore that was connecting to a server in the AWS region ap-southeast-1 (Singapore) to figure out where it was supposed to call. The latency between the two servers was about 3 ms, but when going through our VPN server in Ireland, the latency jumped to almost 400ms.

The other problem is that Amazon VPC peering agreements do not allow you to forward traffic from a non-VPC private IP address. So, if the private IP range for our Ireland VPC was and the private range for our callservers was, Singapore would only allow traffic coming from the Ireland VPC if it was from and drop all traffic originating from a VPN client. AWS does allow you to create Transit Gateways that will allow extra ranges through, but they cost roughly $36 a month per region, which was jacking up the cost of this project significantly.

Diagram of VPN server per region configuration

My solution was to setup a VPN server (mostly t3.nano instances) in each region that we have servers. These VPN servers communicate with each other over a “backbone” VPN interface, where they forward traffic from the VPN client to the appropriate VPN server for the region. So, for example, if a VPN client connected to the vpn-ireland server wanted to connect to a server in the ap-southeast-1 region, the vpn-ireland server would forward the traffic to the vpn-singapore server, which would then send the traffic into our ap-southeast-1 VPC. The server in the VPC would respond, and since its target is a VPN address, the traffic would go back to the vpn-singapore server, which would send it back to vpn-ireland, which would then pass it back to the VPN client.

Traffic route from VPN client in Ireland to server in Singapore

I then wrote a simple script to run on the VPN servers to compare each client’s latest handshake across the VPN servers and automatically route traffic to the appropriate server. This led me to my final optimization. I did some investigation, and Amazon has a product, the AWS Global Accelerator, that allows you to forward a single public IP address to different servers in different regions, depending on where the client connecting to the IP is located. Because Wireguard is stateless, this allows us to have clients automatically connect to the closest VPN server, and, within about five seconds, all the VPN servers will be routing traffic appropriately.
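The route-exchange logic boils down to: for each client, whichever VPN server saw the most recent handshake owns the route. A rough Python sketch of that comparison (not the actual wg-route code; the data shape loosely mimics what `wg show <iface> latest-handshakes` reports, and all names and timestamps are invented):

```python
def newest_handshakes(server_reports):
    """server_reports maps a VPN server name to a dict of
    {client_public_key: unix_timestamp_of_latest_handshake}.
    Returns {client_public_key: server_name}: the server that
    should route traffic for each client."""
    owner = {}
    best = {}
    for server, handshakes in server_reports.items():
        for client, ts in handshakes.items():
            # A timestamp of 0 means "never shook hands here".
            if ts > best.get(client, 0):
                best[client] = ts
                owner[client] = server
    return owner

# Hypothetical example: clientA last shook hands with vpn-singapore,
# so routes for it should point there; clientB stays with vpn-ireland.
routes = newest_handshakes({
    "vpn-ireland":   {"clientA": 1590000000, "clientB": 1590000500},
    "vpn-singapore": {"clientA": 1590000900, "clientB": 0},
})
# routes == {"clientA": "vpn-singapore", "clientB": "vpn-ireland"}
```

The real script additionally has to install the winning routes into the kernel routing table and re-run periodically, which is why synchronized clocks matter.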

Using the Singapore example above, this setup allows our non-AWS Singapore server to once again ping a server in AWS region ap-southeast-1 with a latency of 3 ms, without affecting its latency to Ireland in any significant way. And the best part is that we don’t have to tell the Singapore server which VPN server is closest. It goes to the closest one automatically.

Building the VPN

To setup your own multi-region Wireguard VPN network, do the following. Note that we use ansible to do most of it.

  1. Set up a VPC in each region you care about. For each VPC, set up a peering connection with all of the other VPCs. Make sure each VPC uses a different subnet (I’d suggest something like 10.1.0.0/16, 10.2.0.0/16, etc.). Creating a VPC is beyond the scope of this blog entry.
  2. Set up a t3.nano instance in each region you care about, in the VPC you created above. I would suggest using a distribution with a new enough kernel that Wireguard is built-in, something like Fedora. Make sure each instance has an Elastic IP.
  3. Verify that each VPN server can ping the other VPN servers using their private (in-VPC) IPs.
  4. Turn on IP forwarding (net.ipv4.ip_forward=1) and turn off the return path filter (net.ipv4.conf.all.rp_filter=0). Also, make sure to disable the “Source/destination check” in AWS.
  5. Set up a new route table called backbone in /etc/iproute2/rt_tables.
  6. Open up UDP ports 51820-51821 and TCP port 51819 in the firewall.
  7. Set up a “backbone” Wireguard interface on each VPN server, using the config here as a starting point. Each server must have a unique key and unique IP address, but they should all use the same port. Each server should have entries for all the other servers with their public key and (assuming you want to keep AWS traffic costs down) private IP address. AllowedIPs for each entry should include the server’s backbone IP address (10.50.0.x/32) and the server’s VPC IP range (10.x.0.0/16). This allows traffic to be forwarded through the VPN server to the attached VPC. Ping the other VPN servers’ backbone IP addresses to verify connectivity over the VPN.
  8. Add the backbone interface to your firewall’s trusted zone.
  9. Set up a “client” Wireguard interface on each VPN server, using the config here as a starting point. This should contain the keys and IP addresses for all your VPN clients, and should be identical on all the VPN servers.
  10. Start the wg-route service from wg-route on GitHub on all the VPN servers. The service will automatically detect the other VPN servers and start exchanging routes to the VPN clients. Please note that the system time needs to be fairly well synchronized on all the VPN servers.
  11. Connect a VPN client to one of the VPN servers. Within five to ten seconds, all the servers should be routing any traffic to that VPN client through the server it’s connected to. Test by pinging the different VPN servers’ backbone IP addresses from the client.
  12. Start the wg-status service from wg-route on GitHub on all the VPN servers. This service lets the Global Accelerator know that the VPN server is ready for connections.
  13. Set up an AWS Global Accelerator and add a listener for the UDP port used by your “client” Wireguard interface. For the listener, add an endpoint group for each region where you’ve set up a VPN server, with a TCP health check on port 51819. Then, in each endpoint group, add the VPN server in the region as an endpoint.
  14. Point your VPN client at the Global Accelerator IP. You should be able to ping any of the VPN servers. If you log in to one of the VPN servers and run journalctl -f -u wg-route -n 100, you should see a log message telling you which VPN server your client connected to.
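The original post links to example configs that are not reproduced here; as a minimal sketch of the backbone interface described in step 7 (keys, addresses, and the choice of port 51820 are illustrative, not the author’s exact values):

```ini
# /etc/wireguard/backbone.conf on vpn-ireland (illustrative sketch)
[Interface]
PrivateKey = <this server's private key>
Address = 10.50.0.1/24
ListenPort = 51820

# One [Peer] section per other VPN server, e.g. vpn-singapore.
[Peer]
PublicKey = <vpn-singapore's public key>
# Use the private in-VPC IP to keep AWS traffic costs down.
Endpoint = <vpn-singapore private IP>:51820
# The peer's backbone address plus its whole VPC range, so traffic
# can be forwarded through it into the attached VPC.
AllowedIPs = 10.50.0.2/32, 10.2.0.0/16
```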

Problems and limitations

  • If you bring down a VPN server (by running systemctl stop wg-status), any clients connected to that server will continue to stay connected to it until there’s been 30 seconds of inactivity on the UDP connection. If you’re using a persistent keep-alive of less than 30 seconds, that means the client will always stay connected, even though a new client would be connected to a different server. This is due to a bug in the AWS Global Accelerator, and, according to the Amazon technician I spoke to, they are working on fixing it. For now, a script on the VPN client that re-initializes the UDP connection when it’s unable to ping the VPN server is sufficient.
  • If a VPN server fails, the VPN clients should switch to another VPN server (see the limitation above), but they will be unable to access any servers in the VPC that the failed VPN server is in. There are two potential solutions. Either move all servers in the VPC onto the VPN, removing the speed and cost benefits of using the VPC, or set up a network load balancer in the VPC and spin up a second VPN server. Please note that the second solution would require some extra work on backbone routing that hasn’t yet been done.

Stormtrooper Bohemian Rhapsody by Andrew Martin

XSD2Go - Automatically generate golang xml parsers

Posted by Simon Lukasik on May 30, 2020 12:00 AM

Did you ever need to write an XML parser from scratch? You can have a parser ready in a few minutes! Let me introduce you to xsd2go.

Why bother?

Most of my readers will probably have experience with widespread XML applications like RSS or Atom feeds, SVG, or XHTML. For those well-known XML applications you will find a good library encapsulating the parsing for you. You just include an existing parser in your project and you are done with it. However, what would you do if you could not use it (think of a license mismatch), or what would you do if there were no parsing library at all?

There are many XML applications around. Here comes a (probably incomplete) list of XML formats I had to touch in my past life: Atom, DocBook, Office Open XML, OpenDocument (ODF), OSCAL, Rolie, RSS, SAML, SCAP (+dozens of sub-formats), SOAP, SVG, XMPP, Epub, WS-Policy, XHTML, XSLT.

What is XSD?

You already know that, but let me briefly run through it. XSD stands for XML Schema Definition. For a given XML application (think of RSS), it describes what a well-formed document looks like, capturing the structure really well. XSD will tell you what attributes each element has, what sub-elements can be found in each element, and what the cardinalities are, meaning how many sub-elements of a certain type you can expect and which are optional and which are not.

In effect (and by design), XSD can be used to automatically assess a document’s adherence to a given standard.

A little snark: XSD is itself a true XML application, and hence it is expressive; you will find many ways to achieve the same descriptive result.

What is XSD2Go?

XSD2Go is a minimalistic project that converts XSD to golang code. For a given set of XSD files, xsd2go produces golang code that contains the structures/models relevant for parsing the given XML format. The produced golang code contains XML parsing hints that can be used together with the standard encoding/xml golang package to parse the XMLs.

⚠️ You should run xsd2go, before ever importing encoding/xml to your project. ⚠️

I cannot stress this enough.

I mean, xsd2go is 6 days old and thus very unfinished, but I still believe it already offers good value compared to starting from scratch.

How does it work?

Just briefly. Xsd2go will:

  • parse your master XSD file
    • and process all xsd:import elements, parsing imported XSD files
    • effectively building a workspace and a dependency tree of all relevant XSDs.
    • Circular dependencies are allowed.
  • pre-process each XSD
    • verify the validity of internal references (think of xsd:element/@ref, xsd:attribute/@ref, @type and similar)
    • figure out the golang import () paths needed (as a reminder, each XSD file results in one golang package)
    • discover and fix any namespace clashes that would result in invalid golang code
    • re-compute cardinalities of elements that are wrapped by xsd:sequence and xsd:choice
    • figure out golang types
  • produce golang code using the simple and readable text/template golang package
  • re-format the created golang code using go/format

Show me an exemplary usage

gocomply_xsd2go convert schemas/sds/1.3/scap-source-data-stream_1.3.xsd github.com/gocomply/scap pkg/scap/models
Processing 'schemas/sds/1.3/scap-source-data-stream_1.3.xsd'
	Parsing: schemas/sds/1.3/scap-source-data-stream_1.3.xsd
	Parsing: schemas/xccdf/1.2/xccdf_1.2.xsd
	Parsing: schemas/common/xml.xsd
	Parsing: schemas/xccdf/1.2/cpe-language_2.3.xsd
	Parsing: schemas/cpe/2.3/cpe-naming_2.3.xsd
	Parsing: schemas/oval/5.11.3/oval-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/oval-common-schema.xsd
	Parsing: schemas/common/xmldsig-core-schema.xsd
	Parsing: schemas/oval/5.11.3/aix-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/android-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/apache-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/apple-ios-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/asa-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/catos-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/esx-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/freebsd-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/hpux-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/independent-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/ios-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/iosxe-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/junos-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/linux-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/macos-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/netconf-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/pixos-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/sharepoint-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/solaris-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/unix-definitions-schema.xsd
	Parsing: schemas/oval/5.11.3/windows-definitions-schema.xsd
	Parsing: schemas/cpe/2.3/cpe-dictionary_2.3.xsd
	Parsing: schemas/ocil/2.0/ocil-2.0.xsd
	Parsing: schemas/common/catalog.xsd
	Parsing: schemas/common/xlink.xsd
	Generating 'pkg/scap/models/ds/models.go'
	Generating 'pkg/scap/models/apple_ios_def/models.go'
	Generating 'pkg/scap/models/macos_def/models.go'
	Generating 'pkg/scap/models/cpe/models.go'
	Generating 'pkg/scap/models/android_def/models.go'
	Generating 'pkg/scap/models/catos_def/models.go'
	Generating 'pkg/scap/models/cdf/models.go'
	Generating 'pkg/scap/models/xml_dsig/models.go'
	Generating 'pkg/scap/models/aix_def/models.go'
	Generating 'pkg/scap/models/apache_def/models.go'
	Generating 'pkg/scap/models/hpux_def/models.go'
	Generating 'pkg/scap/models/ind_def/models.go'
	Generating 'pkg/scap/models/xlink/models.go'
	Generating 'pkg/scap/models/oval/models.go'
	Generating 'pkg/scap/models/esx_def/models.go'
	Generating 'pkg/scap/models/sp_def/models.go'
	Generating 'pkg/scap/models/sol_def/models.go'
	Generating 'pkg/scap/models/cpe_dict/models.go'
	Generating 'pkg/scap/models/inter/models.go'
	Generating 'pkg/scap/models/oval_def/models.go'
	Generating 'pkg/scap/models/asa_def/models.go'
	Generating 'pkg/scap/models/junos_def/models.go'
	Generating 'pkg/scap/models/pixos_def/models.go'
	Generating 'pkg/scap/models/iosxe_def/models.go'
	Generating 'pkg/scap/models/linux_def/models.go'
	Generating 'pkg/scap/models/netconf_def/models.go'
	Generating 'pkg/scap/models/unix_def/models.go'
	Generating 'pkg/scap/models/er/models.go'
	Generating 'pkg/scap/models/freebsd_def/models.go'
	Generating 'pkg/scap/models/ios_def/models.go'
	Generating 'pkg/scap/models/win_def/models.go'

The outputs can be found at github.com/GoComply/scap/pkg/scap/models

and lastly, kudos

… go to Gabe Alford, who kept nudging me about the need for such a project. Thank you, Gabe!

Fedora program update: 2020-22

Posted by Fedora Community Blog on May 29, 2020 09:15 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora this week. Fedora 30 has reached end-of-life. Elections voting is open through 11 June. I have weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. Announcements Help wanted Upcoming meetings Releases CPE update […]

The post Fedora program update: 2020-22 appeared first on Fedora Community Blog.

Curated playlist of videos

Posted by Harish Pillay 9v1hp on May 29, 2020 10:30 AM

I did not realise that there are plenty of videos of events at which I spoke, interviews I gave and panels I was part of. For my own purposes, I thought it is best if I can bring them together in a playlist. All the videos are on youtube (and I am sure that there would be on vimeo as well) and I will have to do this housekeeping every now and then.

I’ve found 36 videos so far. I would like to place a link to each here (extracted from the playlist). Will do that soon.

<iframe allowfullscreen="true" class="youtube-player" height="552" src="https://www.youtube.com/embed/videoseries?list=PL9tUz4B_AneIkO8pphIiZ8hRS5NIbk1qk&amp;hl=en_US" style="border:0;" width="980"></iframe>

Aligning Cockpit with Common Criteria

Posted by Cockpit Project on May 29, 2020 12:00 AM

In the last few releases, new features were delivered to make Cockpit meet the Common Criteria, thus making it possible to undergo the certification process in the near future. This certification is often required by large organizations, particularly in the public sector, and also gives users more confidence in using the Web Console without risking their security.

This article provides a summary of these new changes with reference to the given CC norms.

Cockpit session tracking

There is a multitude of tools to track logins. Cockpit sessions are now correctly registered in utmp, wtmp and btmp, allowing them to be displayed in tools like who, w, last and lastlog. Cockpit also works correctly with pam_tally2 and pam_faillock.

[root@m1 ~]# who
root     pts/0        2019-12-13 08:09 (
admin    web console  2019-12-13 08:09

Delivered in versions 209 and 216.

AC-9 Previous Logon (Access) Notification

Support for banners on the login page

Companies or agencies may need to show a warning stating that use of the computer is for lawful purposes only, that the user is subject to surveillance, and that anyone trespassing will be prosecuted. This must be stated before login so that users have fair warning. Like SSH, Cockpit can optionally show the content of a banner file on the login screen.

This needs to be configured in /etc/cockpit/cockpit.conf. For example, to show the content of /etc/issue.cockpit on the login page:
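A sketch of that configuration, assuming the Banner option lives in the [Session] section of cockpit.conf:

```ini
[Session]
Banner=/etc/issue.cockpit
```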



Delivered in version 209.

FTA_TAB.1 Default TOE access banners

Session timeouts

To prevent abuse of forgotten Cockpit sessions, Cockpit can be set up to automatically log users out of their current session after some time of inactivity. The timeout (in minutes) can be configured in /etc/cockpit/cockpit.conf. For example, to log out the user after 15 minutes of inactivity:
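A sketch of the corresponding setting, assuming the IdleTimeout option (in minutes) in the [Session] section of cockpit.conf:

```ini
[Session]
IdleTimeout=15
```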


Automatic logout

Delivered in version 209 (with default timeout of 15 minutes, but since version 218 the default timeout is disabled).

FMT_SMF_EXT.1.1 Enable/disable session timeout

Show “last login” information upon log in

Cockpit displays information about the last time the account was used and how many failed login attempts have occurred for this account since the last successful login. This is an important and required security feature: users become aware if their account has been logged into without their knowledge, or if someone is trying to guess their password.

Last login banner

Delivered in version 216.

AU-14 Session Audit (2)

All systems go

Posted by Fedora Infrastructure Status on May 28, 2020 07:27 PM
Service 'Package Updates Manager' now has status: good: Everything seems to be working.

Creating a dedicated log management layer

Posted by Peter Czanik on May 28, 2020 10:51 AM

Event logging is a central source of information both for IT security and operations, but different teams use different tools to collect and analyze log messages. The same log message is often collected by multiple applications. Having each team using different tools is complex, inefficient and makes systems less secure. Using a single application to create a dedicated log management layer independent of analytics instead, however, has multiple benefits.

Using syslog-ng is a lot more flexible than most log aggregation tools provided by log analytics vendors. This is one of the reasons why my talks and blogs focused on how to make your life easier using its technical advantages. Of course, I am aware of the direct financial benefits as well. If you are interested in that part, talk to my colleagues on the business side. They can help you to calculate how much you can save on your SIEM licenses when syslog-ng collects log messages and ensures that only relevant messages reach your SIEM, and only at a predictably low message rate. You can learn more about this use case on our Optimizing SIEM page.

In this blog, I will focus on a third aspect: simplifying complexity. This was the focus of many of my conference discussions before the COVID-19 pandemic. If we think a bit more about it, we can see that this is not really a third aspect, but a combination of the previous two instead. Using the flexibility of syslog-ng, we create a dedicated log management layer in front of different log analytics solutions. By reducing complexity, we can save in many ways: on computing and human resources, and on licensing when using commercial tools for log analysis as well.

Back to basics

While this blog focuses on how to consolidate multiple log aggregation systems specific to analytics software into a common log management layer, I also often see that many organizations still do not see the need for central log collection. So, let’s quickly jump back to the basics: why central log collection is important. There are three major reasons:

  • Convenience: a single place to check logs instead of many.

  • Availability: logs are available even when the sender machine is down or offline.

  • Security: you can check the logs centrally even if a host was breached and logs were deleted or falsified locally.

The client-relay-server architecture of syslog-ng can make sure that central logging scales well, even for larger organizations with multiple locations. You can learn more about relays at https://www.syslog-ng.com/community/b/blog/posts/what-syslog-ng-relays-are-good-for

Reducing complexity

Collecting system logs with one application locally, forwarding the logs with another one, collecting audit logs with a different app, buffering logs with a dedicated server, and processing logs with yet another app centrally means installing several different applications across your infrastructure. This is the architecture of the Elastic stack, for example. Many others are simpler, but still separate system log collection (journald and/or one of the syslog variants) from log shipping. This is the case with the Splunk forwarder and many of the different Logging as a Service agents. And on top of that, you might need a different set of applications for each log analysis software. Using multiple software solutions makes a system more complex and difficult to update, and needs more computing, network and storage resources as well.

All these features can be implemented using a single application, which in the end can feed multiple log analysis tools. A single app to learn and to follow in bug & CVE trackers. A single app to push through the security and operations teams, instead of many. Fewer resources are needed, both on the human and the technical side.

Implementing a dedicated log management layer

The syslog-ng application collects logs from many different sources, performs real-time log analysis by processing and filtering them, and finally, it stores the logs or routes them for further analysis.

In an ideal world, all log messages come in a structured format, ready to be used for log analysis, alerting or dashboards. But in the real world, only part of the logs fall into this category. Traditionally, most log messages come as free-format text messages. These are easy for humans to read, which was the original use of log messages. However, nowadays logs are rarely processed by the human eye. Fortunately, syslog-ng has several tools to turn unstructured (and many of the structured) message formats into name-value pairs, and thus delivers the benefits of structured log messages.

Once you have name-value pairs, log messages can be further enriched with additional information in real-time, which helps respond to security events faster. One way of doing that is adding geo-location based on IP addresses. Another way is adding contextual data from external files, like the role of a server based on the IP address, or the role of the user based on the name. Data from external files can also be used to filter messages (for example, to check firewall logs to determine whether certain IP addresses are contained in various black lists for malware command centers, spammers, and so on).

Logging is subject to an increasing number of compliance regulations. PCI-DSS and many European privacy laws require removing sensitive data from log messages. Using syslog-ng, logs can be anonymized in a way that they remain useful for security analytics.

With log messages parsed and enriched, you can now make informed decisions where to store or forward log messages. You can already do basic alerting in syslog-ng, and you can receive critical log messages on a Slack channel. There are many ready-to-use destinations within syslog-ng, like Kafka, MongoDB or Elasticsearch. Also, you can easily create your own custom destination based on the generic network or HTTP destinations, and using templates to log in a format as required by a SIEM or a Logging as a Service solution, like Sumo Logic.
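The flow described above can be sketched as a single syslog-ng configuration: receive logs over the network, parse them into name-value pairs, keep only the relevant ones, and route them to a destination. Source names, the filter, and the Elasticsearch URL below are illustrative, not from the article.

```
# Collect logs from the network over the syslog protocol.
source s_net {
    syslog(transport("tcp") port(601));
};

# Turn unstructured key=value payloads into name-value pairs.
parser p_kv {
    kv-parser(prefix(".kv."));
};

# Forward only messages relevant for the analytics backend.
filter f_relevant {
    level(warning..emerg);
};

# One of many ready-to-use destinations; could equally be Kafka,
# MongoDB, Slack, or a templated HTTP destination for a SIEM.
destination d_elastic {
    elasticsearch-http(
        url("http://elastic.example.com:9200/_bulk")
        index("logs")
        type("")
    );
};

log {
    source(s_net);
    parser(p_kv);
    filter(f_relevant);
    destination(d_elastic);
};
```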

What is next?

Many of these concepts were covered before in earlier blogs, and the individual features are covered well in the documentation. If you want to learn more about them and see some configuration examples, join me at the Pass the SALT conference, where among many other interesting talks, you can also learn more in depth about creating a dedicated log management layer.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or even chat with us. For a list of possibilities, check our GitHub page under the “Community” section at https://github.com/syslog-ng/syslog-ng. On Twitter, I am available as @PCzanik.

PHP version 7.3.19RC1 and 7.4.7RC1

Posted by Remi Collet on May 28, 2020 09:28 AM

Release Candidate versions are available in the testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, a perfect solution for such tests, and also as base packages.

RPM of PHP version 7.4.7RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 32 or remi-php74-test repository for Fedora 30-31 and Enterprise Linux 7-8.

RPM of PHP version 7.3.19RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30-31 or remi-php73-test repository for Enterprise Linux.

emblem-notice-24.pngPHP version 7.2 is now in security mode only, so no more RC will be released.

emblem-notice-24.pngInstallation : read the Repository configuration and choose your version.

Parallel installation of version 7.4 as Software Collection:

yum --enablerepo=remi-test install php74

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Update of system version 7.4:

yum --enablerepo=remi-php74,remi-php74-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module reset php
dnf module enable php:remi-7.4
dnf --enablerepo=remi-modular-test update php\*

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

or, the modular way (Fedora and RHEL 8):

dnf module reset php
dnf module enable php:remi-7.3
dnf --enablerepo=remi-modular-test update php\*

Notice: version 7.4.7RC1 is in Fedora rawhide for QA.

emblem-notice-24.pngEL-8 packages are built using RHEL-8.1

emblem-notice-24.pngEL-7 packages are built using RHEL-7.8

emblem-notice-24.pngRC version is usually the same as the final version (no change accepted after RC, exception for security fix).

Software Collections (php73, php74)

Base packages (php)

Python Course - Printing values and functions

Posted by Alvaro Castillo on May 28, 2020 05:42 AM

In this installment we will cover:

  • How to print values in multiple ways
  • Iterators
  • How to work with a basic function
  • The return and yield statements
  • Assigning variables to a function
  • Assigning default values to those variables
  • Closures
  • Generators
  • The lambda function

Substituting data types in print() statements

When we want to include a value that comes from variables, lists, sets, tuples... it can be done in multiple ways. For example:

variable = "Susana"
print("Hola me llamo:", variable)
Hola me llamo: Susana

We can also do it this way:

variable = "Susana"
print("Hola me llamo: %s" % variable)
Hola me llamo: Susana

Keep in mind that with this approach we have to declare the type of the value being substituted: str = %s, int = %i, float = %f. NOTE: complex values (%c) have no direct substitution in this form, so you have to use another method, like the previous one.

Or this way:

variable = "Susana"
print(f'Hola me llamo {variable}')

And we also have this one:

variable = "Susana"
print("Hola me llamo {}".format(variable))

In short, there are many ways to do substitutions in str and other data types, which you can look up in the official documentation.


Iterators

An iterator is a kind of pointer that lets us return the values we ask it for.

lista = ["Hola","esto","es","un","ejemplo","de","iterador"]

# Initialize the iterator
mi_iterador = iter(lista)

# The value can be printed this way, which returns the word "Hola"
print(next(mi_iterador))

# Advance to the next item, which contains the word "esto", without printing it
next(mi_iterador)

# Print the word "es"
print(next(mi_iterador))

# Print all remaining elements of the iterator:
for item in mi_iterador:
    print(item)

Functions

Functions are organized blocks of code containing a series of instructions that can be reused. For example, sleeping is a function we have; there are many variables such as the place, the intensity of the light, whether we are comfortable... but in the end the result is resting. The same goes for functions.

An example function declaration:

def my_funcion():
  # Code block


The return statement

return lets you return a value from the function and end it, rather than using a print():

def func(a):
  return a



The yield statement

In contrast to return, yield lets the function keep executing the code that follows, creating a kind of coroutine, managing results or taking turns: like a relay runner who passes the baton to another, with the lap time being the return value. We use iterators to extract the data.

def func(a):
  print("Devolvemos el primer resultado: %i" % (a))
  yield a
  c = a - 2
  print("Devolvemos el segundo valor: %i" % (c))
  yield c

abc = func(20)

mi_iter = iter(abc)

for item in mi_iter:
  print(item)

Types of functions in Python

In Python, we have two types of functions: those we create ourselves and built-in ones. The ones we create are defined in our application, script... while the...

Python Course - Flow control, conditionals and loops

Posted by Alvaro Castillo on May 28, 2020 05:17 AM

Flow control

Flow control structures are used to define how a script, application... will act, and what will be executed immediately after the condition is evaluated and compared.


if

This control structure lets you evaluate a condition and execute a piece of code if it is met.

>>> if (condition):
>>>  Code block


if-else

if-else is a control structure that lets you do one thing if the condition is met; if it is not, a single alternative block of code is executed, without considering other possibilities.

if (condition):
  Code block
else:
  Code block

Let’s look at an example. If we have an Opel-brand car, print a message saying "Tienes un Opel"; otherwise, print a message saying "No tienes un coche Opel".

>>> marca = "Citröen"
>>> if (marca == "Opel"):
>>>   print("Tienes un Opel")
>>> else:
>>>   print("No tienes un coche Opel")
No tienes un coche Opel


if-elif-else

But what happens when we want to check multiple conditions? We can’t keep nesting if-else inside each other like there’s no tomorrow. For that we have the if-elif-else structure. It lets us do one thing or another based on a condition, which can be composed of one or more operators: arithmetic, logical...

if (condition 1):
  Code block
elif (condition 2):
  Code block
elif (condition 3):
  Code block
else:
  Code block

Let’s look at an example. If we have an Opel-brand car, print a message saying "Tienes un Opel"; if not, check other brands, and fall back to reporting that the brand is not registered.

>>> marca = "Citröen"
>>> if (marca == "Opel"):
>>>   print("Tienes un Opel")
>>> elif (marca == "Citröen"):
>>>   print("Tienes un coche Citröen")
>>> elif (marca == "Audi"):
>>>   print("Tienes un Audi")
>>> else:
>>>   print("Tu marca de coche no está registrada")
Tienes un coche Citröen

All of this can get even more complicated using other operators and other nested if-elif-else blocks; for example, combining comparison and logical operators like this:

>>> marca_coche = "Toyota"
>>> modelo_coche = "AE87"
>>> if (marca_coche == "Toyota" and modelo_coche == "AE92"):
>>>   if (motor_coche == 1600):
>>>     print("Perfecto")
>>>   elif (motor_coche == 1400):
>>>     print("Bien")
>>>   elif (motor_coche == 1200):
>>>     print("Cuidado con las cuestas")
>>>   else:
>>>     print("Esto huele a chasis")
>>> elif (marca_coche == "Citröen" and modelo_coche == "Saxo"):
>>>   print("Enhorabuena, tienes un coche que pesa poco y corre mucho.")
>>> else:
>>>   print("Error 404, Tu coche no encontrado.")
Error 404, Tu coche no encontrado.

The for loop

What happens if we want to traverse a list or run code multiple times? Obviously an if won’t do, since it only lets us evaluate a condition once, and once evaluated, it stops executing.

for iteration_variable in sequence:
  Code block

How does it work? sequence holds what to iterate over; for example, we can have it traverse all the values of a list and print each one through iteration_variable.

>>> frutas = [...

Major service disruption

Posted by Fedora Infrastructure Status on May 28, 2020 12:39 AM
Service 'Package Updates Manager' now has status: major: bodhi not accepting new updates

Fedora 32 elections voting now open

Posted by Fedora Community Blog on May 28, 2020 12:00 AM

Voting in the Fedora 32 elections is now open. Go to the Elections app to cast your vote. Voting closes at 23:59 UTC on Thursday 11 June. Don’t forget to claim your “I Voted” badge when you cast your ballot. Links to candidate interviews are below. Fedora Council There is one seat open on the […]

The post Fedora 32 elections voting now open appeared first on Fedora Community Blog.

Mindshare election: Interview with Maria Leandro (tatica)

Posted by Fedora Community Blog on May 27, 2020 11:55 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Maria Leandro Fedora account: tatica IRC nick: tatica (found in fedora-social – fedora-latam – fedora-ambassadors – fedora-design – […]

The post Mindshare election: Interview with Maria Leandro (tatica) appeared first on Fedora Community Blog.

Mindshare election: Interview with Alessio Ciregia (alciregi)

Posted by Fedora Community Blog on May 27, 2020 11:55 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Alessio Ciregia Fedora account: alciregi IRC nick: alciregi (found in fedora-join #fedora-it #fedora-ask #fedora others) Fedora user wiki […]

The post Mindshare election: Interview with Alessio Ciregia (alciregi) appeared first on Fedora Community Blog.

Mindshare election: Interview with Daniel Lara (danniel)

Posted by Fedora Community Blog on May 27, 2020 11:55 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Daniel Lara Fedora account: danniel IRC nick: danniel (found in #fedora #fedora-ambassadors #fedora-br# fedora-latam) Fedora user wiki page […]

The post Mindshare election: Interview with Daniel Lara (danniel) appeared first on Fedora Community Blog.

Mindshare election: Interview with Sumantro Mukherjee (sumantrom)

Posted by Fedora Community Blog on May 27, 2020 11:55 PM

This is a part of the Mindshare Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Sumantro Mukherjee Fedora account: sumantrom IRC nick: sumantrom (found in fedora-qa #fedora-test-day #fedora-classroom #fedora-india #fedora-meeting #fedora-join #fedora-devel #fedora-kernel […]

The post Mindshare election: Interview with Sumantro Mukherjee (sumantrom) appeared first on Fedora Community Blog.

Council election: Interview with James Cassell (cyberpear)

Posted by Fedora Community Blog on May 27, 2020 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with James Cassell Fedora account: cyberpear IRC nick: cyberpear (I tend to idle in very many channels, participating in […]

The post Council election: Interview with James Cassell (cyberpear) appeared first on Fedora Community Blog.

Council election: Interview with Aleksandra Fedorova (bookwar)

Posted by Fedora Community Blog on May 27, 2020 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Aleksandra Fedorova Fedora account: bookwar IRC nick: bookwar (found in #fedora-devel, #fedora-ci) Fedora user wiki page Questions Why […]

The post Council election: Interview with Aleksandra Fedorova (bookwar) appeared first on Fedora Community Blog.

Council election: Interview with Till Maas (till)

Posted by Fedora Community Blog on May 27, 2020 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Till Maas Fedora account: till IRC nick: tyll (found in #fedora-devel, #fedora-de, #fedora-meeting-1, #nm, #nmstate, #systemroles) Fedora user […]

The post Council election: Interview with Till Maas (till) appeared first on Fedora Community Blog.

Council election: Interview with Alberto Rodriguez Sanchez (bt0dotninja)

Posted by Fedora Community Blog on May 27, 2020 11:50 PM

This is a part of the Council Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Alberto Rodriguez Sanchez Fedora account: bt0dotninja IRC nick: bt0 (found in fedora-commops #fedora-mktg #fedora-ambassadors #fedora-latam #fedora-join #fedora-mindshare #fedora-neuro) […]

The post Council election: Interview with Alberto Rodriguez Sanchez (bt0dotninja) appeared first on Fedora Community Blog.

FESCo election: Interview with Frantisek Zatloukal (frantisekz)

Posted by Fedora Community Blog on May 27, 2020 11:45 PM

This is a part of the FESCo Elections Interviews series. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Frantisek Zatloukal Fedora account: frantisekz IRC nick: frantisekz (found in fedora-qa, fedora-devel, fedora-admin) Fedora user wiki page Questions Why do you want to be a […]

The post FESCo election: Interview with Frantisek Zatloukal (frantisekz) appeared first on Fedora Community Blog.

FESCo election: Interview with Michal Novotný (clime)

Posted by Fedora Community Blog on May 27, 2020 11:45 PM

This is a part of the FESCo Elections Interviews series. The voting period starts on Thursday, 28 May and closes promptly at 23:59:59 UTC on Thursday, 11 June. Interview with Michal Novotný Fedora account: clime IRC nick: clime (found in #fedora-admin, #git, #c++, #bash, #fedora-apps, #libravatar) Fedora user wiki page Questions Why do you want […]

The post FESCo election: Interview with Michal Novotný (clime) appeared first on Fedora Community Blog.

All systems go

Posted by Fedora Infrastructure Status on May 27, 2020 11:03 PM
New status good: Everything seems to be working. for services: Package Updates Manager, Documentation website, Fedora elections

Disrupted CVE Assignment Process

Posted by Michael Catanzaro on May 27, 2020 09:00 PM

Due to an invalid TLS certificate on MITRE’s CVE request form, I have — ironically — been unable to request a new CVE for a TLS certificate verification vulnerability for a couple weeks now. (Note: this vulnerability does not affect WebKit and I’m only aware of one vulnerable application, so impact is limited; follow the link if you’re curious.) MITRE, if you’re reading my blog, your website’s contact form promises a two-day response, but it’s been almost three weeks now, still waiting.

Update May 29: I received a response today stating my request has been forwarded to MITRE’s IT department, and less than an hour later the issue is now fixed. I guess that’s score +1 for blog posts. Thanks for fixing this, MITRE.

Browser security warning on MITRE's CVE request form

Of course, the issue is exactly the same as it was five years ago, the server is misconfigured to send only the final server certificate with no chain of trust, guaranteeing failure in Epiphany or with command-line tools. But the site does work in Chrome, and sometimes works in Firefox… what’s going on? Again, same old story. Firefox is accepting incomplete certificate chains based on which websites you’ve visited in the past, so you might be able to get to the CVE request form or not depending on which websites you’ve previously visited in Firefox, but a fresh profile won’t work. Chrome has started downloading the missing intermediate certificate automatically from the issuer, which Firefox refuses to implement for fear of allowing the certificate authority to track which websites you’re visiting. Eventually, we’ll hopefully have this feature in GnuTLS, because Firefox-style nondeterministic certificate verification is nuts and we have to do one or the other to be web-compatible, but for now that is not supported and we reject the certificate. (I fear I may have delayed others from implementing the GnuTLS support by promising to implement it myself and then never delivering… sorry.)
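Python's ssl module behaves like the strict verifiers described here: the default context neither caches previously seen intermediates nor downloads missing ones, so a server that sends only its leaf certificate fails verification deterministically. A minimal sketch (function name is mine):

```python
import socket
import ssl

def fetch_peer_subject(host, port=443):
    """Connect with strict certificate verification, roughly like
    Epiphany/GnuTLS or command-line tools. A server that omits its
    intermediate certificates raises SSLCertVerificationError here,
    regardless of what sites were visited before."""
    ctx = ssl.create_default_context()  # verifies chain and hostname
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # Flatten the subject RDNs into a simple dict
            return dict(pair[0] for pair in tls.getpeercert()["subject"])
```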

We could have a debate on TLS certificate verification and the various benefits or costs of the Firefox vs. Chrome approach, but in the end it’s an obvious misconfiguration and there will be no further CVE requests from me until it’s fixed. (Update May 29: the issue is now fixed. :) No, I’m not bypassing the browser security warning, even though I know exactly what’s wrong. We can’t expect users to take these seriously if we skip them ourselves.

Major service disruption

Posted by Fedora Infrastructure Status on May 27, 2020 06:48 PM
New status major: Our openshift cluster is having difficulties that we are investigating for services: Documentation website, Fedora elections, Package Updates Manager

All systems go

Posted by Fedora Infrastructure Status on May 27, 2020 04:50 PM
Service 'Package Updates Manager' now has status: good: Everything seems to be working.

Broken vulnerability severities

Posted by Josh Bressers on May 27, 2020 02:37 PM

This blog post originally started out as a way to point out why the NVD CVSS scores are usually wrong. One of the amazing things about having easy access to data is you can ask a lot of questions, questions you didn’t even know you had, and find answers right away. If you haven’t read it yet, I wrote a very long series on security scanners. One of the struggles I have is that there are often many “critical” findings in those scan reports that aren’t actually critical. I wanted to write something that explained why that was, but because my data took me somewhere else, this is the post you get. I knew CVSSv3 wasn’t perfect (even the CVSS folks know this), but I found some really interesting patterns in the data. The TL;DR of this post is: It may be time to start talking about CVSSv4.

It would have been easy to write a post that made a lot of assumptions and generally made up facts to suit whatever argument I was trying to make (which was the first draft of this). I decided to crunch some data to make sure my hypotheses were correct, and because graphs are fun. It turns out I learned a lot of new things, which of course also means it took me way longer to do this work. The scripts I used to build all these graphs can be found here if you want to play along at home. You can save yourself a lot of suffering by using my work instead of trying to start from scratch.

<figure class="alignright size-medium is-resized"><figcaption>CVSSv3 scores</figcaption></figure>

First, we’re going to do most of our work with the integer part of the CVSSv3 scores. The scores are generally an integer and one decimal place, for example ‘7.3’. Using the decimal place makes the data much harder to read in this post, and the results using only integers were the same. If you don’t believe me, try it yourself.
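The bucketing described above can be sketched in a few lines (the helper name is mine, the scores are made-up examples):

```python
from collections import Counter

def integer_buckets(scores):
    """Collapse CVSSv3 scores to their integer part and count
    how many scores land in each bucket, as done for the graphs."""
    return Counter(int(s) for s in scores)

# 7.3 and 7.8 both land in bucket 7:
print(integer_buckets([7.3, 7.8, 9.8, 5.0]))
```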

So this is the distribution of CVSSv3 scores NVD has logged for CVE IDs. Not every ID has a CVSSv3 score, which is OK. It’s a somewhat bell-curve shape, which should surprise nobody.

<figure class="alignright size-medium is-resized"><figcaption> CVSSv2 scores</figcaption></figure>

Just for the sake of completeness, and because someone will ask, here is the CVSSv2 graph. This doesn’t look as nice, which illustrates one of the problems CVSSv2 had: it tended to favor certain scores. CVSSv3 was built to fix this. I show this graph simply to point out that progress is being made; please don’t assume I’m trying to bash CVSSv3 here (I am, a little). I’m using this opportunity to explain some things I see in the CVSSv3 data. We won’t be looking at CVSSv2 again.

Now I wanted something to compare this data to; how can we decide if the NVD data is good, bad, or something in the middle? I decided to use the Red Hat CVE dataset. Red Hat does a fantastic job capturing things like severity and CVSS scores; their data is insanely open, it’s really good, and it’s easy to download. I would like to do this with some other large datasets someday, like Microsoft, but getting access to that data isn’t so simple and I have limited time.

<figure class="alignleft size-medium is-resized"><figcaption>Red Hat CVSSv3 scores</figcaption></figure>

Here are the Red Hat CVSSv3 scores. It looks a lot like the NVD CVSSv3 data, which given how CVSSv3 was designed, is basically what anyone would expect.

Except, it turns out, it’s kind of not the same. If we take each CVE ID’s NVD score, subtract the Red Hat score, and graph the result, we get something that shows NVD likes to score higher than Red Hat does. For example, let’s look at CVE-2020-10684. Red Hat gave it a CVSSv3 score of 7.9, while NVD gave it 7.1. This means in our dataset the difference would be 7.1 – 7.9 = -0.8
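The per-CVE difference can be computed with a small helper (the function name is mine; the scores are the ones from the CVE-2020-10684 example):

```python
def score_deltas(nvd, redhat):
    """NVD score minus Red Hat score for every CVE ID
    present in both datasets."""
    return {cve: round(nvd[cve] - redhat[cve], 1)
            for cve in nvd.keys() & redhat.keys()}

print(score_deltas({"CVE-2020-10684": 7.1}, {"CVE-2020-10684": 7.9}))
# → {'CVE-2020-10684': -0.8}
```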

<figure class="alignright size-medium is-resized"><figcaption>Difference between Red Hat and NVD CVSSv3 scores</figcaption></figure>

This data is more similar than I expected. About 41 percent of the scores are within 1. The zero bar doesn’t mean they match; very few match exactly. It’s pretty clear from that graph that the NVD scores are generally higher than the Red Hat scores. This shouldn’t surprise anyone, as NVD will generally err on the side of caution, while Red Hat has a deeper understanding of how a particular vulnerability affects their products.

Now by itself we could write about how NVD scores are often higher than they should be. If you receive security scanner reports you’re no doubt used to a number of “critical” findings that aren’t very critical at all. Those ratings almost always come from this NVD data. I didn’t think this data was compelling enough to stand on its own, so I kept digging, what other relationships existed?

<figure class="alignright size-medium is-resized"><figcaption>Red Hat severity vs CVSSv3 scores</figcaption></figure>

The graph that really threw me for a loop was the Red Hat CVSSv3 scores graphed against the Red Hat assigned severity. Red Hat doesn’t use the CVSSv3 scores to assign severity; they use something called the Microsoft Security Update Severity Rating System. This rating system predates CVSS and in many ways is superior, as it is very simple to score and simple to understand. If you clicked that link and read the descriptions, you can probably score vulnerabilities using this scale now. Learning CVSSv3 will take a few days to get started and a long time to be good at it.

If we look at the graph we can see lows are generally on the left side, moderates in the middle, and highs toward the right, but what’s the deal with those critical flaws? Red Hat’s CVSSv3 scores place them in the moderate to high range, but the Microsoft scale says they’re critical. I looked at some of these; strangely, Flash Player accounts for about 2/3 of those critical issues. That’s a name I thought I would never hear again.

The reality is there shouldn’t be a lot of critical flaws; they are meant to be rare occurrences, and generally are. So I kept digging. What is the relationship between the Red Hat severity and the NVD severity? The NVD severity is based on the CVSSv3 score.
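NVD's rating is a mechanical mapping from the base score, using the qualitative severity bands from the CVSS v3.x specification. A sketch of that mapping:

```python
def nvd_severity(score):
    """Map a CVSSv3 base score to the qualitative severity rating
    defined by the CVSS v3.x specification (the bands NVD uses)."""
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"

print(nvd_severity(7.1))  # → High
```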

This is where my research sort of fell off the rails. The ratings provided by NVD and the ratings Red Hat assigns have some substantial differences. I have a few more graphs that help drive this home. If we look at the NVD rating vs the Red Hat ratings, we see the inconsistency.

<figure class="wp-block-image size-large"><figcaption>NVD severity vs Red Hat severity</figcaption></figure>

I think the most telling graph here is the one showing that Red Hat’s low vulnerabilities are mostly medium, high, and critical under NVD’s CVSSv3 scoring. That strikes me as a problem. I could maybe understand a lot of low and moderate issues, but there’s something very wrong with this data. There shouldn’t be this many high and critical findings.

<figure class="alignright size-medium is-resized"><figcaption>Red Hat severity vs CVSSv3 scores</figcaption></figure>

Even if we graph the Red Hat CVSSv3 scores for their low issues, the graph doesn’t look like it should, in my opinion. There’s a lot of scoring at 4 or higher.

Again, I don’t think the problem is Red Hat or NVD; I think they’re using the tools they have as best they can. It should be noted that I only have two sources of data, NVD and Red Hat. I really need to find more data to see if my current hypothesis holds; then we can determine whether what we see from Red Hat is repeated, or whether Red Hat is an outlier.

There are also some more details that can be dug into. Are there certain CVSSv3 fields where Red Hat and NVD consistently score differently? Are there certain applications and libraries that create the most inconsistency? It will take time to work through this data; I’m not sure how to start looking at it just yet (if you have ideas or want to try it out yourself, do let me know). I view this post as the start of a journey, not a final explanation. CVSS scoring has helped the entire industry. I have no doubt some sort of CVSS scoring will always exist, and should always exist.

The takeaway here was going to be an explanation of why the NVD CVSS scores shouldn’t be used to make decisions about severity. I think the actual takeaway now is that the problem isn’t really NVD (well, they play a part); the real problem is CVSSv3. CVSSv3 scores shouldn’t be trusted as the only source for calculating vulnerability severity.

Major service disruption

Posted by Fedora Infrastructure Status on May 27, 2020 10:07 AM
Service 'Package Updates Manager' now has status: major: bodhi is misbehaving, users can experience timeouts

Virtualization with KVM on Fedora 32

Posted by Bernardo C. Hermitaño Atencio on May 27, 2020 02:18 AM

A great alternative to many virtualization programs is KVM, a Linux kernel module that is very easy to use for running many virtualized operating systems.

KVM: open source technology that turns the Linux kernel into a hypervisor that can be used for virtualization.
QEMU: an open source generic machine emulator and virtualizer.
Bridge: this package contains the network bridging utilities, which let two or more computers share an Internet connection that reaches only one of them.
Libvirt: an open source API and set of tools for managing platform virtualization.

The following two videos walk through installing and configuring KVM with QEMU and libvirt using a bridge network adapter; a virtual machine is also imported.

<figure class="wp-block-embed-youtube wp-block-embed is-type-rich wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="401" src="https://www.youtube.com/embed/9Rnt_zclGVU?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" width="712"></iframe>
<figcaption>Installing and configuring KVM, QEMU, and libvirt with a bridged network adapter.</figcaption></figure> <figure class="wp-block-embed-youtube wp-block-embed is-type-rich wp-embed-aspect-16-9 wp-has-aspect-ratio">
<iframe allowfullscreen="true" class="youtube-player" height="401" src="https://www.youtube.com/embed/D1H8itv2Uu0?version=3&amp;rel=1&amp;fs=1&amp;autohide=2&amp;showsearch=0&amp;showinfo=1&amp;iv_load_policy=1&amp;wmode=transparent" style="border:0;" width="712"></iframe>
<figcaption>Importing a virtual machine to run on KVM.</figcaption></figure>

Cockpit 220

Posted by Cockpit Project on May 27, 2020 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 220.

New navigation with integrated switching of hosts

The navigation has been redesigned and brings four major improvements:

  • One-level navigation: The current two-level navigation has been squashed for better use of space and better discoverability
  • Integrated hosts switching: Switching between hosts as well as editing them can be done now directly from the navigation without using the ‘Dashboard’ component
  • Better discoverability of applications: Applications are shown as the first group in the menu and are also searchable
  • Access level for all hosts: You can change between Administrative and Limited access on every host, right from the navigation
New one-level navigation Integrated host switcher

Logs: Inline help for filtering

The previous release introduced new advanced search features for logs. This release adds a help button that shows an overview of accepted options, and the journalctl command corresponding to the current filter.

Logs inline help

Storage: Improve side panel on details page

The side panel on the storage details page has been unified and uses the same layout as on the storage overview page.

Storage side panel

Try it out

Cockpit 220 is available now.

GNOME Foundation Board of Directors: a Year in Review

Posted by Allan Day on May 26, 2020 02:49 PM

The 2020 elections for the GNOME Foundation Board of Directors are underway, so it’s a good time to look back over the past 12 months and see what the current board has been up to. This is intended as a general update for members of the GNOME project, as well as a potential motivator for those who might be interested in running in the election!

Who’s on the board?

This year, the board has been Britt Yazel, Tristan Van Berkom, Philip Chimento, Rob McQueen, Carlos Soriano, Federico Mena Quintero, and myself.

Rob’s been president, I’ve been the vice-president and chair, Carlos has been treasurer, Philip has been secretary, and Federico has been vice-secretary.

In addition to these formal roles, each of our board members has brought their existing expertise and areas of interest: Britt has brought a concern with marketing and engagement, Federico has been our code of conduct expert, Rob has brought his knowledge of all things Flatpak and Flathub, Carlos knows everything Gitlab, and Philip and Tristan have both been able to articulate the needs and interests of the GNOME developer community.

Meetings and general organisation

The board has been meeting for 1 hour a week, according to a schedule which we’ve been using for a little while: we have a monthly Executive Director’s report, a monthly working session, and standard meetings in the gaps in-between.

This year we made greater use of our Gitlab issue tracker for planning meeting agendas. A good portion of the issues there are private, but anyone can interact with the public ones.

Making the board into a board

Historically, the GNOME Foundation Board has performed a mix of different roles, some operational and some strategic. We’ve done everything from planning and approving events, to running fundraisers, to negotiating contracts.

Much of this work has been important and valuable, but it’s not really the kind of thing that a Board of Directors is supposed to do. In addition to basic legal responsibilities such as compliance, annual filings, etc, a Board of Directors is really supposed to focus on governance, oversight and long-term planning, and we have been making a concerted effort to shift to this type of role over the past couple of years.

This professionalising trend has continued over the past year, and we even had a special training session about it in January 2020, when we all met in Brussels. Concrete steps that we have taken in this direction include developing high-level goals for the organisation, and passing more operational duties over to our fantastic staff.

This work is already having benefits, and we are now performing a more effective scrutiny role. Over the next year, the goal is to bring this work to its logical conclusion, with a schedule for board meetings which better-reflects its high-level governance and oversight role. As part of this, the hope is that, when the new board is confirmed, we’ll switch from weekly to monthly meetings.

This is also the reason behind our change to the bylaws last year, which is taking effect for the first time in this election. As a result of this, directors will have a term of two years. This will provide more consistency from one year to the next, and will better enable the Foundation and staff to make long-term plans. There has been a concern people would be unwilling to sit as a Director for a two year period, but we have significantly reduced the time commitment required of board members, and hope that this will mitigate any concerns prospective candidates might have.

Notable events

The GNOME Foundation has had a lot going on over the last 12 months! Much of this has been “operational”, in the sense that the board has been consulted and has provided oversight, but hasn’t actually been doing the work. These things include hiring new staff, the coding education challenge that was recently launched, and the Rothschild patent case which was settled only last week.

In each case the board has been kept informed, has given its view and has had to give formal approval when necessary. However, the areas where we’ve been most actively working have, in some respects, been more prosaic. This includes things like:

Code of conduct. The board was involved with the review and drafting of the new GNOME code of conduct, which we subsequently unanimously approved in September 2019. We also set up the new Code of Conduct Committee, which is responsible for administering the code of conduct.

Linux App Summit 2019, which happened in Barcelona. This event happened due to the joint support of the GNOME Foundation and KDE e.V, and the board was active in drafting the agreement that allowed this joint support to take place.

Guidelines for committees. As the board takes a more strategic oversight role, we want our committees to be run and report more consistently (and to operate according to the bylaws), so we’ve created new guidelines.

2020 budget. The foundation has had a lot going on (the coding challenge, patent case, etc) and all of this impacted the budget, and made financial scrutiny particularly important.

GNOME software definition and “Circle” proposal. This is a board-led initiative that addresses long-standing confusion around which projects are included within GNOME, which can make use of our infrastructure and branding, and whether the teams involved are eligible for Foundation membership. The initiative was announced on Discourse last week for initial community feedback.

Updated conference policy. This primarily involved passing responsibility for conference approvals to our staff, but we have also clarified the rules for conference bidding processes (see the policy page).

In addition to this, the board has been involved with its usual events and workload, including meeting with our advisory board, the AGM, and voting on any issues which require an OK from the board.


2020 Elections

As I mentioned at the beginning of this post, the 2020 board elections are currently happening. Candidates have until Friday to announce their interest. As someone who has served on the board for a while, it’s definitely something that I’d recommend! If you’re interested and want more information, don’t hesitate to reach out. Or, if you’re feeling confident, just throw your hat in the ring.

Using Rust to access Internet over Tor via SOCKS proxy 🦀

Posted by Kushal Das on May 26, 2020 09:46 AM

Tor provides a SOCKS proxy so that any application can use it to connect to the Onion network. The default port is 9050. The Tor Browser also provides the same service, on port 9150. In this post, we will see how we can use the same SOCKS proxy to access the Internet using Rust.

You can read my previous post to do the same using Python.
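As a quick refresher on the Python approach, the requests library can route traffic through the same SOCKS proxy (this is a sketch; the helper name is mine, and it assumes the PySocks extra from `pip install requests[socks]`):

```python
def tor_proxies(port=9050):
    """Build a requests-style proxies dict pointing at the local Tor
    SOCKS proxy (9050 for the tor daemon, 9150 for Tor Browser).
    The socks5h:// scheme makes the proxy resolve DNS, which is
    required for .onion hosts."""
    url = f"socks5h://127.0.0.1:{port}"
    return {"http": url, "https": url}

# With requests[socks] installed and a local tor daemon running:
# import requests
# res = requests.get("https://httpbin.org/get", proxies=tor_proxies())
# print(res.status_code, res.json())
```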

Using reqwest and tokio-socks crates

I am using reqwest and tokio-socks crates in this example.

The Cargo.toml file.

[package]
name = "usetor"
version = "0.1.0"
authors = ["Kushal Das <mail@kushaldas.in>"]
edition = "2018"

# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html

[dependencies]
tokio = { version = "0.2", features = ["macros"] }
reqwest = { version = "0.10.4", features = ["socks", "json"] }
serde_json = "1.0.53"

The source code:

use reqwest;
use tokio;
use serde_json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    // Tor's SOCKS proxy listens on 127.0.0.1:9050 by default
    let proxy = reqwest::Proxy::all("socks5://127.0.0.1:9050").unwrap();
    let client = reqwest::Client::builder()
        .proxy(proxy)
        .build()
        .unwrap();

    let res = client.get("https://httpbin.org/get").send().await?;
    println!("Status: {}", res.status());

    let text: serde_json::Value = res.json().await?;
    println!("{:#?}", text);

    Ok(())
}

Here we are converting the response data into JSON using serde_json. The output looks like this.

✦ ❯ cargo run
    Finished dev [unoptimized + debuginfo] target(s) in 0.06s
     Running `target/debug/usetor`
Status: 200 OK
Object({
    "args": Object({}),
    "headers": Object({
        "Accept": String(…),
        "Host": String(…),
        "X-Amzn-Trace-Id": String(…),
    }),
    "origin": String(…),
    "url": String(…),
})

Instead of any normal domain, you can also connect to any .onion domain via the same proxy.