Fedora People

A preliminary review of /e/

Posted by Kevin Fenzi on March 23, 2019 01:10 AM

I’ve been running LineageOS on my phone for a while now (and CyanogenMod before that) and have been reasonably happy overall. Still, even LineageOS is pretty intertwined with the Google ecosystem, which worries me, especially given that Google is first and foremost an ad company.

I happened to run across a mention of /e/ somewhere, and since LineageOS jumped from being based on AOSP 15 to AOSP 16, which required a new install anyhow, I decided to check it out.

As you may have gathered from the above, /e/ is a phone OS and platform forked from LineageOS 14.1. It’s located at https://e.foundation, based in France (a non-profit) and headed by Gaël Duval, whom Linux folks may know from his Mandrake/Mandriva days. The foundation has a lot of noble goals, starting with “/e/’s first mission is to provide everyone knowledge and good practices around personal data and privacy.” They also have a slogan: “Your data is your data!”

I downloaded and installed a 0.5 version. Since I already had my phone unlocked and TWRP recovery set up, I just backed up my existing LineageOS install (to my laptop), wiped the phone, and installed /e/. The install was painless, and since (of course) no Google connections are wanted, I didn’t even have to download a gapps bundle. The install worked just fine and I was off and exploring:

The good:

  • Most everything worked fine. Basically, if it worked in LineageOS 14.1, it works here (phone, wifi, bluetooth, etc.)
  • Many of the apps I use with my phone seem fine: freeotp, signal, twidere, tiny tiny rss reader, revolution irc are all the same apps I am used to using and are install-able from f-droid just fine.
  • There is of course no google maps anymore, but this was a great chance to try out OsmAnd, which has come a very long way. It’s completely usable except for one thing: The voice navigation uses TTS voices and it sounds like a bad copy of Stephen Hawking is talking to you. Otherwise it’s great!
  • My normal ebook reader app, FBReader, is available, but I decided to look around as it’s getting a bit long in the tooth. So far I have settled on KOReader, which was originally a Kobo app but works pretty nicely on this OS as well.
  • For podcasts I had been using dogcatcher, but now I am trying out AntennaPod.
  • The security level of the image I got was March 2019, so they are keeping up with at least the “android” security updates now.

The meh:

  • The F-Droid app isn’t pre-installed, but it’s easy to install. They plan to have their own app store that will just show additional information over the Play Store, etc.
  • There is ‘fennec’ in F-Droid. You can’t seem to install Firefox proper, as all download links lead to the Play Store.
  • I had been using Google Photos to store backups/easy web-access versions of pictures and movies I took, but of course now I need to look into alternatives. Perhaps syncthing.

The bad:

  • A few apps I was using are of course non-free and not available in F-Droid: Tello, Vizio SmartCast, various horrible IoT smart-things apps, my credit union’s silly app, etc. Tello works fine if you can find an apk outside the Play Store. Vizio SmartCast seems to fail asking for location services (which should work, but oh well).
  • Untappd doesn’t seem to have an .apk easily available, so I guess Twitter will be spared my beer-drinking adventures. 🙂
  • Some infosec folks looked closely and found there was still some traffic to Google: https://infosec-handbook.eu/blog/e-foundation-first-look/#e-foundation but the /e/ folks had, I thought, a very reasonable reply (not trying to reject or ignore anything): https://hackernoon.com/leaving-apple-google-how-is-e-actually-google-free-1ba24e29efb9

The install comes set up with microG, “A free-as-in-freedom re-implementation of Google’s proprietary Android user space apps and libraries.” It does a pretty good job pretending to be Google for apps that need some Google bits.

In addition to the OS, the /e/ folks have a server-side setup as well. I didn’t play with it too much, as I am waiting for their promised containerized versions of the server side so I can run them myself. These provide replacements for Google Drive, notes, address book, mail, etc.

The name /e/ is a bit strange to pronounce or search for. It turns out they had another name at first, but someone else was using it and took exception. There is some mention that they are going to rename things before the magic 1.0 arrives.

All in all, I think I am going to keep using /e/ for now. Keeping up on security, plus nudging me to look at open source alternatives to the various apps I use, seems pretty nice to me. I do hope it catches on and more folks start to use it.

Fedora 30 Modularity Test Day 2019-03-26

Posted by Fedora Community Blog on March 22, 2019 07:52 PM
F30 Modularity test day

Tuesday, 2019-03-26 is the Fedora 30 Modularity Test Day!
We need your help to test that everything runs smoothly.

Why Modularity Test Day?

Modularity was one of the major changes[1] in Fedora 29, and we want to make sure that all of its functionality is performing as it should.
Modularity is testable today on any Workstation, Labs or Spins image, and we will focus on testing its functionality.
It’s also pretty easy to join in: all you need is Fedora 30 (which you can grab from the wiki page).

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post Fedora 30 Modularity Test Day 2019-03-26 appeared first on Fedora Community Blog.

Participating at #Scale17x

Posted by Alejandro Acosta on March 22, 2019 06:22 PM

Every time somebody asks me about Scale I can only say the same thing: Scale is the most important community-led conference in North America, and it only gets better over the years. This year it celebrated its seventeenth edition, and it just struck me: with me being there this year, I have now attended more Scales than I have missed. This is my ninth conference out of 17.

The first time I attended was in 2011, the edition that followed FUDCon Tempe 2010, which happened to be my first Fedora conference and also the first time I got to meet some contributors I had previously collaborated with, many of whom I still consider my brothers.

As for this time, I almost didn’t make it, as my visa renewal was only resolved at noon on Friday, one day after the conference started. I recovered it that same day and booked a flight for that night. I couldn’t find anything to LAX, as I regularly fly, so I had to fly to Tijuana and from there take a car to Pasadena. Long story short: I arrived around 1:30 AM on Saturday.

That is water under the bridge; a few hours later I was ready to start the activities for the day. I met my good Fedora friends Scott Williams, Brian Monroe, Perry Rivera and, making his first Scale conference, Ivan Chavero, who is with Red Hat and has been very active promoting Fedora in my hometown. He’s not an ambassador, but he has helped me a lot with all of the Fedora activities that we have in Chihuahua.

That day started with the keynote by HashiCorp’s Mitchell Hashimoto, speaking on the transformation of the company from a dorm-room OSS project.

From there, it was all booth duty for the remainder of the conference.

Perry Rivera was assisting Clint Savage with the install fest happening in a separate building, so for most of that day it was only Brian, Scott and me on booth duty. Saturday is always the most active day on the expo floor, and this year was no exception. We had visitors continually, with many Fedora users and enthusiasts approaching us just to talk about their Fedora experiences or with specific questions we were glad to clarify. At the booth we had a sign-up list and a cardboard sign on which users could write how they do Fedora; it was fun and gratifying to look at those answers.

I think this year we got an excellent location within the exposition floor that allowed a lot of traffic coming through our booth. (See below)

It is always pleasant to say hello to our good friends and members of other communities. We got to say hi to Jason Hibbets and Rikki Endsley from OpenSource.com, and to Jennifer Madriaga, Brian Proffitt and Karsten Wade from Red Hat, among others.



When the expo floor closed, we still had a chance to attend a few talks, like Ivan Chavero’s “A Linux Engineer in Shark Tank”, as well as the Kubernetes BoF and the UpScale talks, always very fresh and informative. We closed the day with the traditional Game Night, where we had the chance to have a good time and continue networking with the conference attendees.

As for Sunday, March 10th, Daylight Saving Time started, so the day “felt” a little weird. The exhibit hall opened at 10:00 AM and we noticed smaller attendance, which again is normal for the last day. The floor was open for only 4 hours on Sunday. This time it was Brian who helped at the Install Fest, so it was only Perry and myself working the booth. People still approached us, and it is always satisfying when students show interest in Fedora. This year I could also notice an increase in Spanish-speaking attendees, which is also formidable for helping promote Fedora among them.

I leave you with a few pictures that I managed to take, but, as usual, they cannot fully describe the experience.

Would I make it to Scale 18x? Only $DEITY knows

Do I want to attend it? You can bet anything that I do 🙂




Document Freedom Day celebration on March 27, 2019 in Rijeka

Posted by HULK Rijeka on March 22, 2019 05:56 PM

The Document Freedom Day celebration on Wednesday, March 27, 2019 in Rijeka will be held in the University Departments building of the University of Rijeka, Radmile Matejčić 2, in room O-028, starting at 4 PM.

The program of the Document Freedom Day celebration in Rijeka is as follows:

  • 16:00 — 16:30 Why we celebrate Document Freedom Day: OpenOffice.org, LibreOffice and OpenDocument (Vedran Miletić)
  • 16:35 — 16:55 reStructuredText: plain text, just structured (Mia Doričić)
  • 17:00 — 17:15 Zotero, a tool for managing bibliographic references (Patrik Nikolić)

After the program we will have time for discussion and some suitable refreshments, following the example of our celebration of the LibreOffice 3.3 release.

<figure class="wp-block-image">LibreOffice 3.3 cake<figcaption>We in HULK celebrated LibreOffice releases before it was cool.</figcaption></figure>

We hope to see you there!

Parental Controls and Metered Data Hackfest

Posted by Allan Day on March 22, 2019 05:01 PM

This week I participated in the Parental Controls and Metered Data Hackfest, which was held at Red Hat’s London office.

Parental controls and metered data already exist in Endless and/or elementary OS in some shape or form. The goal of the hackfest was to plan how to upstream the features to GNOME. It’s great to see this kind of activity from downstreams so I was very happy to contribute in my capacity as an upstream UX designer.

There have been a fair few blog posts about the event already, so I’m going to try and avoid repeating what’s already been written…

Parental controls

Parental controls sound like a niche feature, but they actually have wider applicability than limiting what the kids can do with your laptop. This is because the same features that are used by parental controls can be useful for other types of functionality, particularly around “digital well-being”. For example, a parent might want to limit how much time their child spends using the computer, but someone might want to self-impose this same limit on themselves, in order to try and lead a healthier lifestyle.

Furthermore, outside of parental controls, the same functionality can be pitched in different ways. A feature like limiting the use of particular apps to certain times of the day could either be presented as a “digital well-being” feature, where the goal is to be happier and healthier, or as a “productivity” feature, where the goal is to help someone get more out of their time in front of the screen.

There are some interesting user experience questions that need to be answered here, such as to what extent we should focus on particular use cases, as well as what those use cases should be.

We discussed these questions a bit during the hackfest, but more thought is going to be necessary. The other next step will be to figure out what the initial MVP should be for these features, since they could potentially be quite extensive.

Metered data

Metered network connections are those that either have usage limits attached to them, or those which have financial costs for usage. In both cases this requires that we limit automatic/background network usage, as well as potentially showing warnings if the user is doing something that could result in high data usage.

My main interest in this area is to ensure that GNOME behaves correctly when people use mobile broadband, either by tethering their phone or when using a dedicated mobile broadband connection. (There’s nothing more frustrating than your laptop silently chewing through your data plan.)

The first target for this work is to make sure that automatic software updates behave well, but there’s some other interesting work that could come out of it, particularly around controls for whether unfocused or backgrounded apps are allowed to use the network.

Philip Withnall has created a survey to find out about peoples’ experiences using metered data. Please fill it out if you haven’t already!


The hackfest was a great event, and I’d like to thank the following people and organisations for making it possible:

  • Philip Withnall for organising the event
  • The GNOME Foundation for sponsoring me to attend
  • Red Hat for providing the venue

Not posting here does not mean nothing is getting done

Posted by Sirko Kemter on March 22, 2019 04:51 PM

I am looking with fear at the strange ideas Mindshare has for the future of the Ambassadors. You cannot write event reports if you cannot get an event organized, so let me describe here how hard it is to organize an event in this country.

Since October 2018 I have been searching for a place to host the next Translation Sprint. We have tons of co-working spaces and NGOs with space available, but the answer is always the same. I asked, for example, Open Institute; the answer: we can host you only on Saturday. And I actually had to write to them several times, and even make calls, because I got no answer to my first contact. The same at The Desk: we can host you only on Saturday. This makes no sense in Cambodia, where Saturday is a regular working day because the country has 28 public holidays; most people have to work until 2pm. What really sucked about one option: I had been working on it since the end of January. The first meeting was set up for 11th March; I went there, but nobody was there to meet me. This is normal Cambodian working style: rather than saying “I am busy and can’t meet you” and offering an alternative time, nothing is said at all. The promised mail with an alternative time never arrived, so I had to ask for it again. The second meeting was then this Monday; I spent an hour with them, with the useless result of “just Saturday”. But there is light on the horizon: OpenDevelopment might host us, though only on Sunday, which is better for us than just Saturday. So: six months, hundreds of mails and several meetings, and nothing achieved. Compare that to how easy it is to set up a Fedora Women’s Day in the Pune office, and then just travel around the world to visit other events, and have that called “active”.


I am also having a lot of trouble with the “different Release Party”. I contacted PNC back in October about repeating the event. The sad news for me: after Benoit Pitet, John Munger has also left, which leaves me working with Cambodians, so I again have to explain things that the Europeans understood without being told. Here is the quote I got back in November:

Dear Gnokii,
It is very interesting, but Fedora is not the one which matches with our students studying.
Therefore, I was wondering if you could offer our CentOS, it will be the best for us.

Best regards,


I never signed a mail to them as gnokii; that shows their level of attention. I will try again. Currently I am trying to contact their “External Relations Manager”, whom I have among my friends on Facebook, but no luck so far. I might write again and try to set up a meeting with Maud Lhuillier as head of PN in Asia. I have a bit of a wild idea, so let’s see if it works out.

So as you can see, there is a lot going on even when nothing seems to happen. How does Mindshare want to measure whether there is activity or not?

FPgM report: 2019-12

Posted by Fedora Community Blog on March 22, 2019 03:46 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week.

Fedora 30 Beta is No-Go. Another Go/No-Go meeting will be held on Thursday. I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. The Fedora 30 Beta Go/No-Go and Release Readiness meetings are next week.


Meetings and test days

Fedora 30 Status

Fedora 30 Beta was declared No-Go. The target release date moves to 2019-04-02.

Blocker bugs

Bug ID | Blocker status | Component | Bug status


An updated list of incomplete changes is available in Bugzilla.

The Firefox Wayland By Default On Gnome change is postponed to Fedora 31.

FESCo will vote on including the Mono 5 change in Fedora 30 as well as Fedora 31.

Fedora 31 Status



Submitted to FESCo

Approved by FESCo

The post FPgM report: 2019-12 appeared first on Fedora Community Blog.

FAS username search in Fedora Happiness Packets

Posted by Fedora Community Blog on March 22, 2019 08:15 AM
Fedora Happiness Packets - project update
<figure class="aligncenter"></figure>

I have recently been working on incorporating the Fedora Account System’s username search functionality into the Fedora Happiness Packets project. After weeks of work, it is very rewarding to see it on the verge of completion and being incorporated into the project.

About the project

The search functionality finds the name and email address of a Fedora Account System user from their username, making it much easier for a sender to send happiness packets to a particular user knowing just their username.

Getting started with python-fedora API

To implement the search, the python-fedora API is used to retrieve the data. After authenticating as a genuine FAS user by passing credentials to AccountSystem, we can retrieve the data using the person_by_username method of a fas2 client object.
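The pattern looks roughly like this. AccountSystem and person_by_username are python-fedora’s actual API; the helper function, the returned field names, and the credential placeholders are illustrative assumptions, not the project’s exact code:

```python
def lookup_user(fas, username):
    """Return (human name, email) for a FAS username, or None if no match.

    `fas` is any object exposing a person_by_username() method, e.g. an
    authenticated fedora.client.AccountSystem instance.
    """
    person = fas.person_by_username(username)
    if not person:
        return None
    # person behaves like a dict of account fields.
    return person['human_name'], person['email']

# Real usage (needs network access and valid FAS credentials; the values
# below are placeholders, not working credentials):
#
#   from fedora.client import AccountSystem
#   fas = AccountSystem(username='my-fas-user', password='my-fas-password')
#   print(lookup_user(fas, 'someusername'))
```

Keeping the lookup in a small helper like this also makes it easy to test with a fake client object instead of a live FAS connection.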

Problems encountered

The solution to the problem statement was simple. What made the process challenging was the lack of proper documentation for the python-fedora module. Since credentials (a FAS username and password) are required to authenticate, the main goal was to reuse the data from when the user logs into Fedora Happiness Packets.

I was aiming to use OpenID Connect with the client ID and client secret that are issued when registering the application with the OpenID provider (Ipsilon in this case). But the FAS client we have in python-fedora does not support OpenID Connect authentication. This was the major problem that stalled progress.

Another setback was Django’s crispy forms. Since we use crispy forms to create models and render the layout on the front end, it was difficult for me to access individual form elements, as the whole concept was very new to me.

Quick fix

After getting solution recommendations from other Fedora admins, I finally found a way through. Since the search functionality only requires an authenticated user, which need not be the user who logs in, we can use a testing username and password in the development environment. For testing, we can read the actual credentials from a JSON file into the project.
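A minimal sketch of reading such a credentials file follows; the file layout and key names are assumptions for illustration, not the project’s actual configuration:

```python
import json

def load_fas_credentials(path):
    """Read the testing FAS credentials from a JSON file so they stay
    out of the source code.

    Expected (hypothetical) file shape:
        {"fas_username": "...", "fas_password": "..."}
    """
    with open(path) as f:
        data = json.load(f)
    return data['fas_username'], data['fas_password']
```

The JSON file itself should of course be kept out of version control.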

What I learnt

I worked with Django for the very first time, and it was a great experience. I got to learn most of the concepts of Django: how it works, how data flows, how data gets rendered on the front end, and so on. Django’s crispy forms were really new to me, and I learnt how to deal with them. Usually I rely on documentation to get into the details, but for the first time I was able to understand what is actually happening by going through the code manually.

My experience

I really enjoyed working with such a welcoming community. Almost all of my doubts were cleared during this application process. What I learnt, and will keep with me forever, is: “There is always an alternative solution to any problem! We just need to minimize the gap between its actual existence and our knowledge of its being”.

Vote of Thanks!

Thanks to Justin (@jflory7) for helping me with my piles of doubts and queries. Jona (@jonatoni) was very kind to find explicit time to frame my ideas; thanks to her. A special thanks to Clement (@cverna) for helping me proceed with a viable solution during one of the major hurdles I faced.

Thank you 🙂

The post FAS username search in Fedora Happiness Packets appeared first on Fedora Community Blog.

Fedora Security Lab

Posted by Fabian Affolter on March 22, 2019 08:14 AM

The Fedora Security Lab was released as part of the Fedora 30 Beta candidate cycle.

Grab it, test it and report back.

This time we don’t want to miss the release because of some last-minute changes.

How to set up Fedora Silverblue as a gaming station

Posted by Fedora Magazine on March 22, 2019 08:00 AM

This article gives you a step by step guide to turn your Fedora Silverblue into an awesome gaming station with the help of Flatpak and Steam.

Note: Do you need the NVIDIA proprietary driver on Fedora 29 Silverblue for a complete experience? Check out this blog post for pointers.

Add the Flathub repository

This process starts with a clean Fedora 29 Silverblue installation with a user already created for you.

First, go to https://flathub.org/home and enable the Flathub repository on your system. To do this, click the Quick setup button on the main page.

<figure class="wp-block-image"><figcaption>Quick setup button on flathub.org/home</figcaption></figure>

This redirects you to https://flatpak.org/setup/ where you should click on the Fedora icon.

<figure class="wp-block-image"><figcaption>Fedora icon on flatpak.org/setup</figcaption></figure>

Now you just need to click on Flathub repository file. Open the downloaded file with the Software Install application.

<figure class="wp-block-image"><figcaption>Flathub repository file button on flatpak.org/setup/Fedora</figcaption></figure>

The GNOME Software application opens. Next, click on the Install button. This action needs sudo permissions, because it installs the Flathub repository for use by the whole system.

<figure class="wp-block-image"><figcaption>Install button in GNOME Software</figcaption></figure>

Install the Steam flatpak

You can now search for the Steam flatpak in GNOME Software. If you can’t find it, try rebooting — or logout and login — in case GNOME Software didn’t read the metadata. That happens automatically when you next login.

<figure class="wp-block-image"><figcaption>Searching for Steam</figcaption></figure>

Click on the Steam row and the Steam page opens in GNOME Software. Next, click on Install.

<figure class="wp-block-image"><figcaption>Steam page in GNOME Software</figcaption></figure>

And now you have installed Steam flatpak on your system.

Enable Steam Play in Steam

Now that you have Steam installed, launch it and log in. To play Windows games too, you need to enable Steam Play in Steam. To enable it, choose Steam > Settings from the menu in the main window.

<figure class="aligncenter"><figcaption>Settings button in Steam</figcaption></figure>

Navigate to the Steam Play section. You should see the option Enable Steam Play for supported titles is already ticked, but it’s recommended you also tick the Enable Steam Play option for all other titles. There are plenty of games that are actually playable, but not whitelisted yet on Steam. To see which games are playable, visit ProtonDB and search for your favorite game. Or just look for the games with the most platinum reports.

<figure class="wp-block-image"><figcaption>Steam Play settings menu on Steam</figcaption></figure>

If you want to know more about Steam Play, you can read the article about it here on Fedora Magazine:

<figure class="wp-block-embed is-type-rich is-provider-fedora-magazine">
Play Windows games on Fedora with Steam Play and Proton
<iframe class="wp-embedded-content" data-secret="fCo55bri0o" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/play-windows-games-steam-play-proton/embed/#?secret=fCo55bri0o" title="“Play Windows games on Fedora with Steam Play and Proton” — Fedora Magazine" width="600"></iframe>


You’re now ready to play plenty of games on Linux. Please remember to share your experience with others using the Contribute button on ProtonDB and report bugs you find on GitHub, because sharing is nice. 🙂

Photo by Hardik Sharma on Unsplash.

Cockpit 190

Posted by Cockpit Project on March 22, 2019 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 190.

Logs: Filter log entries by service

Filtering logs by service

The Logs page now allows you to only view the logs for a specific service.

Machines: Support for Pausing/Resuming VMs

Pause and Resume operations for VMs

You can now pause a running VM or resume a paused VM.

Thanks to Simon Kobyda for this feature!

Machines: Make Autostart property of a Virtual Network configurable

Autostart property for Virtual Networks

Thanks to Simon Kobyda for this feature!

Machines: Support for creating VM with option to boot from PXE

You can now choose Network boot when creating a new VM. Supported sources are libvirt Virtual Networks and host network devices used with direct assignment.

Create VM with Network boot

Accessibility improvements

Dropdowns in all pages are now properly accessible and allow keyboard navigation.

Try it out

Cockpit 190 is available now:

Fedora 29 : Testing the dnf python module.

Posted by mythcat on March 21, 2019 07:41 PM
Today we tested the DNF Python module on Fedora 29.
Every Fedora user has used the DNF tool itself.
This Python module is not well documented on the internet.
A more complex example can be found in the DNF tool documentation.
I tried to see what I can get from this module.
Let's start by installing it with the pip tool (on Fedora it is normally already available through the system packages):
$ pip install dnf --user
Here are some tests that I managed to run in the python shell.
[mythcat@desk ~]$ python
Python 2.7.15 (default, Oct 15 2018, 15:26:09)
[GCC 8.2.1 20180801 (Red Hat 8.2.1-2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> import dnf
>>> dir(dnf)
['Base', 'Plugin', 'VERSION', '__builtins__', '__doc__', '__file__', '__name__', '__package__',
'__path__', '__version__', 'base', 'callback', 'cli', 'comps', 'conf', 'const', 'crypto', 'db',
'dnf', 'dnssec', 'drpm', 'exceptions', 'goal', 'history', 'i18n', 'lock', 'logging', 'match_counter',
'module', 'package', 'persistor', 'plugin', 'pycomp', 'query', 'repo', 'repodict', 'rpm', 'sack',
'selector', 'subject', 'transaction', 'unicode_literals', 'util', 'warnings', 'yum']
>>> import dnf.conf
>>> print(dnf.conf.Conf())
assumeno: 0
assumeyes: 0
autocheck_running_kernel: 1
bandwidth: 0
best: 0
>>> import dnf.module
>>> import dnf.rpm
>>> import dnf.cli
>>> base = dnf.Base()
>>> base.update_cache()
This reads all the repositories:

>>> base.read_all_repos()
You need to fill the sack before querying:

>>> base.fill_sack()
>>> base.sack_activation = True
Create a query that matches all packages in the sack:

>>> qr=base.sack.query()
Get only available packages:

>>> qa=qr.available()
Get only installed packages:

>>> qi=qr.installed()
>>> q_a=qa.run()
>>> for pkg in qi.run():
...     if pkg not in q_a:
...         print('%s.%s' % (pkg.name, pkg.arch))
Get all packages installed on Linux:

>>> q_i=qi.run()
>>> for pkg in qi.run():
...     print('%s.%s' % (pkg.name, pkg.arch))
You can see more about the Python programming language on my blog.

PHP version 7.2.17RC1 and 7.3.4RC1

Posted by Remi Collet on March 21, 2019 04:11 PM

Release Candidate versions are available in testing repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, perfect solution for such tests (for x86_64 only), and also as base packages.

RPM of PHP version 7.3.4RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 30 or remi-php73-test repository for Fedora 27-29 and Enterprise Linux.

RPM of PHP version 7.2.17RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 28-29 or remi-php72-test repository for Fedora 27 and Enterprise Linux.


PHP version 7.1 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.3 as Software Collection:

yum --enablerepo=remi-test install php73

Parallel installation of version 7.2 as Software Collection:

yum --enablerepo=remi-test install php72

Update of system version 7.3:

yum --enablerepo=remi-php73,remi-php73-test update php\*

Update of system version 7.2:

yum --enablerepo=remi-php72,remi-php72-test update php\*

Notice: version 7.3.4RC1 is in Fedora rawhide for QA.

EL-7 packages are built using RHEL-7.6.

The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php72, php73)

Base packages (php)

Small history about QA

Posted by Remi Collet on March 21, 2019 03:10 PM

Although I'm mainly a developer, I now spend most of my time doing QA on PHP projects.

Here is, around the release of versions 7.2.17RC1 and 7.3.4RC1, a report which should help to understand this activity.


1. Presentation

Usually, tests are done by PHP developers, particularly thanks to Travis, and then by users who install the RC version available 2 weeks before a GA version.

The PHP project follows a release process (cf. README.RELEASE_PROCESS) which gives 2 days between the preparation of a version, on Tuesday on git, and its announcement in the mailing lists on Thursday. These 2 days are especially designed to allow the build of binary packages (mostly by Microsoft, and often by me for my repository) and to allow a last QA check which may still discover some late issue.

When the new versions became available (on Tuesday afternoon) I started building the packages for my repository, giving more coverage than the current Travis configuration:

  • Fedora 27 to 31
  • RHEL 6, 7 and 8-Beta
  • i386 and x86_64
  • NTS and ZTS
  • various compiler versions  (GCC 4 to 9) and system library versions

I also run the build of the 7.3.4RC1 package in Fedora rawhide to trigger the re-build of all the PHP stack in Koschei, one of the CI tools of the Fedora project.

Notice: building all the packages for all the targets takes about 3 hours for each version! (I really need a faster builder.)


2. Discovered issues

2.1. Failed tests with pcre2 version 10.33RC1

Already available in rawhide, this version introduces a change in some error messages, causing 2 tests to fail.

Minor issue, fixed in PHP 7.3+: commit c421d9a.

2.2. Failed tests on 32-bit

The fix for bug #76117 changed the output of var_export, causing 2 tests to fail on 32-bit.

After confirmation by the author of the change, the tests were fixed in PHP 7.2+: commits a467a89 and 5c8d69b.

2.3. Regression

Koschei made it possible to discover very quickly an important regression when running the "make test" command. After digging, this regression was traced to the fix of bug #77609; read the comments on commit 3ead672.

After discussion between the release managers, it was chosen to:

  • revert this change to get back to a sane situation
  • re-run the release process (new tag on git)

The version which will be announced shortly will not be affected by this regression.


3. Conclusion

Ensuring the quality of PHP, and the absence of regressions, is complex, long and serious work. Thanks to all the actors (developers, QA team and users), this works pretty well.

So, if you use PHP in a development environment, it is essential to install the RC versions to detect and quickly report any problem, so we can react before the final version.

For users of my repository, the RC versions of PHP and various extensions are nearly always available in the testing repositories.


Don't trust me. Trust the voters.

Posted by Daniel Pocock on March 21, 2019 09:07 AM

On 9 March, when I was the only member of the Debian community to submit a nomination and fully-fledged platform four minutes before the deadline, I did so on the full understanding that voters have the option to vote "None of the above".

In other words, knowing that nobody can win by default, voters could reject and humiliate me.

Or worse.

My platform had been considered carefully over many weeks, despite a couple of typos. If Debian can't accept that, maybe I should write typos for the White House press office?

One former leader of the project, Steve McIntyre, replied:

I don't know what you think you're trying to achieve here

Hadn't I explained what I was trying to achieve in my platform? Instead of pressing the "send put down" button, why not try reading it?

Any reply in support of my nomination has been censored, so certain bullies create the impression that theirs is the last word.

I've put myself up for election before yet I've never, ever been so disappointed. Just as Venezuela's crisis is now seen as a risk to all their neighbours, the credibility of elections and membership status is a risk to confidence throughout the world of free software. It has already happened in Linux Foundation and FSFE and now we see it happening in Debian.

In student politics, I was on the committee that managed a multi-million dollar budget for services in the union building and worked my way up to become NUS ambassador to Critical Mass, paid to cycle home for a year and sharing an office with one of the grand masters of postal voting: Voters: 0, Cabals: 1.

Ironically, the latter role is probably more relevant to the skills required to lead a distributed organization like Debian. Critical Mass rides have no leader at all.

When I volunteered to be FSFE Fellowship representative, I faced six other candidates. On the first day of voting, I was rear-ended by a small van, pushed several meters along the road and thrown off a motorbike, half way across a roundabout. I narrowly missed being run over by a bus.

It didn't stop me. An accident? Russians developing new tactics for election meddling? Premonition of all the backstabbings to come? Miraculously, the Fellowship still voted for me to represent them.

Nonetheless, Matthias Kirschner, FSFE President, appointed one of the rival candidates to a superior class of membership just a few months later. He also gave full membership rights to all of his staff, ensuring they could vote in the meeting to remove elections from the constitution. Voters: 0, Cabals: 2.

My platform and photo for the FSFE election also emphasizes my role in Debian and some Debian people have always resented that, hence their pathological obsession with trying to control me or discredit me.

Yet in Debian's elections, I've hit a dead-end. The outgoing leader of the project derided me for being something less than a "serious" candidate, despite the fact I was the only one who submitted a nomination before the deadline. People notice things like that. It doesn't stick to me, it sticks to Debian.

I thank Chris Lamb for interjecting, because it reveals a lot about today's problems. A series of snipes like that, usually made in private, have precipitated increasing hostility in recent times.

When I saw Lamb's comment, I couldn't help erupting in a fit of laughter. The Government of Lamb's own country, the UK, was elected under the slogan Strong and stable leadership. There used to be a time when the sun never set on the British empire; today the sun never sets on laughter about their lack of a serious plan for Brexit. Serious leadership appears somewhat hard to find. Investigations found that the pro-Brexit movement cheated with help from Cambridge Analytica and violations of campaign spending limits, but the vote won't be re-run (yet). Voters: 0, Cabals: 3.

It is disappointing when a leader seeks to vet his replacement in this way. In Venezuela, Hugo Chavez assured everybody that Nicolas Maduro was the only serious candidate who could succeed him. Venezuelans can see the consequences of such interventions by outgoing leaders clearly, but only during daylight, because the power has been out continuously for more than a week now. Many of their best engineers emigrated and Debian risks similar phenomena with these childish antics.

The whole point of a free and fair election is that voters are the ultimate decision maker and we all put our trust in the voters alone to decide who is the most serious candidate. I remain disappointed that Lamb was not willing to talk face-to-face with those people he had differences with.

In any other context, the re-opening of nominations and the repeated character attacks, facilitated by no less than another candidate who already holds office in the Debian account managers team would be considered as despicable as plagiarism and doping. So why is this acceptable in Debian? Voters: 0, Cabals: 4. If you ran a foot race this way, nobody would respect the outcome.

Having finished multiple cross countries, steeplechases and the odd marathon, why can't I even start in Debian's annual election?

In his interview with Mr Sam Varghese of IT Wire, rival candidate Joerg "Ganeff" Jaspert talks about "mutual trust". Well, he doesn't have to. I put my trust in the voters. That's democracy. Who is afraid of it? That's what a serious vote is all about.

Jaspert's team have gone to further lengths to gain advantages, spreading rumours on the debian-private mailing list that they have "secret evidence" to justify their behaviour. It is amusing to see such ridiculous claims being made in Debian at the same time that Maduro in Venezuela is claiming to have secret evidence that his rival, Guaido, sabotaged the electricity grid. The golden rule of secret evidence: don't hold your breath waiting for it to materialize.

While Maduro's claims of sabotage seem far-fetched, it is widely believed that Republican-friendly Enron played a significant role in Californian power shortages, swinging public mood against the Democrat incumbent and catapulting the world's first Governator into power (excuse the pun). Voters: 0, Cabals: 5.

If the DAMs do have secret evidence against any Debian Developer, it is only fair to show the evidence to the Developer and give that person a right of reply. If such "evidence" is spread behind somebody's back, it is because it wouldn't stand up to any serious scrutiny.

Over the last six months, Jaspert, Lamb and Co can't even decide whether they've demoted or expelled certain people. That's not leadership. It's a disgrace. If people are trusted to choose me as the Debian Project Leader, I guarantee that no other volunteer will be put through such intimidation and shaming ever again.

After writing a blog about human rights in January, it is Jaspert who censored it from Planet Debian just hours later:

Many people were mystified. Why would my blog post about human rights be censored by Debian? People have been scratching their heads trying to work out how it could even remotely violate the code of conduct. Is it because the opening quote came from Jaspert himself and he didn't want his cavalier attitude put under public scrutiny?

This is not involving anything from the universal declaration of human rights. We are simply a project of volunteers which is free to chose its members as it wishes.

which is a convenient way of eliminating competitors. After trampling on my blog and my nomination for the DPL election, it is simply a coincidence that Jaspert was the next to put his hand up and nominate.

In Jonathan Carter's blog about his candidacy, he quotes Ian Murdock:

You don’t want design by committee, but you want to tap in to the wisdom of the crowd.... the crowd is the most intelligent of all.

If that is true, why is a committee of just three people, one of whom is a candidate, telling the crowd who they can and can't vote for?

If that isn't a gerrymander, what is?

Following through on the threat

If you are going to use veiled threats to keep your developers in line, every now and then, you have to follow through, as Jaspert has done recently using his DAM position to make defamatory statements in the press.

If Jaspert's organization really is willing to threaten and shame volunteers and denounce human rights, as he did in this quote, then I wouldn't want to be a part of it anyway, consider this my retirement and resignation and eliminate any further questions about my status. Nonetheless, I remain an independent Debian Developer just as committed to serving Debian users as ever before. Voters: 0, Cabals: 6.

I remain ready and willing to face "None of the above" and any other candidate, serious or otherwise, on a level playing field, to serve those who would vote for me over and above those who seek to blackmail me and push me around with secret evidence and veiled threats.

Packages of varnish-6.2.0 with matching vmods, for el6 and el7

Posted by Ingvar Hagelund on March 21, 2019 08:29 AM

The Varnish Cache project recently released a new upstream version 6.2 of Varnish Cache. I updated the fedora rawhide package yesterday. I have also built a copr repo with varnish packages for el6 and el7 based on the fedora package. A snapshot of matching varnish-modules (based on Nils Goroll’s branch) is also available.

Packages are available at https://copr.fedorainfracloud.org/coprs/ingvar/varnish62/.

vmods included in varnish-modules:

AskFedora refresh: we’ve moved to Discourse!

Posted by Fedora Community Blog on March 21, 2019 07:33 AM
The Fedora Project community

We have been working on moving AskFedora to a Discourse instance after seeing how well the community took to discussion.fedoraproject.org. After working on it for a few weeks now, we’re happy to report that the new AskFedora is now ready for use at https://askbeta.fedoraproject.org.

The new AskFedora!

The new AskFedora is a Discourse instance hosted for us by Discourse, similar to discussion.fedoraproject.org. However, where discussion.fedoraproject.org is meant for development discussion within the community, AskFedora is meant for end-user troubleshooting. While we did toy with the idea of simply using discussion.fedoraproject.org for both purposes, we felt there was a risk that mixing the two would hamper both use cases. So, the decision was made to stick to the current organisation and use a separate Discourse instance for user queries.

Getting started: logging in and language selection

The new AskFedora is limited to FAS (Fedora Account System) logins only. This is unlike the Askbot instance, where we also permitted social media and other logins. Limiting the logins to FAS permits us to have better control over the instance, and makes it much easier to gather data on usage and so on. Setting up a new FAS account is quite trivial, so we do not expect this to be an issue for end-users either.

Another way in which AskFedora on Discourse differs from AskFedora on Askbot is that we chose not to host per-language subsites. Instead, we’ve leveraged Discourse categories and user-groups to support languages.

When you login for the first time, you will only see the general categories:

  • Start here!
  • Community
  • Site Feedback

These are common to all users. Based on interest from the community, and after verifying that we had community members willing to oversee these languages, the new AskFedora currently supports English, Spanish, Italian, and Persian. Here is how:

Each language has an associated user-group. All users can join and leave these language user-groups at any time. Membership to each user-group gives access to “translated” categories, i.e., identical categories set up for users of the particular language group. Users can join as many language groups as they wish!

Categories are loosely based on the lifecycle of a Fedora release. The top levels ask the question “what stage of the Fedora life-cycle are you at?”. The next level tries to be more specific to ask something on the lines of “what tool are you using?”. These categories are only meant to help organise the forum somewhat. They are not set in stone, and of course, lots of topics may fit into a multitude of categories. We leave it up to the users of the Forum to choose the appropriate category for their query.

  • Announcements (for each language group: can be used by regional teams, for example)
  • Installing Fedora
    • “General” for standard installations using Anaconda with near-default settings, and “Advanced” for more complex use cases such as kickstarts.
    • A “Hardware” category dedicated for queries related to hardware support.
  • Customising a Fedora installation: for queries related to personalisation:
    • Either “General” for simpler tasks like changing defaults and so on mostly using tools provided by the community or “Advanced” for well, anything else really.
  • Using Fedora, with subcategories for the different platforms that Fedora runs on:
    • Desktops/Servers/Containers/Cloud/Others.
  • Upgrading a Fedora installation:
    • Either using the supported methods: DNF and DNF-based methods; or any other ways that we tend to cook up each cycle.

So, when you do login, please do go to the “Start here” category as the banner requests. We have a topic in each supported language documenting what we’ve written here—how to join the appropriate language group and get started.

Feedback and next steps

At this time, we are only announcing the new instance to the community. Hence, this post on the community blog first. The forum will be announced to the wider user-base on the Fedora magazine a week or two later. This gives us time to have a set of community members on the forum already to help end-users when they do get started. This also gives us time to collect feedback from the community and make tweaks to improve the user-experience before the “official launch”. Please use the “Site Feedback” category to drop us comments. Before the forum is announced to the wider audience, we will also update the URL to use https://ask.fedoraproject.org and a redirect from https://askbeta.fedoraproject.org will be put in place to ensure a smooth transition for current users.

The usual reminders

All our forums and channels are extensions of the Fedora community. They are tools that enable us to communicate with each other. Therefore, everything that occurs on these must follow our Code of Conduct. In short, please remember to “be excellent to each other”. There will always be disagreements, and us being us, tempers will flare. However, before you type out a reply, repeat to yourself: “be excellent to each other” again and again, until your draft has lost its aggression/annoyance/negative connotations. This also applies to trolling—even when pointing it out, let’s stay excellent to each other. If you need any help, the forum staff are always there to step in—just drop us a message.

As a closing word, we’re grateful to everyone that put the work in to make this refresh happen—especially the Askbot developers that have hosted AskFedora for us till now, and the Discourse team that will host it for us from now. It has taken quite a few hours of discussion, planning, and work to set things up the way we felt it would help users most. All of this happened on the Fedora Join SIG’s pagure project. We are always looking for more hands to help, and we are even happier if we can pass on some of what we have learned in our time in the Fedora community to other members. Please, do get in touch!

The post AskFedora refresh: we’ve moved to Discourse! appeared first on Fedora Community Blog.

Using hexdump to print binary protocols

Posted by Peter Hutterer on March 21, 2019 12:30 AM

I had to work on an image yesterday where I couldn't install anything and the amount of pre-installed tools was quite limited. And I needed to debug an input device, usually done with libinput record. So eventually I found that hexdump supports formatting of the input bytes but it took me a while to figure out the right combination. The various resources online only got me partway there. So here's an explanation which should get you to your results quickly.

By default, hexdump prints identical input lines as a single line with an asterisk ('*'). To avoid this, use the -v flag as in the examples below.

hexdump's format string is a single-quote-enclosed string that contains the count, element size, and a double-quote-enclosed printf-like format string. So a simple example is this:

$ hexdump -v -e '1/2 "%d\n"' <filename>
This prints 1 element ('iteration') of 2 bytes as an integer, followed by a linebreak. Or in other words: it takes two bytes, converts them to an int and prints it. If you want to print the same input value in multiple formats, use multiple -e invocations.

$ hexdump -v -e '1/2 "%d "' -e '1/2 "%x\n"' <filename>
-11568 d2d0
23698 5c92
0 0
0 0
6355 18d3
1 1
0 0
This prints the same 2-byte input value, once as decimal signed integer, once as lowercase hex. If we have multiple identical things to print, we can do this:

$ hexdump -v -e '2/2 "%6d "' -e '" hex:"' -e '4/1 " %x"' -e '"\n"' <filename>
-10922 23698 hex: 56 d5 92 5c
0 0 hex: 0 0 0 0
14879 1 hex: 1f 3a 1 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
0 0 hex: 0 0 0 0
This prints two elements, each of size 2, as integers, then the same elements as four 1-byte hex values, followed by a linebreak. %6d is a standard printf instruction and is documented in the manual.
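As a cross-check, the decoding hexdump does here can be reproduced with Python's struct module. This is a sketch added for illustration (not part of the original post), using the bytes behind the first value of the earlier example (-11568 decimal, d2d0 hex):

```python
import struct

# Raw input bytes, as hexdump would read them from the file,
# in little-endian order on x86.
data = bytes([0xd0, 0xd2, 0x92, 0x5c])

# Equivalent of '2/2 "%6d "': two 2-byte signed integers.
ints = struct.unpack("<hh", data)

# Equivalent of '4/1 " %x"': the same bytes, one at a time, as hex.
hexes = [format(b, "x") for b in data]

print(ints)   # (-11568, 23698)
print(hexes)  # ['d0', 'd2', '92', '5c']
```

The `<hh` format string mirrors hexdump's "2 elements of size 2" iteration, which makes it easy to sanity-check a format string before pointing it at a live device.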

Let's go and print our protocol. The struct representing the protocol is this one:

struct input_event {
#if (__BITS_PER_LONG != 32 || !defined(__USE_TIME_BITS64)) && !defined(__KERNEL__)
struct timeval time;
#define input_event_sec time.tv_sec
#define input_event_usec time.tv_usec
#else
__kernel_ulong_t __sec;
#if defined(__sparc__) && defined(__arch64__)
unsigned int __usec;
#else
__kernel_ulong_t __usec;
#endif
#define input_event_sec __sec
#define input_event_usec __usec
#endif
__u16 type;
__u16 code;
__s32 value;
};
So we have two longs for sec and usec, two shorts for type and code and one signed 32-bit int. Let's print it:

$ hexdump -v -e '"E: " 1/8 "%u." 1/8 "%06u" 2/2 " %04x" 1/4 "%5d\n"' /dev/input/event22
E: 1553127085.097503 0002 0000 1
E: 1553127085.097503 0002 0001 -1
E: 1553127085.097503 0000 0000 0
E: 1553127085.097542 0002 0001 -2
E: 1553127085.097542 0000 0000 0
E: 1553127085.108741 0002 0001 -4
E: 1553127085.108741 0000 0000 0
E: 1553127085.118211 0002 0000 2
E: 1553127085.118211 0002 0001 -10
E: 1553127085.118211 0000 0000 0
E: 1553127085.128245 0002 0000 1
And voila, we have our structs printed in the same format evemu-record prints out. So with nothing but hexdump, I can generate output I can then parse with my existing scripts on another box.
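Those existing scripts aren't shown in the post, but parsing this output is straightforward. Here is a hypothetical Python helper (the name and structure are my own, not taken from the post) that turns each E: line back into numbers:

```python
def parse_event_line(line):
    """Parse a line like 'E: 1553127085.097503 0002 0000 1'
    into a (sec, usec, type, code, value) tuple of integers."""
    marker, timestamp, etype, code, value = line.split()
    assert marker == "E:"
    sec, usec = timestamp.split(".")
    # type and code were printed with %04x, so parse them as hex;
    # value was printed with %5d, so it is plain decimal.
    return int(sec), int(usec), int(etype, 16), int(code, 16), int(value)

print(parse_event_line("E: 1553127085.097503 0002 0001 -1"))
# (1553127085, 97503, 2, 1, -1)
```

Parsing type and code as hex matters: a line such as `0011 0000` would otherwise be read as decimal and silently give wrong event codes.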

F29-20190319 updated Live isos released

Posted by Ben Williams on March 20, 2019 12:46 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F29-20190319 Live ISOs, carrying the 4.20.16-200 kernel.

This set of updated ISOs will save a considerable amount of updates after a new install. (New installs of Workstation alone have 1.2 GB of updates.)

This set also includes an updated ISO of the Security Lab.

A huge thank you goes out to IRC nicks dowdle and Southern-Gentlem for testing these ISOs.

We would also like to thank Fedora QA for running the following tests on our ISOs:



As always, our ISOs can be found at http://tinyurl.com/Live-respins.

4 cool terminal multiplexers

Posted by Fedora Magazine on March 20, 2019 08:00 AM

The Fedora OS is comfortable and easy for lots of users. It has a stunning desktop that makes it easy to get everyday tasks done. Under the hood is all the power of a Linux system, and the terminal is the easiest way for power users to harness it. By default terminals are simple and somewhat limited. However, a terminal multiplexer allows you to turn your terminal into an even more incredible powerhouse. This article shows off some popular terminal multiplexers and how to install them.

Why would you want to use one? Well, for one thing, it lets you log out of your system while leaving your terminal session undisturbed. It’s incredibly useful to log out of your console, secure it, travel somewhere else, then remotely log in with SSH and continue where you left off. Here are some utilities to check out.

One of the oldest and most well-known terminal multiplexers is screen. However, because the code is no longer maintained, this article focuses on more recent apps. (“Recent” is relative — some of these have been around for years!)

tmux

The tmux utility is one of the most widely used replacements for screen. It has a highly configurable interface. You can program tmux to start up specific kinds of sessions based on your needs. You’ll find a lot more about tmux in this article published earlier:

Use tmux for a more powerful terminal

Already a tmux user? You might like this additional article on making your tmux sessions more effective.

To install tmux, use the sudo command along with dnf, since you’re probably in a terminal already:

$ sudo dnf install tmux

To start learning, run the tmux command. A single pane window starts with your default shell. Tmux uses a modifier key to signal that a command is coming next. This key is Ctrl+B by default. If you enter Ctrl+B, C you’ll create a new window with a shell in it.

Here’s a hint: Use Ctrl+B, ? to enter a help mode that lists all the keys you can use. To keep things simple, look for the lines starting with bind-key -T prefix at first. These are keys you can use right after the modifier key to configure your tmux session. You can hit Ctrl+C to exit the help mode back to tmux.

To completely exit tmux, use the standard exit command or Ctrl+D keystroke to exit all the shells.

dvtm

You might have recently seen the Magazine article on dwm, a dynamic window manager. Like dwm, dvtm is for tiling window management — but in a terminal. It’s designed to adhere to the legacy UNIX philosophy of “do one thing well” — in this case managing windows in a terminal.

Installing dvtm is easy as well. However, if you want the logout functionality mentioned earlier, you’ll also need the abduco package which handles session management for dvtm.

$ sudo dnf install dvtm abduco

The dvtm utility has many keystrokes already mapped to allow you to manage windows in the terminal. By default, it uses Ctrl+G as its modifier key. This keystroke tells dvtm that the following character is going to be a command it should process. For instance, Ctrl+G, C creates a new window and Ctrl+G, X removes it.

For more information on using dvtm, check out the dvtm home page which includes numerous tips and get-started information.

byobu

While byobu isn’t truly a multiplexer on its own — it wraps tmux or even the older screen to add functions — it’s worth covering here too. Byobu makes terminal multiplexers better for novices, by adding a help menu and window tabs that are slightly easier to navigate.

Of course it’s available in the Fedora repos as well. To install, use this command:

$ sudo dnf install byobu

By default the byobu command runs screen underneath, so you might want to run byobu-tmux to wrap tmux instead. You can then use the F9 key to open up a help menu for more information to help you get started.

mtm

The mtm utility is one of the smallest multiplexers you’ll find. In fact, it’s only about 1000 lines of code! You might find it helpful if you’re in a limited environment such as old hardware, a minimal container, and so forth. To get started, you’ll need a couple of packages.

$ sudo dnf install git ncurses-devel make gcc

Then clone the repository where mtm lives:

$ git clone https://github.com/deadpixi/mtm.git

Change directory into the mtm folder and build the program:

$ make

You might receive a few warnings, but when you’re done, you’ll have the very small mtm utility. Run it with this command:

$ ./mtm

You can find all the documentation for the utility on its GitHub page.

These are just some of the terminal multiplexers out there. Got one you’d like to recommend? Leave a comment below with your tips and enjoy building windows in your terminal!

Photo by Michael on Unsplash.

Kiwi TCMS 6.6

Posted by Kiwi TCMS on March 19, 2019 08:40 PM

We're happy to announce Kiwi TCMS version 6.6! This is a medium-severity security, improvement, and bug-fix update. You can explore everything at https://demo.kiwitcms.org!

Supported upgrade paths:

5.3   (or older) -> 5.3.1
5.3.1 (or newer) -> 6.0.1
6.0.1            -> 6.1
6.1              -> 6.1.1
6.1.1            -> 6.2 (or newer)

Docker images:

kiwitcms/kiwi       latest  c4734f98ca37    971.3 MB
kiwitcms/kiwi       6.2     7870085ad415    957.6 MB
kiwitcms/kiwi       6.1.1   49fa42ddfe4d    955.7 MB
kiwitcms/kiwi       6.1     b559123d25b0    970.2 MB
kiwitcms/kiwi       6.0.1   87b24d94197d    970.1 MB
kiwitcms/kiwi       5.3.1   a420465852be    976.8 MB

Changes since Kiwi TCMS 6.5.3


  • Explicitly require marked v0.6.1 to fix medium severity ReDoS vulnerability. See SNYK-JS-MARKED-73637


  • Update python-gitlab from 1.7.0 to 1.8.0
  • Update django-contrib-comments from 1.9.0 to 1.9.1
  • More strings marked as translatable (Christophe CHAUVET)
  • When creating new TestCase you can now change notification settings. Previously this was only possible during editing
  • Document import-export approaches. Closes Issue #795
  • Document available test automation plugins
  • Improve documentation around Docker customization and SSL termination
  • Add documentation example of reverse proxy configuration for HAProxy (Nicolas Auvray)
  • TestPlan.add_case() will now set the sortkey to highest in plan + 10 (Rik)
  • Add LinkOnly issue tracker. Fixes Issue #289
  • Use the same HTML template for both TestCase new & edit
  • New API methods for adding, removing and listing attachments. Fixes Issue #446:
    • TestPlan.add_attachment()
    • TestCase.add_attachment()
    • TestPlan.list_attachments()
    • TestCase.list_attachments()
    • Attachments.remove_attachment()

Database migrations

  • Populate missing TestCase.text history. In version 6.5 the TestCase model was updated to store the text into a single field called text instead of 4 separate fields. During that migration historical records were updated to have the new text field but values were not properly assigned.

    The "effect" of this is that in TestCaseRun records you were not able to see the actual text b/c it was None.

    This change amends 0006_merge_text_field_into_testcase_model for installations which have not yet migrated to 6.5 or later. We also provide the data-only migration 0009_populate_missing_text_history which will inspect the current state of the DB and copy the text to the last historical record.

Removed functionality

  • Remove legacy reports. Closes Issue #657

  • Remove "Save & Continue" functionality from TestCase edit page

  • Renamed API methods:

    • TestCaseRun.add_log() -> TestCaseRun.add_link()
    • TestCaseRun.remove_log() -> TestCaseRun.remove_link()
    • TestCaseRun.get_logs() -> TestCaseRun.get_links()

    These methods work with URL links, which can be added to or removed from test case runs.

Bug fixes

  • Remove hard-coded timestamp in TestCase page template, References Issue #765
  • Fix handling of ?from_plan URL parameter in TestCase page
  • Make TestCase.text occupy 100% width when rendered. Fixes Issue #798
  • Enable markdown.extensions.tables. Fixes Issue #816
  • Handle form errors and default values for TestPlan new/edit. Fixes Issue #864
  • Tests + fix for failing TestCase rendering in French
  • Show color-coded statuses on dashboard page when seen with non-English language
  • Refactor check for confirmed test cases when editing to work with translations
  • Fix form values when filtering test cases inside TestPlan. Fixes Issue #674 (@marion2016)
  • Show delete icon for attachments. Fixes Issue #847


  • Remove unused .current_user instance attribute
  • Remove EditCaseForm and use NewCaseForm instead, References Issue #708, Issue #812
  • Fix "Select All" checkbox. Fixes Issue #828 (Rady)


How to upgrade

If you are using Kiwi TCMS as a Docker container then:

cd Kiwi/
git pull
docker-compose down
docker pull kiwitcms/kiwi
docker pull centos/mariadb
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate

Don't forget to backup before upgrade!

WARNING: kiwitcms/kiwi:latest and docker-compose.yml will always point to the latest available version! If you have to upgrade in steps, e.g. between several intermediate releases, you have to modify the above workflow:

# starting from an older Kiwi TCMS version
docker-compose down
docker pull kiwitcms/kiwi:<next_upgrade_version>
edit docker-compose.yml to use kiwitcms/kiwi:<next_upgrade_version>
docker-compose up -d
docker exec -it kiwi_web /Kiwi/manage.py migrate
# repeat until you have reached latest

Happy testing!

The Product Security Blog has moved!

Posted by Red Hat Security on March 19, 2019 07:38 PM

Red Hat Product Security has joined forces with other security teams inside Red Hat to publish our content in a common venue using the Security channel of the Red Hat Blog. This move provides a wider variety of important Security topics, from experts all over Red Hat, in a more modern and functional interface. We hope everyone will enjoy the new experience!






Epiphany Technology Preview Upgrade Requires Manual Intervention

Posted by Michael Catanzaro on March 19, 2019 06:39 PM

Jan-Michael has recently changed Epiphany Technology Preview to use a separate app ID. Instead of org.gnome.Epiphany, it will now be org.gnome.Epiphany.Devel, to avoid clashing with your system version of Epiphany. You can now have separate desktop icons for both system Epiphany and Epiphany Technology Preview at the same time.

Because flatpak doesn’t provide any way to rename an app ID, this means it’s the end of the road for previous installations of Epiphany Technology Preview. Manual intervention is required to upgrade. Fortunately, this is a one-time hurdle, and it is not hard:

$ flatpak uninstall org.gnome.Epiphany

Uninstall the old Epiphany…

$ flatpak install gnome-apps-nightly org.gnome.Epiphany.Devel org.gnome.Epiphany.Devel.Debug

…install the new one, assuming that your remote is named gnome-apps-nightly (the name used locally may differ), and that you also want to install debuginfo to make it possible to debug it…

$ mv ~/.var/app/org.gnome.Epiphany ~/.var/app/org.gnome.Epiphany.Devel

…and move your personal data from the old app to the new one.

Then don’t forget to make it your default web browser under System Settings -> Details -> Default Applications. Thanks for testing Epiphany Technology Preview!

Of debugging Ansible Tower and underlying cloud images

Posted by Roland Wolters on March 19, 2019 03:09 PM
<figure class="alignright is-resized">Ansible Logo</figure>

Recently I was experimenting with Tower’s isolated nodes feature – but somehow it did not work in my environment. Debugging told me a lot about Ansible Tower – and also why you should not trust arbitrary cloud images.

Background – Isolated Nodes

Ansible Tower has a nice feature called “isolated nodes”. Those are dedicated Tower instances which can manage nodes in separated environments – basically an Ansible Tower Proxy.

An Isolated Node is an Ansible Tower node that contains a small piece of software for running playbooks locally to manage a set of infrastructure. It can be deployed behind a firewall/VPC or in a remote datacenter, with only SSH access available. When a job is run that targets things managed by the isolated node, the job and its environment will be pushed to the isolated node over SSH, where it will run as normal.

Ansible Tower Feature Spotlight: Instance Groups and Isolated Nodes

Isolated nodes are especially handy when you setup your automation in security sensitive environments. Think of DMZs here, of network separation and so on.

I was fooling around with a clustered Tower installation on RHEL 7 VMs in a cloud environment when I ran into trouble, though.

My problem – Isolated node unavailable

Isolated nodes – like instance groups – have a status inside Tower: if things are problematic, they are marked as unavailable. And this is what happened with my instance isonode.remote.example.com running in my lab environment:

<figure class="wp-block-image"><figcaption>Ansible Tower showing an instance node as unavailable</figcaption></figure>

I tried to turn it “off” and “on” again with the button in the control interface. That made the node available, and it was even able to execute jobs – but it quickly became unavailable again soon after.


So what happened? The Tower logs showed a Python error:

# tail -f /var/log/tower/tower.log
fatal: [isonode.remote.example.com]: FAILED! => {"changed": false,
"module_stderr": "Shared connection to isonode.remote.example.com
closed.\r\n", "module_stdout": "Traceback (most recent call last):\r\n
File \"/var/lib/awx/.ansible/tmp/ansible-tmp-1552400585.04
-60203645751230/AnsiballZ_awx_capacity.py\", line 113, in <module>\r\n
_ansiballz_main()\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 105, in
_ansiballz_main\r\n    invoke_module(zipped_mod, temp_path,
ANSIBALLZ_PARAMS)\r\n  File \"/var/lib/awx/.ansible/tmp/ansible-tmp
-1552400585.04-60203645751230/AnsiballZ_awx_capacity.py\", line 48, in
invoke_module\r\n    imp.load_module('__main__', mod, module, MOD_DESC)\r\n
File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 74, in
<module>\r\n  File \"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\",
line 60, in main\r\n  File
\"/tmp/ansible_awx_capacity_payload_6p5kHp/__main__.py\", line 27, in
get_cpu_capacity\r\nAttributeError: 'module' object has no attribute
'cpu_count'\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact
error", "rc": 1}

PLAY RECAP *********************************************************************
isonode.remote.example.com : ok=0    changed=0    unreachable=0    failed=1  

Apparently a Python function was missing. If we check the code, we see that line 27 of the file awx_capacity.py indeed calls the function psutil.cpu_count():

def get_cpu_capacity():
    env_forkcpu = os.getenv('SYSTEM_TASK_FORKS_CPU', None)
    cpu = psutil.cpu_count()

Support for this function was added in version 2.0 of psutil:

424: [Windows] installer for Python 3.X 64 bit.
427: number of logical and physical CPUs (psutil.cpu_count()).

psutil history

Note the date here: 2014-03-10 – pretty old! I checked the version of the installed package, and indeed it was pre-2.0:

$ rpm -q --queryformat '%{VERSION}\n' python-psutil

To be really sure and also to ensure that there was no weird function backporting, I checked the function call directly on the Tower machine:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import inspect
>>> import psutil as module
>>> functions = inspect.getmembers(module, inspect.isfunction)
>>> functions
[('_assert_pid_not_reused', <function _assert_pid_not_reused at
0x7f9eb10a8d70>), ('_deprecated', <function deprecated at 0x7f9eb38ec320>),
('_wraps', <function wraps at 0x7f9eb414f848>), ('avail_phymem', <function
avail_phymem at 0x7f9eb0c32ed8>), ('avail_virtmem', <function avail_virtmem at
0x7f9eb0c36398>), ('cached_phymem', <function cached_phymem at
0x7f9eb10a86e0>), ('cpu_percent', <function cpu_percent at 0x7f9eb0c32320>),
('cpu_times', <function cpu_times at 0x7f9eb0c322a8>), ('cpu_times_percent',
<function cpu_times_percent at 0x7f9eb0c326e0>), ('disk_io_counters',
<function disk_io_counters at 0x7f9eb0c32938>), ('disk_partitions', <function
disk_partitions at 0x7f9eb0c328c0>), ('disk_usage', <function disk_usage at
0x7f9eb0c32848>), ('get_boot_time', <function get_boot_time at
0x7f9eb0c32a28>), ('get_pid_list', <function get_pid_list at 0x7f9eb0c4b410>),
('get_process_list', <function get_process_list at 0x7f9eb0c32c08>),
('get_users', <function get_users at 0x7f9eb0c32aa0>), ('namedtuple',
<function namedtuple at 0x7f9ebc84df50>), ('net_io_counters', <function
net_io_counters at 0x7f9eb0c329b0>), ('network_io_counters', <function
network_io_counters at 0x7f9eb0c36500>), ('phymem_buffers', <function
phymem_buffers at 0x7f9eb10a8848>), ('phymem_usage', <function phymem_usage at
0x7f9eb0c32cf8>), ('pid_exists', <function pid_exists at 0x7f9eb0c32140>),
('process_iter', <function process_iter at 0x7f9eb0c321b8>), ('swap_memory',
<function swap_memory at 0x7f9eb0c327d0>), ('test', <function test at
0x7f9eb0c32b18>), ('total_virtmem', <function total_virtmem at
0x7f9eb0c361b8>), ('used_phymem', <function used_phymem at 0x7f9eb0c36050>),
('used_virtmem', <function used_virtmem at 0x7f9eb0c362a8>), ('virtmem_usage',
<function virtmem_usage at 0x7f9eb0c32de8>), ('virtual_memory', <function
virtual_memory at 0x7f9eb0c32758>), ('wait_procs', <function wait_procs at

Searching for a package origin

So how to solve this issue? My first idea was to get this working by porting the code to the multiprocessing standard library:

# python
Python 2.7.5 (default, Sep 12 2018, 05:31:16) 
[GCC 4.8.5 20150623 (Red Hat 4.8.5-36)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import multiprocessing
>>> cpu = multiprocessing.cpu_count()
>>> cpu
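Since the code ultimately just needs a CPU count, the two approaches could be combined defensively (this is my own sketch, not the actual Tower fix): try psutil.cpu_count(), which was added in psutil 2.0, and fall back to the standard library when an old or missing psutil gets in the way.

```python
import multiprocessing

# Try psutil first; versions older than 2.0 lack cpu_count() entirely,
# so guard against both a missing module and a missing attribute.
try:
    import psutil
    cpu = psutil.cpu_count()  # may also return None if undeterminable
except (ImportError, AttributeError):
    cpu = None

if not cpu:
    # Fall back to the standard library (available since Python 2.6)
    cpu = multiprocessing.cpu_count()
```

With a guard like this, an ancient psutil baked into a cloud image would degrade gracefully instead of crashing the capacity check.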

But while I was filing a bug report I wondered why RHEL shipped such an ancient library. After all, RHEL 7 was released in June 2014, and psutil had had cpu_count available since early 2014! And indeed, a quick search for the package via the Red Hat package search showed a weird result: python-psutil was never part of base RHEL 7! It was only shipped as part of some very, very old OpenStack channels:

<figure class="wp-block-image"><figcaption>access.redhat.com package search, results for python-psutil</figcaption></figure>

Newer OpenStack channels in fact come along with newer versions of python-psutil.

So how did this outdated package end up on this RHEL 7 image? Why was it never updated?

The cloud image is to blame! The package was installed on it – most likely during the creation of the image: python-psutil is needed for OpenStack Heat, so I assume that these RHEL 7 images were once created via OpenStack and then used as the default image in this demo environment.

And after the initial creation of the image the Heat packages were forgotten. In the meantime the image was updated to newer RHEL versions, snapshots were created as new defaults and so on. But since the package in question was never part of the main RHEL repos, it was never changed or removed. It just stayed there. Waiting, apparently, for me 😉


This issue showed me how tricky cloud images can be. Think about your own cloud images: have you really checked all of them and verified that no package, no start-up script, no configuration was changed from the Linux distribution vendor’s base setup?

With RPMs this is still manageable: you can track whether any installed packages are not present in the existing channels. But did someone install something with pip? Or in any other way?
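For the RPM side, dnf can list installed packages that no enabled repository provides (`dnf repoquery --extras`). At its core that check is just a set difference; here is a sketch of the same idea with hypothetical sample lists standing in for real package data:

```shell
# On a real host you would run:  dnf repoquery --extras
# The equivalent set difference on two sorted package lists
# (sample data here is purely illustrative):
printf '%s\n' bash python-psutil vim | sort > installed.txt
printf '%s\n' bash vim | sort > available.txt

# Lines in installed.txt that have no match in available.txt:
comm -23 installed.txt available.txt
```

Anything this turns up is a candidate for an image-baked leftover that your update management will never touch.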

Take my case: an outdated version of a library was called instead of a much, much more recent one. Had there been a serious security issue with the library in the meantime, I would have been exposed even though my update management did not report any library as needing an update.

I learned my lesson to be more critical of cloud images, and to check them in more detail in the future to avoid nasty surprises in production. I can only recommend that you do the same.


Introducing flat-manager

Posted by Alexander Larsson on March 19, 2019 01:20 PM

A long time ago I wrote a blog post about how to maintain a Flatpak repository.

It is still a nice, mostly up to date, description of how Flatpak repositories work. However, it doesn’t really have a great answer to the issue called syncing updates in the post. In other words, it really is more about how to maintain a repository on one machine.

In practice, at least on a larger scale (like e.g. Flathub) you don’t want to do all the work on a single machine like this. Instead you have an entire build-system where the repository is the last piece.

Enter flat-manager

To support this I’ve been working on a side project called flat-manager. It is a service written in Rust that manages Flatpak repositories. Recently we migrated Flathub to use it, and it seems to work quite well.

At its core, flat-manager serves and maintains a set of repos, and has an API that lets you push updates to it from your build-system. However, the way it is set up is a bit more complex, which allows some interesting features.

Core concept: a build

When updating an app, the first thing you do is create a new build, which just allocates an id that you use in later operations. Then you can push one or more uploads to this id.

This separation of the build creation and the upload is very powerful, because it allows you to upload the app in multiple operations, potentially from multiple sources. For example, in the Flathub build-system each architecture is built on a separate machine. Before flat-manager we had to collect all the separate builds on one machine before uploading to the repo. In the new system each build machine uploads directly to the repo with no middle-man.

Committing or purging

An important idea here is that the new build is not finished until it has been committed. The central build-system waits until all the builders report success before committing the build. If any of the builds fail, we purge the build instead, making it as if the build never happened. This means we never expose partially successful builds to users.

Once a build is committed, flat-manager creates a separate repository containing only the new build. This allows you to use Flatpak to test the build before making it available to users.

This makes builds useful even for builds that were never supposed to be generally available. Flathub uses this for test builds: if you make a pull request against an app, it will automatically be built, and a comment will be added to the pull request with the build results and a link to the repo where you can test it.


Once you are satisfied with the new build you can trigger a publish operation, which will import the build into the main repository and do all the required operations, like:

  • Sign builds with GPG
  • Generate static deltas for efficient updates
  • Update the appstream data and screenshots for the repo
  • Generate flatpakref files for easy installation of apps
  • Update the summary file
  • Call out to scripts that let you do local customization

The publish operation is actually split into two steps: first it imports the build result into the repo, and then it queues a separate job to do all the updates needed for the repo. This way, if multiple builds are published at the same time, the update can be shared. This saves time on the server, but it also means fewer updates to the metadata, which means less churn for users.
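The shared update can be pictured as a coalescing job queue. The sketch below is my own illustration of the idea, not flat-manager's actual code: every publish requests a repo update, but a repo whose update is already pending is not enqueued a second time.

```python
from collections import deque

class UpdateQueue:
    """Coalescing job queue: at most one pending update job per repo."""

    def __init__(self):
        self.pending = set()   # repos with an update already queued
        self.jobs = deque()    # queued update jobs, in order

    def request_update(self, repo):
        # Coalesce: skip enqueueing if an update for this repo is pending.
        if repo not in self.pending:
            self.pending.add(repo)
            self.jobs.append(repo)

    def run_next(self):
        repo = self.jobs.popleft()
        self.pending.discard(repo)
        return repo

q = UpdateQueue()
for _ in range(3):              # three builds published back to back...
    q.request_update("stable")  # ...only queue one update job for "stable"
```

Three publishes in quick succession thus trigger a single metadata update, which is where the saved server time and reduced client churn come from.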

You can use whatever policy you want for how and when to publish builds. Flathub lets individual maintainers choose, but by default successful builds are published after 3 hours.

Delta generation

The traditional way to generate static deltas is to run flatpak build-update-repo --generate-static-deltas. However, this is a very computationally expensive operation that you might not want to do on your main repository server. It’s also not very flexible in which deltas it generates.

To minimize the server load flat-manager allows external workers that generate the deltas on different machines. You can run as many of these as you want and the deltas will be automatically distributed to them. This is optional, and if no workers connect the deltas will be generated locally.

flat-manager also has configuration options for which deltas should be generated. This allows you to avoid generating unnecessary deltas and to add extra levels of deltas where needed. For example, Flathub no longer generates deltas for sources and debug refs, but we have instead added multiple levels of deltas for runtimes, allowing you to go efficiently to the current version from either one or two versions ago.

Subsetting tokens

flat-manager uses JSON Web Tokens to authenticate API clients. This means you can assign different permissions to different clients. Flathub uses this to give minimal permissions to the build machines: the tokens they get only allow uploads to the specific build they are currently handling.

This also allows you to hand out access to parts of the repository namespace. For instance, the Gnome project has a custom token that allows them to upload anything in the org.gnome.Platform namespace in Flathub. This way Gnome can control the build of their runtime and upload a new version whenever they want, but they can’t (accidentally or deliberately) modify any other apps.
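As a rough illustration of what such a scoped token could look like, here is a minimal HS256-signed JWT built with only the Python standard library. The claim names (scope, prefixes) and their values are assumptions for illustration only, not flat-manager's actual token schema.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    # JWT uses base64url encoding without padding
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def make_token(claims: dict, secret: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    # Signature covers "header.payload" with an HMAC-SHA256
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256)
    return f"{header}.{payload}.{b64url(sig.digest())}"

# Hypothetical claims: a token that may only upload refs under one prefix.
token = make_token({"sub": "gnome-build-machine",
                    "scope": ["upload"],
                    "prefixes": ["org.gnome.Platform"]},
                   b"server-secret")
```

The server side would verify the signature and then check every requested operation against the token's claims, which is what keeps a namespace-scoped client from touching other apps.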


I need to mention Rust here too. This is my first real experience with using Rust, and I’m very impressed by it. In particular, I’m impressed by the sense of trust I have in the code once I’ve got it past the compiler. The compiler caught a lot of issues, and once things built I saw very few bugs at runtime.

It can sometimes be a lot of work to express the code in a way that Rust accepts, which makes it not an ideal language for sketching out ideas. But for production code it really excels, and I can heartily recommend it!

Future work

Most of the features on the initial list for flat-manager are now there, so I don’t expect it to see a lot of work in the near future.

However, there is one more feature that I want to see: the ability to (automatically) create subset versions of the repository. In particular, we want to produce a version of Flathub containing only free software.

I have the initial plans for how this will work, but it is currently blocking on some work inside OSTree itself. I hope this will happen soon though.

[F30] Take part in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on March 19, 2019 07:00 AM

Today, Tuesday 19 March, is a day dedicated to a specific kind of testing: Fedora's internationalization. During the development cycle, the quality assurance team dedicates a few days to particular components or new features, in order to surface as many problems as possible around them.

It also provides a list of specific tests to run. You simply follow them, compare your result with the expected one, and report it.

What does this test day cover?

As with every Fedora release, updating its tools often brings new strings to translate and new tooling for language support (Asian languages in particular).

To encourage the use of Fedora in every country of the world, it is worth making sure that everything related to Fedora's internationalization is tested and works. Notably because part of it must already work from the installation LiveCD (that is, without updates).

Today's tests cover:

  • correct behaviour of ibus for keyboard input handling;
  • font customization;
  • automatic installation of language packs for installed software, based on the system language;
  • working default translations of applications;
  • the new language-pack dependencies that pull in the required fonts and input methods.

Of course, given these criteria, not all of the tests can necessarily be carried out unless you know a Chinese language. But as French speakers, many of these issues concern us, and reporting problems matters: no other language community will identify integration problems with the French language for us.

How to take part?

You can go to the test day page to see the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it needs to be reported on Bugzilla. If you are not sure how, feel free to consult the corresponding documentation.

And although a specific day is dedicated to these tests, you can still run them a few days later without any problem: the results will remain broadly relevant.

Contribution opportunity! Quick docs!

Posted by Fedora Community Blog on March 19, 2019 06:30 AM
Photo by Peter Lewicki on Unsplash

Quick docs are meant to be short articles on the official Fedora documentation site that cover commonly used workflows/tools.

Unlike wiki pages, which are generally unreviewed, information on quick-docs follows the PR (peer review + pull request) process. The information added there is therefore more trustworthy, and it should be, given that quick docs is listed on the official Fedora documentation website.

Role 1: reviewer

All new quick-docs are added using the pull-request model. They all, therefore, must be reviewed.

So, a really easy way of contributing to quick-docs is to find a pull request that addresses a topic you know about and review it.

Similar to the package maintainers system, “review swaps” are encouraged: you review my pull request, and in return I will review yours. Not only does this ensure quicker reviews, it also helps contributors get to know each other! Please, feel free to request documentation review swaps on the mailing lists!

Role 2: writer

This role is on the other side of the review: you write new documents about topics that interest you. It could be anything, anything at all, that you think is worth documenting. Quite a few of us document various tools and techniques on our blogs—why not put these up on quick-docs if they are general enough to be of interest to a wider audience?

Documentation writers need reviewers, and reviewers need documentation writers. So, while I’ve listed these as separate roles, most people will do both!

Skills you will need/learn as a quick-docs contributor

As a contributor to quick-docs, you will need/learn the following skills:

You will notice that these skills are mostly transferable to other work. This makes contributing to quick-docs even more useful, especially for beginners.

I mention this purely for the sake of completeness, for it should be obvious to anyone who sees the Fedora community go about its work: the docs team and the Fedora community in general, are always looking to spread whatever knowledge we have to others in the community or those looking to join the community!

Current requirement: improving automatically exported wiki pages

A large body of quick-docs consists of pages from the wiki that were auto-exported. These are not yet reviewed. So, we do need more community members looking at these pages, catching errors, and improving them. This is also one of the easiest ways of contributing to quick docs:

  • pick a quick-docs page.
  • verify whether the information is correct.
  • open a pull request with suggested improvements.
  • review others’ pull requests.

Get in touch, get started!

So, let’s get started? Here are the relevant links:

Happy writing!

The post Contribution opportunity! Quick docs! appeared first on Fedora Community Blog.

Happy St Patrick's Day, IFSO AGM and meeting sock puppets

Posted by Daniel Pocock on March 18, 2019 02:18 PM

Happy St Patrick's day (17 March)

In February, we had an annual general meeting (AGM) of the Irish Free Software Organization in Dublin.

If you are in Ireland, please consider joining IFSO or making a donation.

The sock puppet next door

There is a very interesting story about how this meeting came about.

When discussions took place in the FSFE community about the decision to abolish elections, approximately 15 people participated, with about 10 people against democracy and only about 5 people speaking up in favour.

Looking at those numbers is deceptive: of the 10 people speaking against elections, all were in what other people perceive as the cabal, a group of 27 people who have full membership, over and above the fellows. Cabal people hadn't lost anything in the constitutional change. The 5 people speaking in favour of democracy were not members of the cabal; they were ordinary members of the 1500-strong fellowship. In such circumstances, is it fair to extrapolate the voice of those 5 people and consider it to be representative of the majority of 1500 fellows? Or do we accept the more simplistic 10 against 5? The more simplistic case, where it is not obvious to outsiders that the 10 people are cabal members, is one of those fake community situations.

Imagine if every participant in that conversation had to state in their email signature whether they were cabal or fellow, or even better, if the emails could be colour-coded by membership class. Would it be easier to see the correlation between the vested interests and the opinions?

In any case, the more outspoken members of the cabal tried to intimidate the fellows, trying to discredit them with personal attacks and calling some of them sock puppets. As fellowship representative, I simply emailed some of these people personally asking "can you please tell me if you are a sock-puppet or a fellow?"

What I found was surprising: not only were they real people, one of them lives just around the corner from my home in Dublin. Stefan and I met for burgers late in 2018 and helped put things into motion to reboot the IFSO.

One fellow told me he (or she?) was not using their real name because the FSFE cabal censors discussions about governance issues, blocking people from the mailing lists or moderating their posts. But they are still a real person making real contributions to the organization. Another fellow observed that one member of the cabal, Cryptie, doesn't use her real name and asked why should anybody else?

Another thought that crossed my mind after meeting Stefan: why is it easier for me to meet a so-called sock-puppet in real life than it is to meet the leader of the Debian project when serious issues need to be discussed? The DPL's refusal to meet people in person, while deciding he knows them well enough to give opinions about them to people in other communities, feels like one of the major reasons there has been stress for many people in Debian recently.

Now Debian has similar problems to FSFE: undemocratic behaviour by the leaders, censorship, and then, for fear of retribution, it looks like some people have stopped using their real names when posting on the debian-project mailing list, while other people may erroneously be accused of not using real names. With over five thousand people subscribed to the list, I don't feel that two people with similar names is a compelling example of sock-puppeteering, and some of the accusations are uncomfortable for multiple people. Even fewer people will dare to open their mouths next.

This brings us to another of the benefits of setting up local associations like IFSO: people can meet face to face more often, maybe monthly and then nobody is wondering if they are corresponding with a sock puppet. FSFE's 27 members (what they call the "General Assembly", or other people regard as a cabal) only officially meets once per year. It has become too big to function like a board or have regular meetings but too small to have the credibility that would come from acknowledging all volunteers/fellows as equal members.

According to the treasurer's report at the IFSO AGM, there is no money in the bank so there is nothing for sock puppets to fight over anyway. So come along and join the next meeting for some fun.

New feature in fedora-upgrade

Posted by Miroslav Suchý on March 18, 2019 02:05 PM

I have just released a new version of fedora-upgrade (an unofficial tool to upgrade Fedora). It has two nice features:

Previously you were able to upgrade only to the next version, e.g., from Fedora 28 to Fedora 29; you were not able to upgrade directly from Fedora 28 to Fedora 30. This is now possible. You can run:

fedora-upgrade --upgrade-to=30

and it will try to upgrade to Fedora 30 - no matter what your current version is. Be warned: the more releases you skip, the more bugs will pop up.

I have several machines in the cloud whose root volume is pretty small (4 GB) and which I (for various reasons) prefer to upgrade rather than terminating and rebuilding from scratch using an Ansible playbook. Upgrading a system whose rootfs is 4 GB with 2 GB already used is painful: you need 2 GB for the DNF cache to download the packages, and then DNF tells you that you are out of space. I usually work around that by mounting /var/cache/dnf as tmpfs and unmounting it after the upgrade. I finally found time to script that, so you can use:

fedora-upgrade --tmpfs=3G

to mount /var/cache/dnf as 3GB big tmpfs.

The new version just landed in Bodhi - tomorrow it will be in updates-testing.

Let’s try dwm — dynamic window manager

Posted by Fedora Magazine on March 18, 2019 08:01 AM

If you like efficiency and minimalism, and are looking for a new window manager for your Linux desktop, you should try dwm — dynamic window manager. Written in under 2000 standard lines of code, dwm is an extremely fast yet powerful and highly customizable window manager.

You can dynamically choose between tiling, monocle and floating layouts, organize your windows into multiple workspaces using tags, and navigate through them quickly using keyboard shortcuts. This article helps you get started using dwm.


To install dwm on Fedora, run:

$ sudo dnf install dwm dwm-user

The dwm package installs the window manager itself, and the dwm-user package significantly simplifies configuration, which will be explained later in this article.

Additionally, to be able to lock the screen when needed, we’ll also install slock — a simple X display locker.

$ sudo dnf install slock

However, you can use a different one based on your personal preference.

Quick start

To start dwm, choose the dwm-user option on the login screen.


After you log in, you’ll see a very simple desktop. In fact, the only thing there will be a bar at the top, listing the nine tags that represent workspaces and a []= symbol that represents the layout of your windows.

Launching applications

Before looking into the layouts, first launch some applications so you can play with the layouts as you go. Apps can be started by pressing Alt+p and typing the name of the app followed by Enter. There’s also a shortcut Alt+Shift+Enter for opening a terminal.

Now that some apps are running, have a look at the layouts.


There are three layouts available by default: the tiling layout, the monocle layout, and the floating layout.

The tiling layout, represented by []= on the bar, organizes windows into two main areas: master on the left, and stack on the right. You can activate the tiling layout by pressing Alt+t.


The idea behind the tiling layout is that you have your primary window in the master area while still seeing the other ones in the stack. You can quickly switch between them as needed.

To swap windows between the two areas, hover your mouse over one in the stack area and press Alt+Enter to swap it with the one in the master area.


The monocle layout, represented by [N] on the top bar, makes your primary window take the whole screen. You can switch to it by pressing Alt+m.

Finally, the floating layout lets you move and resize your windows freely. The shortcut for it is Alt+f and the symbol on the top bar is ><>.

Workspaces and tags

Each window is assigned to a tag (1-9) listed at the top bar. To view a specific tag, either click on its number using your mouse or press Alt+1..9. You can even view multiple tags at once by clicking on their number using the secondary mouse button.

Windows can be moved between different tags by highlighting them using your mouse, and pressing Alt+Shift+1..9. 


To make dwm as minimalistic as possible, it doesn’t use typical configuration files. Instead, you modify a C header file representing the configuration, and recompile it. But don’t worry, in Fedora it’s as simple as just editing one file in your home directory and everything else happens in the background thanks to the dwm-user package provided by the maintainer in Fedora.

First, you need to copy the file into your home directory using a command similar to the following:

$ mkdir ~/.dwm
$ cp /usr/src/dwm-VERSION-RELEASE/config.def.h ~/.dwm/config.h

You can get the exact path by running man dwm-start.

Second, just edit the ~/.dwm/config.h file. As an example, let’s configure a new shortcut to lock the screen by pressing Alt+Shift+L.

Considering we’ve installed the slock package mentioned earlier in this post, we need to add the following two lines into the file to make it work:

Under the /* commands */ comment, add:

static const char *slockcmd[] = { "slock", NULL };

And the following line into static Key keys[]:

{ MODKEY|ShiftMask, XK_l, spawn, {.v = slockcmd } },

In the end, it should look as follows (the added lines are the two slockcmd ones):

/* commands */
static char dmenumon[2] = "0"; /* component of dmenucmd, manipulated in spawn() */
static const char *dmenucmd[] = { "dmenu_run", "-m", dmenumon, "-fn", dmenufont, "-nb", normbgcolor, "-nf", normfgcolor, "-sb", selbgcolor, "-sf", selfgcolor, NULL };
static const char *termcmd[]  = { "st", NULL };
static const char *slockcmd[] = { "slock", NULL };

static Key keys[] = {
/* modifier                     key        function        argument */
{ MODKEY|ShiftMask,             XK_l,      spawn,          {.v = slockcmd } },
{ MODKEY,                       XK_p,      spawn,          {.v = dmenucmd } },
{ MODKEY|ShiftMask,             XK_Return, spawn,          {.v = termcmd } },

Save the file.

Finally, just log out by pressing Alt+Shift+q and log in again. The scripts provided by the dwm-user package will recognize that you have changed the config.h file in your home directory and recompile dwm on login. And because dwm is so tiny, it’s fast enough that you won’t even notice it.
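The gist of that login-time rebuild can be sketched as follows. This is an illustrative sketch only, not the actual script shipped by dwm-user; the cached-binary path is a made-up assumption:

```shell
#!/bin/sh
# Illustrative sketch only -- not the real dwm-user login script.
# Rebuild dwm when the user's config.h is newer than the compiled binary.
CONFIG="$HOME/.dwm/config.h"
BINARY="$HOME/.cache/dwm/dwm"   # assumed location, not necessarily the real one

needs_rebuild() {
    # true if the binary is missing, or the config is newer than it
    [ ! -e "$2" ] || [ "$1" -nt "$2" ]
}

if needs_rebuild "$CONFIG" "$BINARY"; then
    echo "config.h changed, recompiling dwm..."
    # (cd "$DWM_SRC" && make)   # compile step elided; $DWM_SRC is hypothetical
fi
```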

You can try locking your screen now by pressing Alt+Shift+L, and then logging back in again by typing your password and pressing enter.


If you like minimalism and want a very fast yet powerful window manager, dwm might be just what you’ve been looking for. However, it probably isn’t for beginners. There might be a lot of additional configuration you’ll need to do in order to make it just as you like it.

To learn more about dwm, see the project’s homepage at https://dwm.suckless.org/.

Episode 137.5 - Holy cow Beto was in the cDc, this is awesome!

Posted by Open Source Security Podcast on March 18, 2019 12:01 AM
Josh and Kurt talk about Beto being in the Cult of the Dead Cow (cDc). This is a pretty big deal in a very good way. We hit on some history, why it's a great thing, what we can probably expect from opponents. There's even some advice at the end how we can all help. We need more politicians with backgrounds like this.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/9037547/height/90/theme/custom/thumbnail/yes/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

    GNU Tools Cauldron 2019

    Posted by Mark J. Wielaard on March 15, 2019 11:11 PM

    Simon Marchi just announced that the next GNU Tools Cauldron will be in Montreal, Canada from Thursday September 12 till Sunday September 15.

    The purpose of this workshop is to gather all GNU tools developers, discuss current/future work, coordinate efforts, exchange reports on ongoing efforts, discuss development plans for the next 12 months, developer tutorials and any other related discussions. This year, the GNU Tools Cauldron crosses the Atlantic Ocean and lands in Montréal, Canada. We are inviting every developer working in the GNU toolchain: GCC, GDB, binutils, runtimes, etc.


    The conference is free to attend, but registration in advance is required.

    FPgM report: 2019-11

    Posted by Fedora Community Blog on March 15, 2019 09:35 PM
    Fedora Program Manager weekly report on Fedora Project development and progress

    Here’s your report of what has happened in Fedora Program Management this week.

    I’ve set up weekly office hours in #fedora-meeting-1. Drop by if you have any questions or comments about the schedule, Changes, elections, or anything else. The Fedora 30 Beta Go/No-Go and Release Readiness meetings are next week.


    Meetings and test days

    Fedora 30 Status

    Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.

    Blocker bugs

    Bug ID | Blocker status | Component | Bug status


    An updated list of incomplete changes is available in Bugzilla.

    Fedora 31 Status



    Submitted to FESCo

    Approved by FESCo

    The post FPgM report: 2019-11 appeared first on Fedora Community Blog.

    Two new policy proposals

    Posted by Fedora Community Blog on March 15, 2019 05:19 PM

    This post shares two new proposals for changes to Fedora Council policy.

    Policy changes

    While addressing another issue recently, the Fedora Council realized we don’t have a good policy for making changes to Council policies. That seems like a mistake we should fix. So I have submitted a pull request to the Council Docs repo that lays out a policy:

    Proposed changes to Fedora Council policies must be publicly announced on the council-discuss mailing list and in a Fedora Community Blog post in order to get feedback from the community. After a minimum of two calendar weeks, the Council may vote on the proposed change using the full consensus voting model. After approval, the change is reflected on the Council policies page.

    The intention is to make policy changes transparent and allow for community feedback.

    Channel bans

    In addition, we realized that we don’t have an explicit policy about issuing bans in channels for persistent off-topic conversation. We want to give teams within Fedora autonomy to act on their own within the boundaries of our Four Foundations and community norms.

    The Council developed a policy proposal that allows channel operators to ban individuals for persistent off-topic posting but makes it clear that the ban should only apply to affected channels:

    Teams within Fedora have the freedom to decide what is on- and off-topic for their fora (IRC channel, mailing list, Telegram channel, et cetera). Moderators may ban participants for repeatedly engaging in off-topic discussion, however they must file a ticket with the Council’s Code of Conduct issue tracker to report the ban. Bans for being off-topic in one channel may not be extended to other channels unless the behavior is displayed in that channel as well. In this case, each ban should be treated as a separate issue with its own ticket. Community members who wish to appeal the ban may file a ticket with the Council.

    To be clear, this is not intended for conduct that violates our Code of Conduct.

    Feedback welcome

    The Fedora Council wants community input. Please provide questions or comments in the Pagure pull requests:

    These will be submitted for a Council vote on Monday, 1 April.

    The post Two new policy proposals appeared first on Fedora Community Blog.

    NetworkManager 1.16 released, adding WPA3-Personal and WireGuard support

    Posted by Lubomir Rintel on March 15, 2019 05:00 PM

    NetworkManager needs no introduction. In the fifteen years since its initial release, it has reached the status of the standard Linux network configuration daemon of choice for all major Linux distributions. What may need some introduction, on the other hand, are the features of its 28th major release.

    Ladies and gentlemen, please welcome: NetworkManager-1.16.

    Guarding the Wire

    Unless you’ve been living under a rock for the last year, there’s a good chance you’ve heard of WireGuard. It is a brand new secure protocol for creating IPv4 and IPv6 Virtual Private Networks. It aims to be much simpler than IPsec, a traditional protocol for the job, hoping to accelerate the adoption and maintainability of the code base.

    Unlike other VPN solutions NetworkManager supports, WireGuard tunnelling will be entirely handled by the Linux kernel. This has advantages in terms of performance, and also removes the need for a VPN plugin. We’ve started work on supporting WireGuard tunnels as first-class citizens and once the kernel bits settle, we’ll be ready.

    More detail in Thomas’ article.

    Wi-Fi goodies

    Good Wi-Fi support is probably why many users choose NetworkManager on their laptops, and as always there are improvements in this area too. When wpa_supplicant is new enough, we’re now able to use SAE authentication, as specified by the recent WPA3-Personal standard. This results in better security for password-protected home networks.

    New NetworkManager adds support for pairing with Wi-Fi Direct (also known as Wi-Fi P2P) capable devices. Read more in an article by Benjamin Berg, author of GNOME Screencast, who also contributed the functionality to NetworkManager.

    As usual, there are also improvements to the IWD backend, an alternative to the venerable wpa_supplicant. With NetworkManager 1.16, users of IWD will be able to create Wi-Fi hot spots or take part in Ad-Hoc networks.

    Network booting

    Starting with the new version, it is possible to run NetworkManager early in boot, prior to mounting the root filesystem. A dracut module converts network configuration provided on the kernel command line into keyfiles ready to be used by NetworkManager. Once NetworkManager succeeds in bringing up the network, it terminates, leaving a state file for the real NetworkManager instance to pick up once the system is booted up.

    This removes some redundancy and makes the network boot both more capable and robust.
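    As a rough illustration (the exact keyfile the dracut module generates may differ; the interface name and addresses here are made up), a kernel command line such as ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none would be turned into a keyfile along these lines:

```
[connection]
id=ens3
type=ethernet
interface-name=ens3

[ipv4]
method=manual
address1=192.0.2.10/24,192.0.2.1
```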

    Connectivity checks

    Finally, new NetworkManager is able to be more precise in assessing connectivity status. Under the right conditions (basically, systemd-resolved being available, not necessarily as default), we’re now able to assess connectivity status on a per-device basis and check IPv4 and IPv6 separately.

    This will make it possible to prioritize default routes on internet-connected interfaces.

    What’s next?

    NetworkManager 1.18 is likely to see support for new Wi-Fi features; perhaps DPP and meshing. We’re also removing libnm-glib, since we no longer love it and nobody uses it anymore. Such is life.

    What else? You decide! As always, even though patch submissions are what make us the happiest, we also gladly take suggestions. Our issue tracker is open.


    NetworkManager wouldn’t be what it is without contributions of hundreds of developers and translators worldwide. Here are the brave ones who contributed to NetworkManager since the last stable release: Aleksander Morgado, Andrei Dziahel, Andrew Zaborowski, AsciiWolf, Beniamino Galvani, Benjamin Berg, Corentin Noël, Damien Cassou, Dennis Brakhane, Evgeny Vereshchagin, Francesco Giudici, Frederic Danis, Frédéric Danis, garywill, Iñigo Martínez, Jan Alexander Steffens, Jason A. Donenfeld, Jonathan Kang, Kristjan SCHMIDT, Kyle Walker, Lennart Poettering, Li Song, Lubomir Rintel, luz.paz, Marco Trevisan, Michael Biebl, Patrick Talbert, Piotr Drąg, Rafael Fontenelle, scootergrisen, Sebastien Fabre, Soapux, Sven Schwermer, Taegil Bae, Thomas Haller, Yuri Chornoivan and Yu Watanabe.

    Thank you!

    Test Days: Internationalization (i18n) features for Fedora 30

    Posted by Fedora Community Blog on March 15, 2019 12:58 PM

    All this week, we will be testing for i18n features in Fedora 30. Those are as follows:

    How to participate

    Most of the information is available on the Test Day wiki page. In case of doubts, feel free to send an email to the testing team mailing list.

    Though it is a test day, we normally keep it on for the whole week. If you don’t have time tomorrow, feel free to complete it in the coming few days and upload your test results.

    Let’s test and make sure this works well for our users!

    The post Test Days: Internationalization (i18n) features for Fedora 30 appeared first on Fedora Community Blog.

    WireGuard in NetworkManager

    Posted by Thomas Haller on March 15, 2019 12:00 PM

    WireGuard in NetworkManager

    NetworkManager 1.16 got native support for WireGuard VPN tunnels (NEWS). WireGuard is a novel VPN tunnel protocol and implementation that spawned a lot of interest. Here I will not explain how WireGuard itself works. You can find very good documentation and introduction at wireguard.com.

    Having support in NetworkManager is great for two main reasons:

    • NetworkManager provides a de facto standard API for configuring networking on the host. This allows different tools to integrate and interoperate — from cli, tui, GUI, to cockpit. All these different components may now make use of the API also for configuring WireGuard. One advantage for the end user is that a GUI for WireGuard is now within reach.
    • By configuring WireGuard with NetworkManager you get other features beyond the plain WireGuard tunnel setup. Most notably you get DNS and firewalld setup in a consistent manner.
    <figure class="wp-caption aligncenter" style="width: 583px">alice<figcaption class="wp-caption-text">For Alice it is now easy to configure WireGuard with NetworkManager.</figcaption></figure>

    NetworkManager’s support for WireGuard requires the kernel module for Linux. As of March 2019, it is not yet upstream in the mainline kernel, but it is easy to install on most distributions.

    Import an existing WireGuard profile

    The WireGuard project provides a wg-quick tool to set up WireGuard tunnels. If you are using WireGuard already, chances are that you use this tool. In that case you would have a configuration file and issue wg-quick up. Here is the example configuration file from wg-quick’s manual page:

    [Interface]
    Address = 10.192.122.1/24
    Address = 10.10.0.1/16
    SaveConfig = true
    PrivateKey = yAnz5TF+lXXJte14tji3zlMNq+hd2rYUIgJBgB3fBmk=
    ListenPort = 51820

    [Peer]
    PublicKey = xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=
    AllowedIPs = 10.192.122.3/32, 10.192.124.1/24

    [Peer]
    PublicKey = TrMvSoP4jYQlY6RIzBgbssQqY3vxI2Pi+y71lOWWXX0=
    AllowedIPs = 10.192.122.4/32, 192.168.0.0/16

    [Peer]
    PublicKey = gN65BkIKy1eCE9pP1wdc8ROUtkHLF2PfAqYdyYBz6EA=
    AllowedIPs = 10.10.10.230/32

    Let’s import this into NetworkManager:

    $ CONF_FILE="wg0.conf"
    $ nmcli connection import type wireguard file "$CONF_FILE"
    Connection 'wg0' (125d4b76-d230-47b0-9c31-bb7b9ebca861) successfully added.

    Note that the PreUp, PostUp, PreDown, and PostDown keys are ignored during import.

    You may delete the profile again with

    $ nmcli connection delete wg0
    Connection 'wg0' (125d4b76-d230-47b0-9c31-bb7b9ebca861) successfully deleted.

    About Connection Profiles

    Note that wg-quick up wg0.conf does something fundamentally different from what nmcli connection import does. When you run wg-quick up, it reads the file, configures the WireGuard tunnel, sets up addresses and routes, and exits.

    This is not what “connection import” does. NetworkManager is profile based. That means you create profiles instead of issuing ad-hoc commands that configure ephemeral settings (like ip address add, wg set, or wg-quick up). NetworkManager calls these profiles “connections”. Configuring something in NetworkManager usually boils down to create a suitable profile and “activate” it for the settings to take effect.

    nmcli connection import is just one way to create a profile. Note that the imported profile is configured to autoconnect, so quite possibly the profile gets activated right away. But regardless of that, think of “import” as creating just a profile. You do this step only once, but afterwards activate the profile many times.

    It makes no difference to NetworkManager how the profile was created. You could also create a WireGuard profile from scratch:

    $ nmcli connection add type wireguard ifname wg0 con-name my-wg0
    Connection 'my-wg0' (0d2aed05-2c7f-40ec-81ad-b1b4edd898fc) successfully added.

    And let’s look at the profile:

    $ nmcli --show-secrets connection show my-wg0
    connection.id:                       my-wg0
    connection.uuid:                     0d2aed05-2c7f-40ec-81ad-b1b4edd898fc
    connection.stable-id:                --
    connection.type:                     wireguard
    connection.interface-name:           wg0
    connection.autoconnect:              yes
    ipv4.method:                         disabled
    ipv6.method:                         ignore
    wireguard.private-key:               --
    wireguard.private-key-flags:         0 (none)
    wireguard.listen-port:               0
    wireguard.fwmark:                    0x0
    wireguard.peer-routes:               yes
    wireguard.mtu:                       0

    And finally, let’s activate it. Note that you will be asked to enter the private key, which you may generate with wg genkey:

    $ nmcli --show-secrets --ask connection up my-wg0
    Secrets are required to connect WireGuard VPN 'my-wg0'
    WireGuard private-key (wireguard.private-key): eD8wqjLABmg6ClC+6egB/dnMLbbUYSMMrDsrHUwmQlI=
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/30)

    Confirm that the VPN tunnel is now up:

    $ nmcli
    wg0: connected to my-wg0
            wireguard, sw, mtu 1420
            inet6 fe80::720b:6576:1650:d26/64
            route6 ff00::/8
            route6 fe80::/64
    $ ip link show wg0
    34: wg0: <POINTOPOINT,NOARP,UP,LOWER_UP> mtu 1420 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    $ sudo WG_HIDE_KEYS=never wg
    interface: wg0
      public key: SymChsQwTX5yZrtwtsWpYfHLMgnJpOJ25YOfs7/ImT0=
      private key: eD8wqjLABmg6ClC+6egB/dnMLbbUYSMMrDsrHUwmQlI=
      listening port: 56389

    Note that above, wireguard.private-key-flags is set to 0. The secret flags determine whether the secret is not required, stored to disk or in a keyring, or always asked for. In this case, the private key got stored to disk in /etc/NetworkManager/system-connections/.
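    For example, to avoid storing the key on disk, you could change the secret flags so that a secret agent (such as a desktop keyring) supplies it, or so that you are always asked for it. The flag values follow NetworkManager’s secret-flag semantics: 0 = stored by NetworkManager, 1 = agent-owned, 2 = always ask, 4 = not required:

```
# keep the private key in the user's secret agent instead of on disk
$ nmcli connection modify my-wg0 wireguard.private-key-flags 1

# or prompt for the key on every activation
$ nmcli connection modify my-wg0 wireguard.private-key-flags 2
```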

    This connection isn’t right yet. Let’s adjust it:

    $ nmcli connection modify my-wg0 \
        autoconnect yes \
        ipv4.method manual \
        ipv4.addresses \
        wireguard.listen-port 50000

    Check the manual for the available NetworkManager settings in the profile. Review what you configured and keep adjusting until the profile is to your liking:

    $ nmcli --overview connection show my-wg0
    connection.id:                          my-wg0
    connection.uuid:                        0d2aed05-2c7f-40ec-81ad-b1b4edd898fc
    connection.type:                        wireguard
    connection.interface-name:              wg0
    connection.timestamp:                   1551171032
    ipv4.method:                            manual
    ipv6.method:                            ignore
    wireguard.private-key-flags:            0 (none)
    wireguard.listen-port:                  50000
    GENERAL.NAME:                           my-wg0
    GENERAL.UUID:                           0d2aed05-2c7f-40ec-81ad-b1b4edd898fc
    GENERAL.DEVICES:                        wg0
    GENERAL.STATE:                          activated
    GENERAL.DEFAULT:                        no
    GENERAL.DEFAULT6:                       no
    GENERAL.SPEC-OBJECT:                    --
    GENERAL.VPN:                            no
    GENERAL.DBUS-PATH:                      /org/freedesktop/NetworkManager/ActiveConnection/30
    GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/Settings/60
    GENERAL.ZONE:                           --
    GENERAL.MASTER-PATH:                    --
    IP6.ADDRESS[1]:                         fe80::720b:6576:1650:d26/64
    IP6.ROUTE[1]:                           dst = ff00::/8, nh = ::, mt = 256, table=255
    IP6.ROUTE[2]:                           dst = fe80::/64, nh = ::, mt = 256

    Note that the above output also shows the current device information with upper-cased properties. This is because the profile is currently activated. As you modify the profile, you’ll note that the changes don’t take effect immediately. For that you have to (re-)activate the profile with

    $ nmcli connection up my-wg0

    Note that this time we don’t need to provide the private key. The key was stored to disk according to the secret flags. This will allow the profile to automatically connect in the future upon boot.

    Configuring Peers

    As of now, nmcli does not yet support configuring peers. This is a missing feature. Until this is implemented you have the following possibilities, which are all a bit inconvenient.

    1.) Import Peers from a wg-quick configuration file

    See above. This does not allow you to modify an existing profile, as nmcli connection import always creates a new profile.

    2.) Use the Python Example Script nm-wg-set

    There is a python example script. It uses pygobject with libnm and accepts parameters similar to wg set. I mention this example script to give you an idea of how you could use NetworkManager from Python (in this case based on libnm and pygobject).

    $ python nm-wg-set my-wg0 \
        fwmark 0x500 \
        peer llG3xkDWcEP4KODf45zjntuvUX0oXieRyxXdl5POYX4= \
        endpoint my-wg.example.com:4001 \
        allowed-ips \
        persistent-keepalive 120 \
        peer 2Gl0SATbfrrzxfrSkhNoRR9Jg56y533y07KtIVngAk0= \
        preshared-key \
          <(echo qoNbN/6ABe4wWyz4jh+uwX7vqRpNeGEtgAnUbwNjEug=) \
        preshared-key-flags 0
    $ WG_HIDE_KEYS=never python nm-wg-set my-wg0 
    interface:                    wg0
    uuid:                         0d2aed05-2c7f-40ec-81ad-b1b4edd898fc
    id:                           my-wg0
    private-key:                  eD8wqjLABmg6ClC+6egB/dnMLbbUYSMMrDsrHUwmQlI=
    private-key-flags:            0 (none)
    listen-port:                  50000
    fwmark:                       0x500
    peer[0].public-key:           llG3xkDWcEP4KODf45zjntuvUX0oXieRyxXdl5POYX4=
    peer[0].preshared-key-flags:  4 (not-required)
    peer[0].endpoint:             my-wg.example.com:4001
    peer[0].persistent-keepalive: 120
    peer[1].public-key:           2Gl0SATbfrrzxfrSkhNoRR9Jg56y533y07KtIVngAk0=
    peer[1].preshared-key:        qoNbN/6ABe4wWyz4jh+uwX7vqRpNeGEtgAnUbwNjEug=
    peer[1].preshared-key-flags:  0 (none)
    peer[1].persistent-keepalive: 0

    3.) Use libnm directly

    libnm is the client library for NetworkManager. It gained API for fully configuring WireGuard profiles. This is what the nm-wg-set example script above uses.

    4.) Use D-Bus directly

    NetworkManager’s D-Bus API is what all clients use — from libnm and nmcli to GUIs. NetworkManager is really all about the (D-Bus) API that it provides. Everything that a tool does with NetworkManager is also possible by using D-Bus directly. NetworkManager 1.16 introduces WireGuard support; the tools are still lacking, but the API is ready for implementing them.

    5.) Edit the Profile on Disk

    NetworkManager persists WireGuard profiles in the keyfile format. These are files under /etc/NetworkManager/system-connections, and it is always fully supported to just edit these files by hand. This is NetworkManager’s other, file-based API beside D-Bus. That leaves you with the problem of knowing exactly what to edit. Let’s look at what we got so far:

    $ sudo cat \

    The WireGuard peer settings should be pretty straightforward. See also NetworkManager’s keyfile documentation. Edit the file and issue sudo nmcli connection reload or sudo nmcli connection load /etc/NetworkManager/system-connections/my-wg0.nmconnection. This causes NetworkManager to update the profile with the changes from disk.
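    For orientation, a peer section in such a keyfile looks roughly like the following. This is a sketch based on the keyfile format; the endpoint, allowed-ips and keepalive values here are placeholders, and the public key is reused from the example above:

```
[wireguard-peer.xTIBA5rboUvnH4htodjb6e697QjLERt1NAB4mZqp8Dg=]
endpoint=example.com:51820
allowed-ips=192.0.2.0/24;
persistent-keepalive=20
```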

    Finally, reactivate the profile and check the result:

    $ nmcli connection up my-wg0 
    Connection successfully activated (D-Bus active path: /org/freedesktop/NetworkManager/ActiveConnection/31)
    $ sudo WG_HIDE_KEYS=never wg
    interface: wg0
      public key: SymChsQwTX5yZrtwtsWpYfHLMgnJpOJ25YOfs7/ImT0=
      private key: eD8wqjLABmg6ClC+6egB/dnMLbbUYSMMrDsrHUwmQlI=
      listening port: 50000
      fwmark: 0x500
    peer: llG3xkDWcEP4KODf45zjntuvUX0oXieRyxXdl5POYX4=
      allowed ips:
      persistent keepalive: every 2 minutes
    peer: 2Gl0SATbfrrzxfrSkhNoRR9Jg56y533y07KtIVngAk0=
      preshared key: qoNbN/6ABe4wWyz4jh+uwX7vqRpNeGEtgAnUbwNjEug=
      allowed ips: (none)

    Reapply and Runtime Configuration

    We said that after modifying a profile we have to fully reactivate the profile for the changes to take effect. That’s not the only way. NetworkManager supports nmcli device reapply wg0 which makes changes to the profile effective without doing a full re-activation cycle. That is less disruptive as the interface does not go down. Likewise, nmcli device modify wg0 allows you to change only the runtime configuration, without modifying the profile. It is fully supported to modify WireGuard settings of an active tunnel via reapply.

    Dynamically Resolving Endpoints

    In WireGuard, peers may have an endpoint configured but also roaming is built-in. NetworkManager supports peer endpoints specified as DNS names: it will resolve the names before configuring the IP address in kernel. NetworkManager resolves endpoint names every 30 minutes or whenever the DNS configuration of the host changes, in order to pick up changes to the endpoint’s IP address.


    In the NetworkManager profile you can configure wireguard.mtu for the MTU. In the absence of an explicit configuration, the default is used. That is different from wg-quick up, which tries to autodetect the MTU by looking at how to reach all peers. NetworkManager does not do any such automatism.

    Peer Routes, AllowedIPs and Cryptokey Routing

    In WireGuard you need to configure the “AllowedIPs” ranges for the peers. This is what WireGuard calls Cryptokey Routing. It also implies that you usually configure direct routes for these “AllowedIPs” ranges via the WireGuard tunnel. NetworkManager adds those routes automatically if the wireguard.peer-routes option of the profile is enabled (which it is by default).

    Routing All Your Traffic

    When routing all traffic via the WireGuard tunnel, the peer endpoints must still be reachable outside the tunnel.

    For other VPN plugins, NetworkManager adds a direct route to the external VPN gateway on the device that has the default route. That works well in most cases, but it is an ugly hack because NetworkManager doesn’t reliably know the correct direct route in unusual scenarios.

    NetworkManager currently does not provide any additional automatism to help you with that. As a workaround you could manually add an explicit route to the profile of the device via which the endpoint is reachable:

    $ nmcli connection modify eth0 \
        +ipv4.routes "$WG_ENDPOINT_ADDR/32"

    An alternative solution is to configure policy routing. The wg-quick tool does this with the Table=auto setting (which is the default).

    NetworkManager supports configuring routes in routing tables other than the “main” table. Hence, policy routing partly works already, by configuring "ipv4.route-table" and "ipv6.route-table". The problem is that NetworkManager currently does not support configuring the routing policy rules themselves. For now, the rules must be configured outside of NetworkManager. You could do so via a dispatcher script in /etc/NetworkManager/dispatcher.d, but yes, this is lacking. See the NetworkManager manual about dispatcher scripts.
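    A minimal sketch of such a dispatcher script might look like this. The interface name wg0, the fwmark 0x500 and the table number 51820 are assumptions matching the examples above; the actual rules depend on your setup:

```shell
#!/bin/sh
# Hypothetical /etc/NetworkManager/dispatcher.d/50-wg0-rules
# NetworkManager passes the interface as $1 and the action as $2.
IFACE="$1"
ACTION="$2"

[ "$IFACE" = "wg0" ] || exit 0

case "$ACTION" in
    up)
        # send all traffic without the tunnel's fwmark to the WireGuard table
        ip rule add not fwmark 0x500 table 51820
        ;;
    down)
        ip rule del not fwmark 0x500 table 51820
        ;;
esac
```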

    Key, Peer, and IP Address Management

    The beauty of WireGuard is its simplicity. But it also leaves all questions about key distribution, peer management and IP address assignment to the upper layers. For the moment NetworkManager does not provide additional logic on top of WireGuard and exposes just the plain settings. This leaves the user (or external tools) to manually distribute private keys and configure peers, IP addresses and routing. I expect that as WireGuard matures there will be schemes for simplifying this and NetworkManager may implement such protocols or functionality. But NetworkManager won’t come up with a homegrown, non-standard way of doing this.

    WireGuard is Layer 3 only. That means you cannot run DHCP over a WireGuard link and ipv4.method=auto is not a valid configuration. Instead, you have to configure static addresses or IPv6 link-local addresses.
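    For instance, either of the following would be a valid addressing setup for the tunnel (the IPv4 address below is a placeholder):

```
# static IPv4 addressing
$ nmcli connection modify my-wg0 ipv4.method manual ipv4.addresses 192.0.2.5/24

# or only an IPv6 link-local address
$ nmcli connection modify my-wg0 ipv6.method link-local
```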


    WireGuard, like most tunnel-based solutions, has neat applications regarding network namespaces. This is not implemented in NetworkManager yet, but we would be interested in doing so. Note that this isn’t specific to WireGuard tunnels and namespace isolation would be a useful feature in general.

    What’s next?

    • Add support for policy-routing rules (rhbz#1652653).
    • Automatically help avoid routing loops when routing all traffic.
    • Add nmcli support for configuring WireGuard peers.
    • Add WireGuard support to other NetworkManager clients, like nm-connection-editor.
    • See where management tools for WireGuard go and what NetworkManager can do to simplify management of keys, peers and addressing.
    • Provide an API in NetworkManager to isolate networks via networking namespaces. This is not specific to WireGuard but will be useful in that context.

    libinput's internal building blocks

    Posted by Peter Hutterer on March 15, 2019 06:15 AM

    Ho ho ho, let's write libinput. No, of course I'm not serious, because no-one in their right mind would utter "ho ho ho" without a sufficient backdrop of reindeers to keep them sane. So what this post is instead is me writing a nonworking fake libinput in Python, for the sole purpose of explaining roughly what libinput's architecture looks like. It'll be to libinput what a Duplo car is to a Maserati. Four wheels and something to entertain the kids with but the queue outside the nightclub won't be impressed.

    The target audience is those who need to hack on libinput and for whom the balance of understanding vs total confusion is still shifted towards the latter. So in order to make it easier to associate various bits, here's a description of the main building blocks.

    libinput uses something resembling OOP except that in C you can't have nice things unless what you want is a buffer overflow\n\80xb1001af81a2b1101. Instead, we use opaque structs, each with accessor methods and an unhealthy amount of verbosity. Because Python does have classes, those structs are represented as classes below. This all won't be actual working Python code, I'm just using the syntax.

    Let's get started. First of all, let's create our library interface.

    class Libinput:
        def path_create_context(cls):
            return _LibinputPathContext()

        def udev_create_context(cls):
            return _LibinputUdevContext()

        # dispatch() means: read from all our internal fds and
        # call the dispatch method on anything that has changed
        def dispatch(self):
            for fd in self.epoll_fd.get_changed_fds():
                fd.dispatch()

        # return whatever the next event is
        def get_event(self):
            return self._events.pop(0)

        # the various _notify functions are internal API
        # to pass things up to the context
        def _notify_device_added(self, device):
            ...

        def _notify_device_removed(self, device):
            ...

        def _notify_pointer_motion(self, x, y):
            self._events.append(LibinputEventPointer(x, y))

    class _LibinputPathContext(Libinput):
        def add_device(self, device_node):
            device = LibinputDevice(device_node)
            self._notify_device_added(device)

        def remove_device(self, device_node):
            ...

    class _LibinputUdevContext(Libinput):
        def __init__(self):
            self.udev = udev.context()

        def udev_assign_seat(self, seat_id):
            self.seat_id = seat_id

            for udev_device in self.udev.devices():
                device = LibinputDevice(udev_device.device_node)
                self._notify_device_added(device)

    We have two different modes of initialisation, udev and path. The udev interface is used by Wayland compositors and adds all devices on the given udev seat. The path interface is used by the X.Org driver and adds only one specific device at a time. Both interfaces have the dispatch() and get_event() methods, which are how every caller gets events out of libinput.

    In both cases we create a libinput device from the data and create an event about the new device that bubbles up into the event queue.

    But what really are events? Are they real or just a fidget spinner of our imagination? Well, they're just another object in libinput.

    class LibinputEvent:
        def type(self):
            return self._type

        def context(self):
            return self._libinput

        def device(self):
            return self._device

        def get_pointer_event(self):
            if isinstance(self, LibinputEventPointer):
                return self  # This makes more sense in C where it's a typecast
            return None

        def get_keyboard_event(self):
            if isinstance(self, LibinputEventKeyboard):
                return self  # This makes more sense in C where it's a typecast
            return None

    class LibinputEventPointer(LibinputEvent):
        def time(self):
            return self._time / 1000

        def time_usec(self):
            return self._time

        def dx(self):
            return self._dx

        def absolute_x(self):
            return self._x * self._x_units_per_mm

        def absolute_x_transformed(self, width):
            return self._x * width / self._x_max_value
    You get the gist. Each event is actually an event of a subtype with a few common shared fields and a bunch of type-specific ones. The events often contain some internal value that is calculated on request. For example, the API for the absolute x/y values returns mm, but we store the value in device units instead and convert to mm on request.

    So, what's a device then? Well, just another I-cant-believe-this-is-not-a-class with relatively few surprises:

    class LibinputDevice:
        class Capability(Enum):
            CAP_TOUCH = 2

        def __init__(self, device_node):
            pass  # no-one instantiates this directly

        def name(self):
            return self._name

        def context(self):
            return self._libinput_context

        def udev_device(self):
            return self._udev_device

        def has_capability(self, cap):
            return cap in self._capabilities

    Now we have most of the frontend API in place and you start to see a pattern. This is how all of libinput's API works: you get some opaque read-only objects with a few getters and accessor functions.

    Now let's figure out how to work on the backend. For that, we need something that handles events:

    class EvdevDevice(LibinputDevice):
        def __init__(self, device_node):
            fd = open(device_node)
            super().context.add_fd_to_epoll(fd, self.dispatch)

        def has_quirk(self, quirk):
            return quirk in self.quirks

        def dispatch(self):
            while True:
                data = fd.read(input_event_byte_count)
                if not data:
                    break
                self.interface.dispatch_one_event(data)

        def _configure(self):
            # some devices are adjusted for quirks before we
            # do anything with them
            if self.has_quirk(SOME_QUIRK_NAME):
                ...

            if 'ID_INPUT_TOUCHPAD' in self.udev_device.properties:
                self.interface = EvdevTouchpad()
            elif 'ID_INPUT_SWITCH' in self.udev_device.properties:
                self.interface = EvdevSwitch()
            else:
                self.interface = EvdevFallback()


    class EvdevInterface:
        def dispatch_one_event(self, event):
            ...

    class EvdevTouchpad(EvdevInterface):
        def dispatch_one_event(self, event):
            ...

    class EvdevTablet(EvdevInterface):
        def dispatch_one_event(self, event):
            ...

    class EvdevSwitch(EvdevInterface):
        def dispatch_one_event(self, event):
            ...

    class EvdevFallback(EvdevInterface):
        def dispatch_one_event(self, event):
            ...

    Our evdev device is actually a subclass (well, C, *handwave*) of the public device and its main function is "read things off the device node". It passes that on to a magical interface. Other than that, it's a collection of generic functions that apply to all devices. The interfaces are where most of the real work is done.

    The interface is decided on by the udev type and is where the device-specifics happen. The touchpad interface deals with touchpads, the tablet and switch interfaces with those devices, and the fallback interface handles mice, keyboards and touch devices (i.e. the simple devices).

    Each interface has very device-specific event processing and can be compared to the Xorg synaptics vs wacom vs evdev drivers. If you are fixing a touchpad bug, chances are you only need to care about the touchpad interface.

    The device quirks used above are another simple block:

    class Quirks:
        def __init__(self):
            ...

        def has_quirk(self, device, quirk):
            for file in self.quirks:
                if (quirk.has_match(device.name) or
                        quirk.has_match(device.usbid)):
                    return True
            return False

        def get_quirk_value(self, device, quirk):
            if not self.has_quirk(device, quirk):
                return None

            quirk = self.lookup_quirk(device, quirk)
            if quirk.type == "boolean":
                return bool(quirk.value)
            if quirk.type == "string":
                return str(quirk.value)
            ...

    A system that reads a bunch of .ini files, caches them and returns their values on demand. Those quirks are then used to adjust device behaviour at runtime.
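    For a feel of what those .ini files contain, here is a made-up entry in the style of libinput's quirks files. The section name, the device name and the pressure values are invented for illustration, not taken from a real device:

```ini
# Hypothetical quirks entry: Match* lines select the device,
# Attr*/Model* lines are the quirk values applied to it.
[Example Touchpad Pressure Override]
MatchUdevType=touchpad
MatchName=*Example TouchPad*
AttrPressureRange=100:90
```

    A device matches a section when all of its Match* conditions hold, and the remaining keys are then returned through get_quirk_value().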

    The next building block is the "filter" code, which is the word we use for pointer acceleration. Here too we have a two-layer abstraction with an interface.

    class Filter:
        def dispatch(self, x, y):
            # converts device-unit x/y into normalized units
            return self.interface.dispatch(x, y)

        # the 'accel speed' configuration value
        def set_speed(self, speed):
            return self.interface.set_speed(speed)

        # the 'accel speed' configuration value
        def get_speed(self):
            return self.speed


    class FilterInterface:
        def dispatch(self, x, y):
            ...

    class FilterInterfaceTouchpad(FilterInterface):
        def dispatch(self, x, y):
            ...

    class FilterInterfaceTrackpoint(FilterInterface):
        def dispatch(self, x, y):
            ...

    class FilterInterfaceMouse(FilterInterface):
        def dispatch(self, x, y):
            self.history.push((x, y))
            v = self.calculate_velocity()
            f = self.calculate_factor(v)
            return (x * f, y * f)

        def calculate_velocity(self):
            total = 0
            for delta in self.history:
                total += delta
            velocity = total / timestamp  # as illustration only
            return velocity

        def calculate_factor(self, v):
            # this is where the interesting bit happens,
            # let's assume we have some magic function
            f = v * 1234 / 5678
            return f

    So libinput calls filter_dispatch on whatever filter is configured and passes the result on to the caller. The setup of those filters is handled in the respective evdev interface, similar to this:

    class EvdevFallback:
        def init_accel(self):
            if self.udev_type == 'ID_INPUT_TRACKPOINT':
                self.filter = FilterInterfaceTrackpoint()
            elif self.udev_type == 'ID_INPUT_TOUCHPAD':
                self.filter = FilterInterfaceTouchpad()
            ...

    The advantage of this system is twofold. First, the main libinput code only needs one place where we really care about which acceleration method we have. And second, the acceleration code can be compiled separately for analysis and to generate pretty graphs. See the pointer acceleration docs. Oh, and it also allows us to easily have per-device pointer acceleration methods.
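    To make the calculate_factor() idea concrete, here is a toy flat-then-linear acceleration profile. The threshold and cap constants are invented for illustration and bear no relation to libinput's real acceleration curves:

```python
# Toy acceleration profile: unaccelerated below a velocity threshold,
# then scaling linearly up to a cap. All constants are invented.
def accel_factor(velocity_mm_s, threshold=25.0, cap=1.5):
    """Return a unitless multiplier for a given pointer velocity."""
    if velocity_mm_s <= threshold:
        return 1.0  # slow movements pass through unaccelerated
    # scale linearly above the threshold, capped at `cap`
    factor = 1.0 + (velocity_mm_s - threshold) / threshold
    return min(factor, cap)


def apply_accel(dx, dy, velocity_mm_s):
    """Scale a raw delta by the acceleration factor."""
    f = accel_factor(velocity_mm_s)
    return (dx * f, dy * f)
```

    Plotting accel_factor() over velocity is exactly the kind of analysis the separately compilable filter code enables.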

    Finally, we have one more building block - configuration options. They're a bit different in that they're all similar-ish but only to make switching from one to the next a bit easier.

    class DeviceConfigTap:
        def set_enabled(self, enabled):
            self._enabled = enabled

        def get_enabled(self):
            return self._enabled

        def get_default(self):
            return False


    class DeviceConfigCalibration:
        def set_matrix(self, matrix):
            self._matrix = matrix

        def get_matrix(self):
            return self._matrix

        def get_default(self):
            return [1, 0, 0, 0, 1, 0, 0, 0, 1]

    And then the devices that need one of those slot them into the right pointer in their structs:

    class EvdevFallback:
        def init_calibration(self):
            self.config_calibration = DeviceConfigCalibration()

        def handle_touch(self, x, y):
            if self.config_calibration is not None:
                matrix = self.config_calibration.get_matrix()
                x, y = matrix.multiply(x, y)

            self.context._notify_pointer_abs(x, y)

    And that's basically it, those are the building blocks libinput has. The rest is detail. Lots of it, but if you understand the architecture outline above, you're most of the way there in diving into the details.

    libinput and location-based touch arbitration

    Posted by Peter Hutterer on March 15, 2019 05:58 AM

    One of the features in the soon-to-be-released libinput 1.13 is location-based touch arbitration. Touch arbitration is the process of discarding touch input on a tablet device while a pen is in proximity. Historically, this was provided by the kernel wacom driver but libinput has had userspace touch arbitration for quite a while now, allowing for touch arbitration where the tablet and the touchscreen part are handled by different kernel drivers.

    Basic touch arbitration is relatively simple: when a pen goes into proximity, all touches are ignored. When the pen goes out of proximity, new touches are handled again. There are some extra details (esp. where the kernel handles arbitration too) but let's ignore those for now.

    With libinput 1.13 and in preparation for the Dell Canvas Dial Totem, the touch arbitration can now be limited to a portion of the screen only. On the totem (future patches, not yet merged) that portion is a square slightly larger than the tool itself. On normal tablets, that portion is a rectangle, sized so that it should encompass the user's hand and area around the pen, but not much more. This enables users to use both the pen and touch input at the same time, providing for bimanual interaction (where the GUI itself supports it of course). We use the tilt information of the pen (where available) to guess where the user's hand will be to adjust the rectangle position.
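    A minimal sketch of the idea, with made-up names and rectangle sizes: build an exclusion rectangle around the pen position and discard only the touches that fall inside it.

```python
# Illustration of location-based touch arbitration. The function
# names and half-width/half-height values are invented; libinput's
# real rectangle is sized and positioned using pen tilt as well.
def make_exclusion_rect(pen_x, pen_y, half_w=40.0, half_h=60.0):
    """Rectangle around the pen position, in device coordinates."""
    return (pen_x - half_w, pen_y - half_h, pen_x + half_w, pen_y + half_h)


def touch_is_arbitrated(touch_x, touch_y, rect):
    """True if this touch should be discarded while the pen is in proximity."""
    x1, y1, x2, y2 = rect
    return x1 <= touch_x <= x2 and y1 <= touch_y <= y2
```

    Touches outside the rectangle pass through normally, which is what allows using the other hand on the touchscreen while drawing.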

    There are some heuristics involved and I'm not sure we got all of them right so I encourage you to give it a try and file an issue where it doesn't behave as expected.

    GNOME 3.32 released & coming to Fedora 30

    Posted by Fedora Magazine on March 15, 2019 02:34 AM

    Today, the GNOME project announced the release of GNOME 3.32.

    This release of the GNOME desktop will be the default desktop environment in the upcoming Fedora 30 Workstation. GNOME 3.32 includes a wide range of enhancements, including: new default application icons, a new emoji chooser in the on-screen keyboard, and improved per-app permissions control.

    <figure class="wp-block-image"><figcaption>GNOME 3.32</figcaption></figure>

    New Icons

    GNOME 3.32 features a range of UI tweaks and improvements. Notably, the entire default icon library has been updated and refreshed, featuring more vibrant colours.

    <figure class="wp-block-image"><figcaption>Some of the new icons in GNOME 3.32</figcaption></figure>

    Additionally, the colours of the desktop are tweaked to a brighter colour palette to match the new icons.

    App Menus deprecated

    In GNOME 3, the App Menu was the dropdown that appeared in the top left of the panel, next to the Activities hotspot. As of GNOME 3.32, this UI feature is deprecated, and the core GNOME default applications no longer have App Menus.

    Fractional Scaling

    Previously, the GNOME UI could only scale in whole-number increments. With the wide range of different-DPI screens available, this can leave some displays in a strange middle ground, where the UI is either too small or too large when scaled. GNOME 3.32 provides experimental support for scaling the UI by more granular amounts.

    Better emoji input

    GNOME 3.32 features an updated on-screen keyboard implementation. Most notably, this includes the ability to easily “type” emoji with the on-screen keyboard.

    <figure class="wp-block-image"><figcaption>Emoji on-screen keyboard in GNOME 3.32</figcaption></figure>

    Improved App permissions control

    The new “Application Permissions” in the main settings dialog allows users to view and change permissions for applications.

    <figure class="wp-block-image"></figure>

    Read more about this release

    There are many more changes and enhancements in this major version of GNOME. Check out the release announcement and the release notes from the GNOME Project for more information.

    Screenshots in this post are from the GNOME 3.32 release notes

    Convert Docker Image Output to an HTML Table

    Posted by Adam Young on March 14, 2019 11:17 PM
    docker images | awk '
    BEGIN {print ("<table>")};
    {print ("<tr>")};
    NR==1 {print ("<th>" $1 "</th><th>" $2 "</th><th>" $3, $4 "</th><th>" $5 "</th><th>" $6 "</th>")};
    NR>1 {print ("<td>" $1 "</td><td>" $2 "</td><td>" $3 "</td><td>" $4, $5, $6 "</td><td>" $7, $8 "</td>")};
    {print ("</tr>")};
    END {print ("</table>")};
    '

    Final 503 addendum

    Posted by Stephen Smoogen on March 14, 2019 07:18 PM
    mirrorlist 503's for 2019
    This is a graphical representation of the number of 503s we have had in 2019. The earlier large growth in January/February has dropped down to just one web server, which is probably underpowered to run containers. We will look at taking it out of circulation in the coming weeks.

    Building the Kolla Keystone Container

    Posted by Adam Young on March 14, 2019 03:43 PM

    Kolla has become the primary source of containers for running OpenStack services. Since it has been a while since I tried deliberately running just the Keystone container, I decided to build the Kolla version from scratch and run it.

    UPDATE: Ozz wrote it already, and did it better: http://jaormx.github.io/2017/testing-containerized-openstack-services-with-kolla/

    I already had a clone of the Kolla repo, but if you need one, you can get it by cloning

    git clone git://git.openstack.org/openstack/kolla

    All of the dependencies you need to run the build process are handled by tox. Assuming you can run tox elsewhere, you can use that here, too:

    tox -e py35

    That will run through all the unit tests. They do not take that long.

    To build all of the containers, activate the virtual environment and then use the build tool. That takes quite a while, since a lot of containers are required to run OpenStack.

    $ . .tox/py35/bin/activate
    (py35) [ayoung@ayoungP40 kolla]$ tools/build.py 

    If you want to build just the keystone containers….

     python tools/build.py keystone

    Building this with no base containers cached took me 5 minutes. Delta builds should be much faster.

    Once the build is complete, you will have a bunch of container images defined on your system:

    kolla/centos-binary-keystone 7.0.2 69049739bad6 33 minutes ago 800 MB
    kolla/centos-binary-keystone-fernet 7.0.2 89977265fcbb 33 minutes ago 800 MB
    kolla/centos-binary-keystone-ssh 7.0.2 4b377e854980 33 minutes ago 819 MB
    kolla/centos-binary-barbican-keystone-listener 7.0.2 6265d0acff16 33 minutes ago 732 MB
    kolla/centos-binary-keystone-base 7.0.2 b6d78b9e0769 33 minutes ago 774 MB
    kolla/centos-binary-barbican-base 7.0.2 ccd7b4ff311f 34 minutes ago 706 MB
    kolla/centos-binary-openstack-base 7.0.2 38dbb3c57448 34 minutes ago 671 MB
    kolla/centos-binary-base 7.0.2 177c786e9b01 36 minutes ago 419 MB
    docker.io/centos 7 1e1148e4cc2c 3 months ago 202 MB

    Note that the build instructions live in the git repo under docs.

    New package in Fedora: python-xlsxwriter

    Posted by Rajeesh K Nambiar on March 14, 2019 09:42 AM

    XlsxWriter is a Python module for creating files in xlsx (MS Excel 2007+) format. It is used by certain Python modules that some of our customers need (such as the OCA report_xlsx module).

    This module is available on PyPI but it was not packaged for Fedora. I’ve decided to maintain it in Fedora and created a package review request, which was helpfully reviewed by Robert-André Mauchin.

    The package, providing a Python 3 compatible module, is available for Fedora 28 onwards.

    Moving from fedmsg to fedora-messaging

    Posted by Fedora Community Blog on March 14, 2019 05:51 AM

    The Fedora infrastructure is working on replacing our current message bus, fedmsg, with a new AMQP-based library, fedora-messaging. This is an update on the work currently in progress.

    After deploying a RabbitMQ cluster and bridges to duplicate messages from fedmsg to fedora-messaging and from fedora-messaging to fedmsg, we are now starting the migration of applications to fedora-messaging. After looking at the overall change, we have identified a set of critical applications which are needed for the rawhide gating initiative:

    • Bodhi
    • Koji
    • Pagure (src.fp.o)
    • Anitya
    • the-new-hotness
    • Greenwave
    • waiverDB
    • resultsDB
    • CentOS CI
    • OpenQA

    The reliability of the messages sent or received by these applications is critical for this initiative, that’s why moving to fedora-messaging and AMQP is a prerequisite.

    Team work

    To make significant progress in a short time we set up a small team (pingou, abompard and cverna) and organized our work using a GitHub board, with the intention of doing as much as possible in a week. After a team kickoff call on Monday morning we decided not to focus on more than 2 cards at a time, meaning that we would work in a pair on one task while the other person could make progress on the other card.

    We also tried to focus on what we could achieve in a week and decided to leave the CentOS CI and OpenQA work for later, since these 2 applications require more work upfront.

    What we achieved

    By the end of the week, we had src.fp.o and Koji successfully sending messages using fedora-messaging in staging. While deploying these applications to staging we have also fixed the bridge that replicates the messages from fedora-messaging to fedmsg.

    After submitting a Pull Request to migrate resultsDB to fedora-messaging and getting it merged, we deployed the application in staging, only to find out that the resultsDB virtual machine could not connect to the AMQP broker. We finally opened an infrastructure ticket to get the needed networking modification.

    WaiverDB, Greenwave and Bodhi are still in progress; they all have a PR either opened or merged, and we are still coordinating with the application maintainers to finish this effort and deploy these applications in staging.

    What’s next

    We still have some work to do to complete this effort, but we have made really good progress in only a week. Once the dependencies holding up the cards in progress are cleared, we should spend another week focusing on the deployment of these applications.

    If you would like to help us in this effort, or would like to help with the migration of another application to fedora-messaging, you can reach out on the #fedora-apps IRC channel or the Fedora Infrastructure mailing list.

    Photo by Lewis Ngugi on Unsplash

    The post Moving from fedmsg to fedora-messaging appeared first on Fedora Community Blog.

    The risks of secret punishments in online communities

    Posted by Daniel Pocock on March 14, 2019 12:04 AM

    While the controversy over the integrity of elections in free software communities is significant, a far more serious issue for all communities right now is the spectre of secret punishments and other practices that involve shaming people. Debian has recently experimented with these reckless practices and it would be wise to ensure they are not repeated or replicated in any other community.

    Secret punishments exploit shame to maintain secrecy and avoid controversy. For example, many pedophiles know they can keep offending because shame will keep their victims from talking. There is a close connection between the use of secret punishments and the pursuit of political objectives, for example, isolation of asylum seekers, which is now classified as a form of torture. People in positions of authority see shame as an opportunity to indulge themselves in occasional acts of bullying, hoping their conduct will never be subject to scrutiny and maintaining their otherwise immaculate reputations.

    This reveals an interesting feature of shame: people feel shame whether they did something wrong or not. An innocent 13-year-old victim of a pedophile feels shame. A rogue trader who knows he is guilty feels shame too. It is much the same emotion.

    Not everybody responds the same way however. Consider the recent prosecution of Cardinal Pell in my home town, Melbourne, Australia. I went to Catholic schools and a number of relatives worked in Catholic education, in the administration down the road from St Patrick's Cathedral, where Pell would wander in from time to time for meetings. I used to row past St Kevin's almost every day. I met many people from St Kevin's during university too. It is unusual for me to see Cardinal Pell in this situation and I can't help contemplating people on both sides of the case. Consider one key fact from the trial: of the two boys who were allegedly abused, one has died from a drug overdose and there was no evidence that he ever told anybody about Pell's offenses at any point in his life. Shame prevented him from talking. Yet some victims of this abuse do choose to come forward.

    In the Debian crisis, different people waited different periods of time before they could talk publicly about the way they were used in the demotions experiment. One reason for this is that nobody wants to hurt the project, everybody had made efforts to communicate with the leaders privately many times before it became a public issue. But if we ignore those attempts at private communication, there is also a probability that shame was a factor in remaining silent, not telling anybody else, even though the punishments were either completely inappropriate or way out of proportion to any mistakes.

    Some people may feel I'm being a little bit indulgent linking child sex abuse to online abuse. It turns out, research published in Social Psychology of Education found that psychological impacts of online bullying, which includes shaming, are just as harmful as those from child abuse.

    When operating in an online community, such as a free software organization, we tend to know very little about each other and our wider circumstances. People rarely disclose details of personal tragedy, physical and mental illnesses, pressures in their home or work environment or anything else of an emotional nature. To give one example, well known in the HR world: thirty percent of people will experience a depression at some point during their life. In a community of 1,000 people, it is almost a certainty that every year, some are going to experience a major depressive episode. In fact, for people aged 18-25, it is close to 1 in 10 people.

    If you are a leader in an online community and you decide to use secret punishments as a tool, if you inflict some kind of shame on 10 people each year, what is the chance that one of them might already be unwell and your actions cause them further harm?

    In one of the more extraordinary cases, the father of a 13 year old girl cut off her hair as a punishment. The punishment/shaming was recorded on a video and uploaded online.

    A few days later, she jumped off a bridge.

    It is scary to contemplate how many other members of the free software community may have received a heavy-handed email from the leaders or "anti harassment" teams that may have made them feel shame. Many people have been reading my blogs about my roles in both Debian and as a community representative in another organization. People have confided in me privately about additional incidents, intimidating emails from project leaders, that were not disclosed publicly. Yet there may be even more victims who have not spoken to anybody, or somebody who is tying themselves in knots, unsure how to start a conversation about their "anti harassment" experience.

    In the well known management book One Minute Manager, it is suggested that leaders give people one minute reprimands, finishing with some sort of praise. The type of reprimands and threats people have received from "anti harassment" and safety teams have no resemblance to that, they often contain big lists of perceived failings, they CC a whole bunch of people to add extra humiliation and shame and rather than finishing with praise, they finish with a threat or punishment, to sustain the shame.

    Thinking about it another way, shame is like fat or salt. Small quantities of fat and salt are important in our diets but excessive quantities cause harm. Small quantities of shame may deter us from making mistakes. Large amounts of shame are more likely to do harm than good.

    Reflect on the vast difference between our online communities and real-world environments. In the real world, an employee might simply get a doctor's note and stay away from the office during a period of illness. Their employer could not accidentally punish them because they are not present in the office. In the online world, there are no doctor's notes. Once again, 16 million Americans reported suffering depression in one year, but how many would have put their email account in vacation mode with an auto-response about their condition?

    In the online world, it is a lot easier to hide that stuff, so people do.

    Another striking feature of the Debian scandal is that no due process was followed, even though at least one person had earlier asserted they were dealing with an extraordinary situation in their private life, the leaders made not the slightest attempt to start two-way communications. This rude attitude demonstrates utter contempt for the welfare of the people they interact with.

    Making punishments like this becomes a game of Russian Roulette: most of the time no harm is observed but every now and then, it goes badly wrong, like the girl who jumped off a bridge. It is a reckless game indeed. No free software community would want to be associated with an incident like that.

    I strongly feel that responsible online communities need to denounce the use of punishments and shame, just as most responsible countries have denounced the use of land mines and biological weapons.

    Personally, as I continue to observe the way certain leaders take a flippant and callous attitude to these issues, it leaves me feeling that it is better not to remain associated with those figures until the welfare of all community members becomes a priority. I already decided to cut all ties with FSFE and I've had no regrets about that.

    Fedora 29 : Use Selinux with Firefox.

    Posted by mythcat on March 13, 2019 08:45 PM
    Today I tested SELinux with the Firefox browser. The main purpose was to create a policy for this browser; you can use this example to create your own policies. On Fedora 29 this is easily done. Let's start by installing an important package using the dnf tool.
    [root@desk selinux_001]# dnf install policycoreutils-devel
    Let's see the commands used to create the policy files, starting with firefox.te:
    [mythcat@desk ~]$ mkdir selinux_001
    [mythcat@desk ~]$ cd selinux_001/
    [mythcat@desk selinux_001]$ whereis firefox
    firefox: /usr/bin/firefox /usr/lib64/firefox /etc/firefox /usr/share/man/man1/firefox.1.gz
    [mythcat@desk selinux_001]$ sepolicy generate --init -n firefox /usr/bin/firefox
    nm: /usr/bin/firefox: file format not recognized
    Failed to retrieve rpm info for selinux-policy
    Created the following files:
    /home/mythcat/selinux_001/firefox.te # Type Enforcement file
    /home/mythcat/selinux_001/firefox.if # Interface file
    /home/mythcat/selinux_001/firefox.fc # File Contexts file
    /home/mythcat/selinux_001/firefox_selinux.spec # Spec file
    /home/mythcat/selinux_001/firefox.sh # Setup Script
    [mythcat@desk selinux_001]$ cat firefox.te

    policy_module(firefox, 1.0.0)

    # Declarations

    type firefox_t;
    type firefox_exec_t;
    init_daemon_domain(firefox_t, firefox_exec_t)

    permissive firefox_t;

    # firefox local policy
    allow firefox_t self:fifo_file rw_fifo_file_perms;
    allow firefox_t self:unix_stream_socket create_stream_socket_perms;



    [mythcat@desk selinux_001]$ cat firefox.fc
    /usr/bin/firefox -- gen_context(system_u:object_r:firefox_exec_t,s0)
    I have modified this policy generated by sepolicy by adding my own rules:
    [mythcat@desk selinux_001]$ cat firefox.te
    policy_module(firefox, 1.0.0)

    # Declarations

    type firefox_t;
    type firefox_exec_t;
    init_daemon_domain(firefox_t, firefox_exec_t)

    permissive firefox_t;
    # my rules
    require {
        type unreserved_port_t;
        type http_port_t;
        class tcp_socket { accept listen name_bind name_connect };
    }

    # firefox local policy
    allow firefox_t self:fifo_file rw_fifo_file_perms;
    allow firefox_t self:unix_stream_socket create_stream_socket_perms;

    # my rules
    allow firefox_t http_port_t:tcp_socket { name_bind name_connect };
    allow firefox_t unreserved_port_t:tcp_socket { name_bind name_connect };
    allow firefox_t self:tcp_socket { listen accept };



    I used the following commands to get my own policy:
    [mythcat@desk selinux_001]$ make -f /usr/share/selinux/devel/Makefile
    Compiling targeted firefox module
    /usr/bin/checkmodule: loading policy configuration from tmp/firefox.tmp
    /usr/bin/checkmodule: policy configuration loaded
    /usr/bin/checkmodule: writing binary representation (version 19) to tmp/firefox.mod
    Creating targeted firefox.pp policy package
    rm tmp/firefox.mod tmp/firefox.mod.fc
    [mythcat@desk selinux_001]$ sudo semodule -i firefox.pp
    [sudo] password for mythcat:
    semodule is the tool used to manage SELinux policy modules, including installing, upgrading, listing and removing them. Let's see the result:
    [root@desk selinux_001]# semodule -l | grep firefox

    EPEL: Python34->Python36 Move Happening (Currently in EPEL-testing)

    Posted by Stephen Smoogen on March 13, 2019 02:29 PM
    Over the last 5 days, Troy Dawson, Jeroen van Meeuwen, Carl W George, and several helpers have moved nearly all of the python34 packages over to python36 in EPEL-7. They are being included in 6 Bodhi pushes because of a limitation in Bodhi on the text size of packages in an include.

    The current day for these package groups to move into EPEL regular is April 2nd. We would like to have all tests we find in the next week or so also added so that the updates can occur in a large group without too much breakage.


    Please heavily test them by doing the following:

    Stage 1 Testing

    1. Install RHEL, CentOS, or Scientific Linux 7 onto a TEST system.
    2. Install or enable the EPEL repository for this system
    3. Install various packages you would normally use
    4. yum --enablerepo=epel-testing update
    5. Report problems to epel-devel@lists.fedoraproject.org

    Stage 2 Testing

    1. Check for any updated testing instructions on this blog or EPEL-devel list.
    2. Install RHEL, CentOS, or Scientific Linux 7 onto a TEST system.
    3. Install or enable the EPEL repository for this system
    4. yum install python34 <other-python34-packages>
    5. yum --enablerepo=epel-testing update
    6. Report problems to epel-devel@lists.fedoraproject.org

    Stage 3 Testing

    1. Check for any updated testing instructions on this blog or EPEL-devel list.
    2. Install RHEL, CentOS, or Scientific Linux 7 onto a TEST system.
    3. Install or enable the EPEL repository for this system
    4. yum install python36 <other-python36-packages>
    5. yum --enablerepo=epel-testing update
    6. Report problems to epel-devel@lists.fedoraproject.org
    This should cover the three most common scenarios. Other scenarios exist and will require some sort of intervention to work around. We will outline them as they come up.

    Many Many Thanks go to Troy, Jeroen, Carl, and the many people on the python team who made a copr and did many of the initial patches to make this possible.

    Libravatar has a new home

    Posted by Fedora Magazine on March 13, 2019 09:00 AM

    Libravatar is a free and open source service that anyone can use to host and share an avatar (profile picture) to other websites. Read on for some news about the service and its relevance to the Fedora Project.

    As defined in the project’s blog, "the Libravatar project is part of a movement working to give control back to people, away from centralized services and the organizations running them". It addresses a simple problem: putting a face on an email address.

    The project originated from the desire to have a free (as in freedom) alternative to Gravatar, giving users the possibility to use a hosted service or to run their own instance of the service and have full control of their data.

    In April 2018 the Libravatar project announced that the service would be shutting down. The service was being used by many communities, such as Fedora, Mozilla and the Linux kernel, to name a few. The announcement triggered a big response from the community, with people interested in and willing to help keep it running.

    After some coordination, and a complete rewrite of the application, the launch of the new service was announced on Tuesday, 19 February 2019. The Fedora Project is proud to sponsor Libravatar by providing the infrastructure needed to run the service.