Fedora People

FPgM report: 2018–38

Posted by Fedora Community Blog on September 21, 2018 04:38 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 29 Beta RC5 was declared Go and will release on 25 September.

I’ve set up weekly office hours in #fedora-meeting. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Help requests


Upcoming meetings


Fedora 29

  • Beta freeze is underway through September 18.
  • Beta release date is set for 25 September.

Fedora 29 Status


The numbers below reflect the current state of Bugzilla tracking bugs.

  • ON_QA: 39
  • Total: 41

Fedora 30 Changes

Fedora 30 includes a Change that will make ambiguous Python shebangs an error. A list of failing builds is available on Taskotron.

The post FPgM report: 2018–38 appeared first on Fedora Community Blog.

A healthcare IT foundation built on gooey clay

Posted by Harish Pillay 9v1hp on September 21, 2018 02:57 PM

Today, there was a report from the Solicitor General of Singapore about the data breach of the SingHealth systems that happened in July.

These systems have been in place for many years. They run almost exclusively on Microsoft Windows, along with a mix of other proprietary software including Citrix and Allscripts. The article referred to above failed to highlight that the compromised “end-user workstation” was a Windows machine. That crucial piece of information always gets left out of these reports of breaches.

I have had the privilege of being part of an IT advisory committee for a local hospital since about 2004 (that committee was disbanded a couple of years ago, by the way).

Every year, the advisory committee would receive budgetary proposals for updates, new versions, and so on of the software, for consideration and possible approval. Almost always, I would be the exception on the committee in questioning the continued use of expensive proprietary software for these healthcare systems (a contributory factor in rising healthcare costs). But because I was the lone contrarian voice, the vote would inevitably be to approve and hence continue, in my opinion, the wasteful path of spending enormous amounts of money on these proprietary systems.

I did try, many times, to propose using open source solutions such as, for example, virtualization with KVM. This is already built into the Linux kernel, and you can get full commercial support for it from Red Hat (disclosure: I work for Red Hat). You pay a subscription, we make sure that the systems are running securely (via SELinux for a start), and the enterprise can focus on its core business. But no, they continued with VMware.

I did propose open source solutions like OpenEMR, among many other very viable options, for the National Electronic Medical Records system – but none of them were accepted. (It has been brought to my attention that there are plans to mandate that private sector healthcare providers use the NEHR. There is considerable opposition to it, both because of the hassle (from their point of view) and the added costs, since the solution is proprietary and expensive.)

There were some glimpses of hope in the early years of being on the committee, but it was quickly snuffed out because the “powers that be” did not think open source solutions would be “good enough”. And open source solutions are still not accepted as part of the healthcare IT architecture.

Why would that be the case?

Part of the reason is that decision makers (then and now) only have experience dealing with proprietary vendor solutions. Some proprietary offerings might genuinely be the only ones available, where the open source world has not created equivalent or better alternatives. But even where there are good enough or superior open source offerings, they would never be considered – “Rather go with the devil I know, than the devil I don’t know. After all, this is only a job. When I leave, it is someone else’s problem.” (Yes, I am paraphrasing many conversations, and not only from the healthcare sector.)

I recall a project that I was involved with – before becoming a Red Hatter – to build a “computer on wheels” solution to help with blood collection. As part of that solution, there was a need to check the particulars of the patient the nurse was taking samples from. That patient info was stored on an admission system that did not provide any means for remote, API-based queries. The vendor of that system wanted tens of thousands of dollars just to allow the query to happen. Daylight robbery. I worked around it – I did screen scraping to extract the relevant information.

Healthcare IT providers look at healthcare systems as a cash cow and want to milk it to the fullest extent possible (the end consumer bears the cost in the end).

Add to that the dearth of technical IT skills within the healthcare providers, and you quickly fall into a vendor lock-in scenario where the healthcare systems are at the total mercy of the proprietary vendors.

Singapore is not unique at all. This is a global problem.

Singapore, however, has the potential to break out of this dismal state, if only there is technical, managerial, and political leadership in the healthcare system. The type of leadership that would actively pursue, by all means possible, making healthcare IT low cost yet supportable and reliable and, more importantly, able to create a domestic ecosystem to support it (not via Government-linked companies).

I did propose many times to create skunkworks projects and/or run hackathons to create solutions using open source tools to seed the next generation of local solutions providers. As I write this, it has not happened.

To compound the lack of thought leadership, the push in the 2000s to “outsource IT” meant that what remaining technically skilled people there were got shortchanged as the work went to these contract providers (some of these skilled people were transferred to the outsourcee firms, but left shortly after because it was just BS).

This also meant that, over time, the various entities who outsourced IT became mere relationship managers for the outsourcee companies. It is not in the interest of the outsourcee companies to propose solutions that could lower the overall cost, as that could affect the outsourcee’s revenue model. So you have a catch-22: no in-house IT/architecture skills, and no interest at all on the part of the outsourcees in proposing lower cost and perhaps better solutions.

I would be happy, if asked, to put together a set of solutions that would steadily address all of the healthcare IT requirements. I want this to then trigger the creation of a local ecosystem of companies that can drive these solutions, not only for Singapore’s own consumption but also to export globally.

We have the smarts to do this. The technical community of open source developers is, I am very confident, able to rise to the challenge. We need political thought leadership to make this so.

Give me the new hospital in Woodlands to make the solutions work. I want to be able to do as much of it using commercially supported open source products (see this for a discussion of open source projects vs open source products), and build a whole suite of supportable open source solutions that are open to the whole world to innovate upon. It would be wonderful to see https://health.gov.sg/opensource/ (no it does not exist yet).

There are plenty of ancient, leaky, and crufty systems in the current healthcare IT systems locally. We need to make a clean break from it for the future.

The Smart Nation begs for it. 

Dr Vivian Balakrishnan said the following at GeekCamp SG 2015 (video):

I believe in a #SmartNation with an open source society and immense opportunities; where everyone can share, participate and co-create solutions to enrich and improve quality of life for ourselves and our fellow Singaporeans.

And for completeness, the actual post is here (it is a public page; i.e., no account needed):

https://www.facebook.com/Vivian.Balakrishnan.Sg/posts/10153017297841207

I am ready.  Please join me.

Software Freedom Day – SFD

Posted by Solanch69 on September 21, 2018 03:51 AM

This is my summary of the invitation to the Software Freedom Day held in Ica in 2018.



CONECIT 2018

Posted by Solanch69 on September 21, 2018 03:44 AM

The National Congress of Computing, Innovation and Technology Students (CONECIT) is an annual event held in Peru. Attendees have the opportunity to enjoy simultaneous workshops and talks in different auditoriums, on a range of topics; this year it took place in Iquitos, in the department of Loreto. On this occasion, the representatives of Fedora Perú presented… Continue reading CONECIT 2018

Resigning as the FSFE Fellowship's representative

Posted by Daniel Pocock on September 20, 2018 05:15 PM

I've recently sent the following email to fellows; I'm posting it here for the benefit of the wider community and also for any fellows who didn't receive the email.

Dear fellows,

Given the decline of the Fellowship and FSFE's migration of fellows into a supporter program, I no longer feel that there is any further benefit that a representative can offer to fellows.

With recent blogs, I've made a final effort to fulfill my obligations to keep you informed. I hope fellows have a better understanding of who we are and can engage directly with FSFE without a representative. Fellows who want to remain engaged with FSFE are encouraged to work through your local groups and coordinators and attend the annual general meeting in Berlin on 7 October as active participation is the best way to keep an organization on track.

This resignation is not a response to any other recent events. From a logical perspective, if the Fellowship is going to evolve out of a situation like this, it is in the hands of local leaders and fellowship groups; it is no longer a task for a single representative.

There are many positive experiences I've had working with people in the FSFE community and I am also very grateful to FSFE for those instances where I have been supported in activities for free software.

Going forward, leaving this role will also free up time and resources for other free software projects that I am engaged in.

I'd like to thank all those of you who trusted me to represent you and supported me in this role during such a challenging time for the Fellowship.


Daniel Pocock

Submit a Fedora talk to DevConf.cz 2019

Posted by Fedora Community Blog on September 20, 2018 10:00 AM
DevConf banner showing buildings in Brno with fireworks in the background

DevConf.cz 2019 is the free, Red Hat sponsored community conference for open source contributors. Developers, admins, DevOps engineers, testers, documentation writers and other contributors to Linux, middleware, virtualization, storage, cloud, and mobile technologies will meet in Brno, Czechia January 25-27, 2019. Join Fedora and other FLOSS communities to sync, share, and hack on upstream projects together.

Ready to submit your proposal? The CfP is now open!

Looking for ideas? Check out this year’s primary themes on the DevConf.cz website. Some themes include IoT, kernel, cloud & containers, and — of course — Fedora. You can also look at last year’s schedule for inspiration. We’d like to have as many high-quality and useful talks about Fedora or about other technologies on the Fedora platform as possible.

Important DevConf.cz dates

  • CfP closes: October 26, 2018
  • Accepted speakers confirmation: November 12, 2018
  • Event dates: Friday January 25 to Sunday January 27, 2019

The post Submit a Fedora talk to DevConf.cz 2019 appeared first on Fedora Community Blog.

Speeding up AppStream: mmap’ing XML using libxmlb

Posted by Richard Hughes on September 20, 2018 09:27 AM

AppStream and the related AppData are XML formats that have been adopted by thousands of upstream projects and are being used in about a dozen different client programs. The AppStream metadata shipped in Fedora is currently a huge 13Mb XML file, which with gzip compresses down to a more reasonable 3.6Mb. AppStream is awesome; it provides translations of lots of useful data into basically all languages and includes screenshots for almost everything. GNOME Software is built around AppStream, and we even use a slightly extended version of the same XML format to ship firmware update metadata from the LVFS to fwupd.

XML does have two giant weaknesses. The first is that you have to decompress and then parse the files – which might include all the ~300 tiny AppData files as well as the distro-provided AppStream files, if you want to list installed applications not provided by the distro. Seeking lots of small files isn’t so slow on an SSD, and loading+decompressing a small file is actually quicker than loading an uncompressed larger file. Parsing an XML file typically means you set up some callbacks, which then get called for every start tag, text section, then end tag – so for a 13Mb XML document that’s nested very deeply you have to do a lot of callbacks. This means you have to process the description of GIMP in every language before you can even see if Shotwell exists at all.
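That callback-per-element behaviour can be sketched in a few lines of Python (the tiny document below is a made-up stand-in for real AppStream data):

```python
# Minimal sketch of event-driven (SAX-style) XML parsing: the parser fires a
# callback for every start tag and end tag, so a large, deeply nested document
# means a very large number of calls even if you only care about one element.
import xml.parsers.expat

events = []

def start(name, attrs):
    events.append(("start", name))

def end(name):
    events.append(("end", name))

p = xml.parsers.expat.ParserCreate()
p.StartElementHandler = start
p.EndElementHandler = end
p.Parse("<components><component><id>gimp.desktop</id></component></components>", True)

# Three elements produced six callbacks; scale that up to a 13Mb document.
print(len(events))
```

Every element is visited in document order, which is why you cannot know whether Shotwell exists until you have churned through everything before it.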

The typical way of parsing XML involves creating a “node tree” during the parse. This allows you to treat the XML document as a Document Object Model (DOM), which lets you navigate the tree and parse the contents in an object oriented way. This means you typically allocate the nodes themselves on the heap, plus copies of all the string data. AsNode in libappstream-glib has a few tricks to reduce RSS usage after parsing, which include:

  • Interning common element names like description, p, ul, li
  • Freeing all the nodes, but retaining all the node data
  • Ignoring node data for languages you don’t understand
  • Reference counting the strings from the nodes into the various appstream-glib GObjects

This still has two drawbacks: we need to store in hot memory all the screenshot URLs of all the apps you’re never going to search for, and we also need to parse all the long translated description data just to find out if gimp.desktop is actually installable. Deduplicating strings at runtime takes nontrivial amounts of CPU and means we build a huge hash table that uses nearly as much RSS as we save by deduplicating.

On a modern system, parsing ~300 files takes less than a second, and the total RSS is only a few tens of Mb – which is fine, right? Except on resource-constrained machines it takes 20+ seconds to start, and 40Mb is nearly 10% of the total memory available on the system. We have exactly the same problem with fwupd, where we get one giant file from the LVFS, all of which gets stored in RSS even though you never have the hardware that it matches against. Slow starting of fwupd and gnome-software is one of the reasons they stay resident, and don’t shut down on idle and restart when required.

We can do better.

We do need to keep the source format, but that doesn’t mean we can’t create a managed cache to do some clever things. Traditionally I’ve been quite vocal against squashing structured XML data into databases like sqlite and Xapian as it’s like pushing a square peg into a round hole, and forces you to think like a database doing 10 level nested joins to query some simple thing. What we want to use is something like XPath, where you can query data using the XML structure itself.

We also want to be able to navigate the XML document as if it were a DOM, i.e. be able to jump from one node to its sibling without parsing all the great-great-great-grandchild nodes to get there. This means storing the offset to the sibling in a binary file.
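As a toy illustration of that idea (an invented record layout, not libxmlb's actual on-disk format), each node can store the offset of its next sibling so a reader skips the whole subtree in one jump:

```python
# Toy binary node records: (name_len, next_sibling_offset, name bytes).
# Storing the sibling offset lets a reader jump over all of a node's
# descendants without parsing them. This layout is invented for illustration.
import struct

def pack_node(name, sibling_offset):
    data = name.encode()
    return struct.pack("<II", len(data), sibling_offset) + data

# Build two sibling nodes; the first records where the second begins.
first = pack_node("component", 0)           # placeholder offset
second = pack_node("component", 0)
first = pack_node("component", len(first))  # real offset of the next sibling
blob = first + second

# Reader: decode the first header and seek straight to its sibling.
name_len, sib = struct.unpack_from("<II", blob, 0)
name2_len, _ = struct.unpack_from("<II", blob, sib)
sibling_name = blob[sib + 8 : sib + 8 + name2_len].decode()
print(sibling_name)
```

A real cache would of course also record child and attribute offsets, but the principle is the same: navigation becomes pointer arithmetic instead of parsing.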

If we’re creating a cache, we might as well do the string deduplication at creation time once, rather than every time we load the data. This has the added benefit in that we’re converting the string data from variable length strings that you compare using strcmp() to quarks that you can compare just by checking two integers. This is much faster, as any SAT solver will tell you. If we’re storing a string table, we can also store the NUL byte. This seems wasteful at first, but has one huge advantage – you can mmap() the string table. In fact, you can mmap the entire cache. If you order the string table in a sensible way then you store all the related data in one block (e.g. the <id> values) so that you don’t jump all over the cache invalidating almost everything just for a common query. mmap’ing the strings means you can avoid strdup()ing every string just in case; in the case of memory pressure the kernel automatically reclaims the memory, and the next time automatically loads it from disk as required. It’s almost magic.
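A minimal sketch of such a NUL-separated, mmap()able string table might look like this (a simplification for illustration, not the real cache format):

```python
# Write a NUL-terminated string table to disk, mmap it back, and read an
# entry without copying the whole table -- the kernel can reclaim the pages
# under memory pressure and reload them from disk as required.
import mmap, os, tempfile

strings = ["gimp.desktop", "firefox.desktop", "shotwell.desktop"]
table = b"".join(s.encode() + b"\0" for s in strings)

fd, path = tempfile.mkstemp()
os.write(fd, table)
os.close(fd)

with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
    # A compile-time index would normally store these offsets; here we just
    # scan for one entry and read up to its NUL terminator.
    off = m.find(b"firefox.desktop\0")
    end = m.find(b"\0", off)
    result = m[off:end].decode()
    m.close()
os.unlink(path)

print(result)
```

Because the NUL byte is stored, a C consumer could hand out pointers directly into the mapping with no strdup() at all.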

I’ve spent the last few days prototyping a library, which is called libxmlb until someone comes up with a better name. I’ve got a test branch of fwupd that I’ve ported from libappstream-glib and I’m happy to say that RSS has reduced from 3Mb (peak 3.61Mb) to 1Mb (peak 1.07Mb) and the startup time has gone from 280ms to 250ms. Unless I’ve missed something drastic I’m going to port gnome-software too, and will expect even bigger savings as the amount of XML is two orders of magnitude larger.

So, how do I use this thing? First, let’s create a baseline doing things the old way:

$ time appstream-util search gimp.desktop
real	0m0.645s
user	0m0.800s
sys	0m0.184s

To create a binary cache:

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.497s
user	0m0.453s
sys	0m0.028s

$ time xb-tool compile appstream.xmlb /usr/share/app-info/xmls/* /usr/share/appdata/* /usr/share/metainfo/*
real	0m0.016s
user	0m0.004s
sys	0m0.006s

Notice the second time it compiled nearly instantly, as none of the filename or modification timestamps of the sources changed. This is exactly what programs would do every time they are launched.
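The “rebuild only when sources change” behaviour boils down to a cheap modification-time check, which can be sketched like this (a generic approximation, not xb-tool's actual logic, which also tracks filenames):

```python
# Rebuild a cache only when it is missing or older than any of its sources --
# the same cheap mtime check that lets the second compile return instantly.
import os, tempfile, time

def cache_is_fresh(cache, sources):
    if not os.path.exists(cache):
        return False
    cache_mtime = os.path.getmtime(cache)
    return all(os.path.getmtime(s) <= cache_mtime for s in sources)

tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "appdata.xml")       # hypothetical source file
cache = os.path.join(tmp, "appstream.xmlb")  # hypothetical cache file

with open(src, "w") as f:
    f.write("<component/>")
print(cache_is_fresh(cache, [src]))   # False: no cache exists yet

with open(cache, "w") as f:
    f.write("cache")
os.utime(cache, (time.time() + 1, time.time() + 1))  # ensure a newer mtime
print(cache_is_fresh(cache, [src]))   # True: cache newer than all sources
```

When the check passes, the program can mmap the existing cache immediately and skip parsing entirely.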

$ df -h appstream.xmlb
4.2M	appstream.xmlb

$ time xb-tool query appstream.xmlb "components/component[@type=desktop]/id[firefox.desktop]"
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
RESULT: <id>firefox.desktop</id>
real	0m0.008s
user	0m0.007s
sys	0m0.002s

8ms includes the time to load the file, search for all the components that match the query and the time to export the XML. You get three results as there’s one AppData file, one entry in the distro AppStream, and an extra one shipped by Fedora to make Firefox featured in gnome-software. You can see the whole XML component of each result by appending /.. to the query. Unlike appstream-glib, libxmlb doesn’t try to merge components – which makes it much less magic, and a whole lot simpler.

Some questions answered:

  • Why not just use a GVariant blob?: I did initially, and the cache was huge. The deeply nested structure was packed inefficiently as you have to assume everything is a hash table of a{sv}. It was also slow to load; not much faster than just parsing the XML. It also wasn’t possible to implement the zero-copy XPath queries this way.
  • Is this API and ABI stable?: Not yet; it will be as soon as gnome-software is ported.
  • You implemented XPath in c‽: No, only a tiny subset. See the README.md

Comments welcome.

[F29] Take part in the Silverblue Test Day

Posted by Charles-Antoine Couret on September 20, 2018 07:10 AM

Today, Thursday 20 September, is a day dedicated to one specific test: Silverblue. During the development cycle, the QA team dedicates a few days to particular components or new features in order to surface as many problems as possible on the subject.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What does this test consist of?

Silverblue is the code name for an Atomic-flavoured Fedora Workstation. Until now, only the Cloud variant benefited from this approach.

The objective of this version is, in short, to offer a Fedora Workstation built on different foundations from the traditional edition. The goal is for applications to run in containers via Kubernetes or Flatpak, or to be managed via rpm-ostree. The latter versions the system across package installations and updates; in case of a problem, it is easy to ask the system to roll back to a known-stable state. The system also becomes mostly read-only, which improves its reliability and security. That security, as in a traditional Fedora, is supervised by SELinux.

The tests to run are:

  • Boot and log in without errors;
  • Start and stop services such as the SSH server;
  • Check that SELinux is enabled;
  • Check whether GNOME Software sends update notifications;
  • Check that GNOME Software works correctly: install or remove packages;
  • Install a piece of software as a Flatpak.

These tests are fairly quick and easy to carry out.

How can you take part?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes the arrangements for the day.

If you need help while running the tests, feel free to drop by IRC to get a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it needs to be reported on Bugzilla. If you don't know how, feel free to consult the corresponding documentation.

Finally, even though one day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

Security Embargos at Red Hat

Posted by Red Hat Security on September 19, 2018 03:00 PM

The software security industry uses the term Embargo to describe the period of time that a security flaw is known privately, prior to a deadline, after which time the details become known to the public. There are no concrete rules for handling embargoed security flaws, but Red Hat uses some industry standard guidelines on how we handle them.

When an issue is under embargo, Red Hat cannot share information about that issue prior to it becoming public after an agreed upon deadline. It is likely that any software project will have to deal with an embargoed security flaw at some point, and this is often the case for Red Hat.

An embargoed flaw is easiest described as an undisclosed security bug; something that is not public information. Generally the audience that knows about an embargoed security issue is very small. This is so that the bug can be fixed prior to a public announcement. Some security bugs are serious enough that it's in the best interest of all parties to not make the information public before vendors can prepare a fix. Otherwise, customers may be left with publicly-disclosed vulnerabilities but no solutions to mitigate them.

Each project, researcher, and distribution channel handles things a little differently, but the same principles still apply. The order is usually:

  • Flaw discovered
  • Flaw reported to vendor
  • Vendor responds
  • Resolution determined
  • Public announcement

Of course, none of this is set in stone and things (such as to whom the flaw is reported, or communications between a vendor and an upstream open source project) can change. The most important thing to remember is that all who have knowledge of an embargo need to be discreet when dealing with embargoed security flaws.

Q: We reported a flaw to Red Hat but I think some info may have been shared publicly. What should we do?
A: Contact Red Hat Product Security at secalert@redhat.com. We will work with you to assess any potential leaks and determine the best way forward.

It's not uncommon for a flaw to be reported to a 3rd party before the news makes its way to Red Hat or upstream. This can be through a distribution channel, a security research company, a group like CERT, even another corporation that works on open source software. This means that whilst Red Hat may be privy to an embargoed bug, the conditions of that embargo may be set by external parties.

Public disclosure of a security flaw will usually happen on a predetermined date and time. These deadlines are important as the security community operates 24 hours a day.

Contact Red Hat Product Security should you require additional help with security embargoes.


Test Day: Fedora Silverblue

Posted by Fedora Community Blog on September 19, 2018 11:41 AM
Fedora Test Days help provide concentrated testing efforts on important software

Why test Fedora Silverblue

Fedora Silverblue is a new variant of Fedora Workstation with rpm-ostree at its core to provide fully atomic upgrades. Furthermore, Fedora Silverblue is immutable and upgrades as a whole, providing easy rollbacks from updates if something goes wrong. Fedora Silverblue is great for developers using Fedora with good support for container-focused workflows.

Additionally, Fedora Silverblue delivers desktop applications as Flatpaks. This provides better isolation/sandboxing of applications, and streamlines updating applications — Flatpaks can be safely updated without reboot.


On September 20, 2018, Team Silverblue and Fedora QA are holding a Test Day for users to try out and test this new Fedora Workstation variant.


The wiki page for Silverblue has a lot of good information on what and how to test. After you’ve done some testing, you can log your results on the test day results page.
You can also file bugs in the Silverblue issue tracker.

Details are available on the wiki page. We need your help, so please share this post with others.

The post Test Day: Fedora Silverblue appeared first on Fedora Community Blog.

Understand Fedora memory usage with top

Posted by Fedora Magazine on September 19, 2018 08:00 AM

Have you used the top utility in a terminal to see memory usage on your Fedora system? If so, you might be surprised to see some of the numbers there. It might look like a lot more memory is consumed than your system has available. This article will explain a little more about memory usage, and how to read these numbers.

Memory usage in real terms

The way the operating system (OS) uses memory may not be self-evident. In fact, some ingenious, behind-the-scenes techniques are at play. They help your OS use memory more efficiently, without involving you.

Most applications are not self contained. Instead, each relies on sets of functions collected in libraries. These libraries are also installed on the system. In Fedora, the RPM packaging system ensures that when you install an app, any libraries on which it relies are installed, too.

When an app runs, the OS doesn’t necessarily load all the information it uses into real memory. Instead, it builds a map to the storage where that code is stored, called virtual memory. The OS then loads only the parts it needs. When it no longer needs portions of memory, it might release or swap them out as appropriate.

This means an app can map a very large amount of virtual memory, while using less real memory on the system at one time. It might even map more RAM than the system has available! In fact, across a whole OS that’s often the case.

In addition, related applications may rely on the same libraries. The Linux kernel in your Fedora system often shares memory between applications. It doesn’t need to load multiple copies of the same library for related apps. This works similarly for separate instances of the same app, too.

Without understanding these details, the output of the top application can be confusing. The following example will clarify this view into memory usage.

Viewing memory usage in top

If you haven’t tried yet, open a terminal and run the top command to see some output. Hit Shift+M to see the list sorted by memory usage. Your display may look slightly different than this example from a running Fedora Workstation:

There are three columns showing memory usage to examine: VIRT, RES, and SHR. The measurements are currently shown in kilobytes (KB).

The VIRT column is the virtual memory mapped for this process. Recall from the earlier description that virtual memory is not actual RAM consumed. For example, the GNOME Shell process gnome-shell is not actually consuming over 3.1 gigabytes of actual RAM. However, it’s built on a number of lower and higher level libraries. The system must map each of those to ensure they can be loaded when necessary.

The RES column shows you how much actual (resident) memory is consumed by the app. In the case of GNOME Shell, that’s about 180788 KB. The example system has roughly 7704 MB of physical memory, which is why the memory usage shows up as 2.3%.

However, of that number, at least 88212 KB is shared memory, shown in the SHR column. This memory might be, for example, library functions that other apps also use. This means the GNOME Shell is using about 92 MB on its own not shared with other processes. Notice that other apps in the example share an even higher percentage of their resident memory. In some apps, the shared portion is the vast majority of the memory usage.
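Using the example numbers above, the arithmetic can be checked directly (a back-of-the-envelope sketch; real usage varies from moment to moment):

```python
# Check the example numbers: RES as a share of physical memory, and the
# private (non-shared) portion of GNOME Shell's resident memory.
res_kb = 180788      # RES column, in KB
shr_kb = 88212       # SHR column, in KB
total_mb = 7704      # physical memory, in MB

percent = res_kb / 1024 / total_mb * 100
private_mb = (res_kb - shr_kb) / 1000

print(round(percent, 1))      # matches top's 2.3 %MEM figure
print(round(private_mb, 1))   # about 92 MB private to the process
```

Subtracting SHR from RES like this gives a rough upper bound on what the process uses on its own, which is why the shared column matters so much when judging an app's real footprint.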

There is a wrinkle here, which is that sometimes processes communicate with each other via memory. That memory is also shared, but can’t necessarily be detected by a utility like top. So yes — even the above clarifications still have some uncertainty!

A note about swap

Your system has another facility it uses to store information, which is swap. Typically this is an area of slower storage (like a hard disk). If the physical memory on the system fills up as needs increase, the OS looks for portions of memory that haven’t been needed in a while. It writes them out to the swap area, where they sit until needed later.

Therefore, prolonged, high swap usage usually means a system is suffering from too little memory for its demands. Sometimes an errant application may be at fault. Or, if you see this often on your system, consider upgrading your machine’s memory, or restricting what you run.

Photo courtesy of Stig Nygaard, via Flickr (CC BY 2.0).

Delivering cron and logwatch emails to gmail in RHEL

Posted by Lukas "lzap" Zapletal on September 19, 2018 12:00 AM

One of my servers has no real DNS name and no MX record, and I can hardly configure postfix for e-mail delivery there. While relaying e-mails through another SMTP server is an option, you don’t actually need to configure an MTA to get emails delivered from cron and logwatch. You can use the MUA called mailx to do the job.

The goal is to avoid the more complex configuration of postfix or (jeeeez) sendmail, so this is an alternative approach. I am not telling you this is the best thing you could do; it just works for a few of my Linux servers. This will work on both RHEL6 and RHEL7, and probably on older or newer versions too. And of course on CentOS as well.

Vixie cron, the default cron in RHEL, uses the sendmail command for all emails. This is actually part of the postfix package, and delivery is handled by the MTA, which is exactly what I wanted to avoid. In this tutorial, we will configure vixie cron on RHEL7 to send e-mails via the mailx user agent. First of all, get mailx installed:

# yum -y install mailx

Then edit either /etc/mail.rc or /root/.mailrc as follows:

# cat /root/.mailrc
set name="Server1234"
set from="username@gmail.com"
set smtp=smtps://smtp.gmail.com
set smtp-auth=login
set smtp-auth-user=username@gmail.com
set smtp-auth-password=mysecretpassword
set ssl-verify=ignore
set nss-config-dir=/etc/pki/nssdb

Make sure the from address is the same as the smtp-auth-user address; gmail servers will insist on this. The server certificate is ignored here; you may want to install it into the NSS database instead. We are ready to send a test e-mail:

# mailx -r username@gmail.com username@gmail.com
Subject: Test

This is a test

There will be a warning on the standard error about an unknown certificate. I suggest putting the Google server CA into the NSS database, but the warning is harmless and you can keep it as is if you don't mind man-in-the-middle attacks.

Error in certificate: Peer's certificate issuer is not recognized.

Now, create a wrapper script that will explicitly set from and to addresses:

# cat /usr/local/sbin/mailx-r
#!/bin/sh
exec mailx -r username@gmail.com username@gmail.com

Make sure the script is executable. Finally, set it via crond command line option:

# cat /etc/sysconfig/crond
CRONDARGS="-m /usr/local/sbin/mailx-r"

And restart crond:

# service crond restart

You are now receiving e-mails from cron, congratulations. The next step I usually do is installing logwatch. Since it also uses sendmail, we want to disable that and run logwatch from our own cron script, feeding the output to the mailx command:

# yum -y install logwatch

Disable the built-in daily report, because it uses sendmail. While it might be possible to change this via some configuration option, I actually like my very own cron script.

# cat /etc/logwatch/conf/logwatch.conf
DailyReport = no

Now, create your own script and feed the output into mailx. For a weekly report do something like:

# cat /etc/cron.weekly/logwatch
logwatch --print --range 'between -7 days and today' | mailx -s "Logwatch from XYZ" -r username@gmail.com username@gmail.com 2>/dev/null

For a daily report do this:

# cat /etc/cron.daily/logwatch
logwatch --print --range yesterday | mailx -s "Logwatch from XYZ" -r username@gmail.com username@gmail.com 2>/dev/null

Make sure the cron script is executable and test it first. That’s all, enjoy all the e-mails!
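
The "make it executable" step is just a chmod plus a quick check; here it is demonstrated on a scratch copy under /tmp so it is safe to run anywhere (the real path is /etc/cron.daily/logwatch or /etc/cron.weekly/logwatch):

```shell
# Create a scratch copy of the daily cron script and mark it executable
cat > /tmp/logwatch-demo <<'EOF'
#!/bin/sh
logwatch --print --range yesterday | mailx -s "Logwatch from XYZ" -r username@gmail.com username@gmail.com 2>/dev/null
EOF
chmod +x /tmp/logwatch-demo
test -x /tmp/logwatch-demo && echo "executable"
```

The same chmod +x applies to the real cron scripts before their first scheduled run.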

Cockpit 178

Posted by Cockpit Project on September 19, 2018 12:00 AM

Cockpit is the modern Linux admin interface. We release regularly. Here are the release notes from version 178.

KubeVirt support has been removed

Cockpit ships cockpit-kubernetes for managing Kubernetes and OpenShift deployments. Since its introduction, important points have come to light:

  • The OpenShift Console is adding support for KubeVirt. It will become the place for managing KubeVirt-based VMs that run in OpenShift.

  • cockpit-kubernetes has been in maintenance mode for a while. It is difficult to support, as it uses a very old version of Angular.

  • We haven’t been able to build a recent kubevirt-enabled OpenShift image in four months. (Issues #9479 & #9638.) As a result, it is impossible to test recent KubeVirt versions.

  • Community interest has waned on KubeVirt support in Cockpit.

As a result, we have removed KubeVirt support from cockpit-kubernetes in this release. In the future, we will likely remove all of cockpit-kubernetes.

Migration path

Fedora 29 will keep cockpit-kubernetes, but it will be dropped from the “rawhide” rolling release. (Rawhide is the unstable version of Fedora; it is currently what will become Fedora 30.) Thus, cockpit-kubernetes will still be supported for approximately the next year and a half.

During this time, the Cockpit team suggests migrating to OpenShift Console’s native KubeVirt support.

Stabilization & security fixes

Cockpit 178 also includes several minor stabilization improvements and a denial of service security fix.

Try it out

Cockpit 178 is available now:

What is the relationship between FSF and FSFE?

Posted by Daniel Pocock on September 18, 2018 11:21 PM

Ever since I started blogging about my role in FSFE as Fellowship representative, I've been receiving communications and queries from various people, both in public and in private, about the relationship between FSF and FSFE. I've written this post to try and document my own experiences of the issue; maybe some people will find it helpful. These comments have also been shared on the LibrePlanet mailing list for discussion (subscribe here).

Being the elected Fellowship representative means I am both a member of FSFE e.V. and also possess a mandate to look out for the interests of the community of volunteers and donors (they are not members of FSFE e.V.). In both capacities, I feel uncomfortable about the current situation due to the confusion it creates and the risk that volunteers or donors may be misled.

The FSF has a well known name associated with a distinctive philosophy. Whether people agree with that philosophy or not, they usually know what FSF believes in. That is the power of a brand.

When people see the name FSFE, they often believe it is a subsidiary or group working within the FSF. The way that brands work, people associate the philosophy with the name, just as somebody buying a Ferrari in Berlin expects it to do the same things that a Ferrari does in Boston.

To give an example, when I refer to "our president" in any conversation, people not knowledgeable about the politics believe I am referring to RMS. More specifically, if I say to somebody "would you like me to see if our president can speak at your event?", some people think it is a reference to RMS. In fact, FSFE was set up as a completely independent organization with distinct membership and management and therefore a different president. When I try to explain this to people, they sometimes lose interest and the conversation can go cold very quickly.

FSFE leadership have sometimes diverged from FSF philosophy, for example, it is not hard to find some quotes about "open source" and one fellow recently expressed concern that some people behave like "FSF Light". But given that FSF's crown jewels are the philosophy, how can an "FSF Light" mean anything? What would "Ferrari Light" look like, a red lawnmower? Would it be a fair use of the name Ferrari?

Some concerned fellows have recently gone as far as accusing the FSFE staff of effectively domain squatting or trolling the FSF (I can't link to that because of FSFE's censorship regime). When questions appear about the relationship in public, there is sometimes a violent response with no firm details. (I can't link to that either because of FSFE's censorship regime)

The FSFE constitution calls on FSFE to "join forces" with the FSF and sometimes this appears to happen but I feel this could be taken further.

FSF people have also produced vast amounts of code (the GNU Project) and some donors appear to be contributing funds to FSFE in gratitude for that or in the belief they are supporting that. However, it is not clear to me that funds given to FSFE support that work. As Fellowship representative, a big part of my role is to think about the best interests of those donors and so the possibility that they are being confused concerns me.

Given the vast amounts of money and goodwill contributed by the community to FSFE e.V., including a recent bequest of EUR 150,000, and the direct questions about this issue, I feel it is becoming more important for both organizations to clarify the relationship.

FSFE has a transparency page on the web site and this would be a good place to publish all documents about their relationship with FSF. For example, FSFE could publish the documents explaining their authorization to use a name derived from FSF and the extent to which they are committed to adhere to FSF's core philosophy and remain true to that in the long term. FSF could also publish some guidelines about the characteristics of a sister organization, especially when that organization is authorized to share the FSF's name.

In the specific case of sister organizations who benefit from the tremendous privilege of using the FSF's name, could it also remove ambiguity if FSF mandated the titles used by officers of sister organizations? For example, the "FSFE President" would be referred to as "FSFE European President", or maybe the word president could be avoided in all sister organizations.

People also raise the question of whether FSFE can speak for all Europeans given that it only has a large presence in Germany and other organizations are bigger in other European countries. Would it be fair for some of those other groups to aspire to sister organization status and name-sharing rights too? Could dozens of smaller FSF sister organizations dilute the impact of one or two who go off-script?

Even if FSFE was to distance itself from FSF or even start using a new name and philosophy, as a member, representative and also volunteer I would feel uncomfortable with that as there is a legacy of donations and volunteering that have brought FSFE to the position the organization is in today.

That said, I would like to emphasize that I regard RMS and the FSF, as the original FSF, as having the final authority over the use of the name and I fully respect FSF's right to act unilaterally, negotiate with sister organizations or simply leave things as they are.

If you have questions or concerns about this topic, I would invite you to raise them on the LibrePlanet-discuss mailing list or feel free to email me directly.

Linus’s awakening

Posted by Ben Cotton on September 18, 2018 12:02 PM

It may be the biggest story in open source in 2018, a year that saw Microsoft purchase GitHub. Linus Torvalds replaced the Code of Conflict for the Linux kernel with a Code of Conduct. In a message on the Linux Kernel Mailing List (LKML), Torvalds explained that he was taking time off to examine the way he led the kernel development community.

Torvalds has taken a lot of flak for his style over the years, including on this blog. While he has done an excellent job shepherding the technical development of the Linux kernel, his community management has often — to put it mildly — left something to be desired. Abusive and insulting behavior is corrosive to a community, and Torvalds has spent the better part of the last three decades enabling and partaking in it.

But he has seen the light, it would seem. To an outside observer, this change is rather abrupt, but it is welcome. Reaction to his message has been mixed. Some, like my friend Jono Bacon, have advocated supporting Linus in his awakening. Others take a more cynical approach:


I understand Kelly’s position. It’s frustrating to push for a more welcoming and inclusive community, be met with insults, and then watch everyone celebrate when someone finally comes around. Kelly and others who feel like her are absolutely justified in their position.

For myself, I like to think of it as a modern parable of the prodigal son. As tempting as it is to reject those who awaken late, it is better than them not waking at all. If Linus fails to follow through, it would be right to excoriate him. But if he does follow through, it can only improve the community around one of the most important open source projects. And it will set an example for other projects to follow.

I spend a lot of time thinking about community, particularly since I joined Red Hat as the Fedora Program Manager a few months ago. Community members — especially those in a highly-visible role — have an obligation to model the kind of behavior the community needs. This sometimes means a patient explanation when an angry rant would feel better. It can be demanding and time-consuming work. But an open source project is more than just the code; it’s also the community. We make technology to serve the people, so if our communities are not healthy, we’re not doing our jobs.

The post Linus’s awakening appeared first on Blog Fiasco.

Thunderbird 60 with title bar hidden

Posted by Martin Stransky on September 18, 2018 08:58 AM

Many users like the hidden system title bar as a Firefox feature, although it's not finished yet. But we're very close, and I hope to have Firefox 64 in a shape where the title bar can be disabled by default, at least on GNOME, matching Firefox's look on Windows and Mac.

Thunderbird 60 was finally released for Fedora and comes with a basic version of the feature, as it was introduced in Firefox 60 ESR. There's a simple checkbox on the “Customize” page in Firefox, but Thunderbird is missing such an easy switch.

To disable the title bar in Thunderbird 60, go to the Edit -> Preferences menu and choose the Advanced tab. Click Config Editor in the bottom left corner of the page, open it, and look for mail.tabs.drawInTitlebar. Double-click on it and your bird should be titleless 🙂
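
The same preference can also be set by hand in the profile's prefs.js while Thunderbird is closed (the profile path below is an assumption; yours will differ):

```js
// ~/.thunderbird/xxxxxxxx.default/prefs.js -- edit only while Thunderbird is not running
user_pref("mail.tabs.drawInTitlebar", true);
```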

Say thank you this November during Fedora Appreciation Week 2018

Posted by Fedora Community Blog on September 18, 2018 08:30 AM

Fedora Appreciation Week is a new event this year organized by the Fedora Community Operations (CommOps) team. Fedora Appreciation Week, abbreviated as FAW, is a week-long event to celebrate the efforts of Fedora Project contributors and to say “thank you” to each other. Fedora Appreciation Week begins Monday, November 5, 2018 and runs until Sunday, November 11, 2018.

What is it?

The Ubuntu Community Appreciation Day inspired the CommOps team to organize a similar event of appreciation for contributors who make Fedora what it is. This includes all types of contributors, from development, design, infrastructure, marketing, engineering, and more.

During this time of appreciation, users and contributors alike are highly encouraged to select either an individual or a group of contributors to thank for their efforts in Fedora. Appreciation can be given as a karma cookie in IRC, a short note of thanks on a mailing list, or a longer form appreciation such as a Fedora Happiness Packet.

This year’s Fedora Appreciation Week will occur during the 15th anniversary of the Fedora Project on November 6, 2018.

Why have an Appreciation Week?

Most Fedora contributors do not get to work together in the same building or office. Our daily interactions are usually in text (e.g. IRC, emails, wikis, etc.). Even though these work well and we accomplish our goals, we lose touch with the human side of contributing. Fedora contributors aren’t robots, but real people! Fedora Appreciation Week is a chance to remember the Fedora community is a community of people, and to thank our colleagues and friends who might be halfway across the room or halfway across the planet.

How to participate in Appreciation Week

More information about how to participate will come in October. In the meanwhile, consider writing a Contributor Story! Contributor Stories are just that: the record of our best moments with our Fedora friends. The story can be about our work in Fedora or something personal or unique which you would like to share with the community. Selected stories will be shared on the Community Blog every day during Fedora Appreciation Week.

See some examples and consider writing one of your own before Fedora Appreciation Week rolls around on Monday, November 5, 2018.

Photo by “My Life Through A Lens” on Unsplash.

The post Say thank you this November during Fedora Appreciation Week 2018 appeared first on Fedora Community Blog.

How to fix missing Python for Ansible in Fedora Vagrant

Posted by Justin W. Flory on September 18, 2018 08:30 AM

Recently, I started to use Vagrant to test Ansible playbooks on Fedora machines. I’m using the Fedora 28 cloud base image. However, when I tried to provision my Vagrant box, I was warned the Python binary is missing.

$ vagrant provision
==> default: Running provisioner: ansible...
    default: Running ansible-playbook...

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
fatal: [default]: FAILED! => {"changed": false, "module_stderr": "Shared connection to closed.\r\n", "module_stdout": "\r\n/bin/sh: /usr/bin/python: No such file or directory\r\n", "msg": "MODULE FAILURE", "rc": 127}
	to retry, use: --limit @playbook.retry

Problem: Python 3 by default

This error appears because Fedora 28 does not provide a Python 2 binary by default. Only Python 3 is provided on the base cloud image. I verified this by SSHing into the Vagrant box.

[jflory@vagrant-host vagrant]$ vagrant ssh
[vagrant@localhost ~]$ dnf list installed | grep -i python

Annoyingly, I must install Python 2 manually in the box each time it fails to provision. Surely, there is an easier way? Fortunately, StackOverflow came to the rescue.

Solution: ansible.extra_vars

It’s possible to tell Ansible where the Python binary is located by passing the path to the python3 binary in your Vagrantfile.

# Provisioning configuration for Ansible.
config.vm.provision :ansible do |ansible|
  ansible.playbook = "playbook.yml"
  ansible.extra_vars = { ansible_python_interpreter: "/usr/bin/python3" }
end

Adding these changes to your Vagrantfile allows Ansible to successfully run on the Fedora Vagrant guest. Python is successfully located.
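
If you would rather keep the override out of the Vagrantfile, the same variable can live in a static Ansible inventory instead (a sketch; inventory.ini is a hypothetical file you would point the provisioner's ansible.inventory_path at, and default is Vagrant's usual machine name):

```ini
# inventory.ini
default ansible_python_interpreter=/usr/bin/python3
```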

This is an annoying workaround, but it solves the issue and lets you successfully test and iterate changes on Fedora systems. Here’s hoping the Fedora cloud image maintainers add a default /usr/bin/python symlink pointing to /usr/bin/python3 in the future.

The post How to fix missing Python for Ansible in Fedora Vagrant appeared first on Justin W. Flory's Blog.

How to install more wallpaper packs on Fedora Workstation

Posted by Fedora Magazine on September 18, 2018 02:50 AM

Every release, the Fedora Design team creates a new default wallpaper for Fedora. In addition to the default wallpaper, the Fedora repositories also contain a set of extra Supplemental Wallpapers for each release. These older wallpapers are not installed by default, but are easily installed from the Fedora Repositories. If you have just set up a fresh install of Fedora, and want to expand your choices for your desktop wallpaper, the older Fedora wallpapers are a great choice.

This post lists the older wallpapers available in the Fedora repositories, and how to install them on your current Fedora install. On Fedora Workstation, after you have installed your desired pack, the wallpapers will show up in the Wallpapers tab of the Background chooser in Settings.

Note: If you are using a desktop environment other than the default for Fedora Workstation (GNOME), there are also packages tailored to some of the more popular alternative desktops. In most of the examples below, simply change the gnome suffix in the dnf install line to kde, mate, or xfce when installing the package.
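
As the commands below show, packs from Fedora 21 onward follow a uniform naming pattern, so the package name for any recent release can be assembled mechanically (a small illustration; earlier releases use codenames like heisenbug instead):

```shell
# Package names since Fedora 21 follow f<NN>-backgrounds[-extras]-<desktop>
release=26
desktop=gnome
echo "f${release}-backgrounds-${desktop}"          # prints f26-backgrounds-gnome
echo "f${release}-backgrounds-extras-${desktop}"   # prints f26-backgrounds-extras-gnome
```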

Fedora 28 Wallpapers

Fedora 28 Default Wallpaper

To install the Fedora 28 default wallpaper, use the following command in the Terminal:

sudo dnf install f28-backgrounds-gnome

Fedora 28 Supplemental Wallpapers

To install the Fedora 28 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f28-backgrounds-extras-gnome

Fedora 27 Wallpaper

To install the Fedora 27 default wallpaper, use the following command in the Terminal:

sudo dnf install f27-backgrounds-gnome

Fedora 26 Wallpapers

Fedora 26 Default Wallpaper


To install the Fedora 26 default wallpaper, use the following command in the Terminal:

sudo dnf install f26-backgrounds-gnome

Fedora 26 Supplemental Wallpapers

To install the Fedora 26 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f26-backgrounds-extras-gnome

Fedora 25 Wallpapers

Fedora 25 Default Wallpaper

To install the Fedora 25 default wallpaper, use the following command in the Terminal:

sudo dnf install f25-backgrounds-gnome

Fedora 25 Supplemental Wallpapers

To install the Fedora 25 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f25-backgrounds-extras-gnome

Fedora 24 Wallpapers

Fedora 24 Default Wallpaper

To install the Fedora 24 default wallpaper, use the following command in the Terminal:

sudo dnf install f24-backgrounds-gnome


Fedora 24 Supplemental Wallpapers

To install the Fedora 24 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f24-backgrounds-extras-gnome



Fedora 23 Wallpapers

Fedora 23 Default Wallpaper

To install the Fedora 23 default wallpaper, use the following command in the Terminal:

sudo dnf install f23-backgrounds-gnome

Fedora 23 Supplemental Wallpapers

To install the Fedora 23 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f23-backgrounds-extras-gnome

Fedora 22 Wallpapers

Fedora 22 Default Wallpaper

To install the Fedora 22 default wallpaper, use the following command in the Terminal:

sudo dnf install f22-backgrounds-gnome

Fedora 22 Supplemental Wallpapers

To install the Fedora 22 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f22-backgrounds-extras-gnome



Fedora 21 Wallpapers

Fedora 21 Default Wallpaper

To install the Fedora 21 default wallpaper, use the following command in the Terminal:

sudo dnf install f21-backgrounds-gnome

Fedora 21 Supplemental Wallpapers

To install the Fedora 21 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install f21-backgrounds-extras-gnome



Fedora 20 Wallpapers

Fedora 20 Default Wallpaper

To install the Fedora 20 default wallpaper, use the following command in the Terminal:

sudo dnf install heisenbug-backgrounds-gnome

Fedora 20 Supplemental Wallpapers

To install the Fedora 20 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install heisenbug-backgrounds-extras-gnome



Fedora 19 Wallpapers

Fedora 19 Default Wallpaper

To install the Fedora 19 default wallpaper, use the following command in the Terminal:

sudo dnf install schroedinger-cat-backgrounds-gnome

Fedora 19 Supplemental Wallpapers

To install the Fedora 19 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install schroedinger-cat-backgrounds-extras-gnome



Fedora 18 Wallpapers

Fedora 18 Default Wallpaper

To install the Fedora 18 default wallpaper, use the following command in the Terminal:

sudo dnf install spherical-cow-backgrounds-gnome

Fedora 18 Supplemental Wallpapers

To install the Fedora 18 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install spherical-cow-backgrounds-extras-gnome



Fedora 17 Default Wallpaper

To install the Fedora 17 default wallpaper, use the following command in the Terminal:

sudo dnf install beefy-miracle-backgrounds-gnome



Fedora 16 Wallpapers

Fedora 16 Default Wallpaper

To install the Fedora 16 default wallpaper, use the following command in the Terminal:

sudo dnf install verne-backgrounds-gnome

Fedora 16 Supplemental Wallpapers

To install the Fedora 16 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install verne-backgrounds-extras-gnome



Fedora 15 Wallpapers

Fedora 15 Default Wallpaper

The default wallpaper for Fedora 15 was a remix of the default GNOME wallpaper at the time. To install the Fedora 15 default wallpaper, use the following command in the Terminal:

sudo dnf install lovelock-backgrounds-stripes-gnome

Fedora 15 Alternate Wallpaper

Fedora 15 also shipped with an alternate wallpaper, that was used by default on non-GNOME spins. To get this wallpaper, use the following command in the Terminal:

sudo dnf install lovelock-backgrounds-gnome



Fedora 14 Wallpapers

Fedora 14 Default Wallpaper

To install the Fedora 14 default wallpaper, use the following command in the Terminal:

sudo dnf install laughlin-backgrounds-gnome

Fedora 14 Supplemental Wallpapers

To install the Fedora 14 supplementary wallpapers, use the following command in the Terminal:

sudo dnf install laughlin-backgrounds-extras-gnome


Fedora 13 Default Wallpaper

To install the Fedora 13 default wallpaper, use the following command in the Terminal:

sudo dnf install goddard-backgrounds-gnome


Fedora 12 Default Wallpaper

To install the Fedora 12 default wallpaper, use the following command in the Terminal:

sudo dnf install constantine-backgrounds


Fedora 11 Default Wallpaper

To install the Fedora 11 default wallpaper, use the following command in the Terminal:

sudo dnf install leonidas-backgrounds-lion


Fedora 10 Default Wallpaper

To install the Fedora 10 default wallpaper, use the following command in the Terminal:

sudo dnf install solar-backgrounds


Fedora 9 Default Wallpaper

To install the Fedora 9 default wallpaper, use the following command in the Terminal:

sudo dnf install desktop-backgrounds-waves


Fedora 8 Default Wallpaper

To install the Fedora 8 default wallpaper, use the following command in the Terminal:

sudo dnf install fedorainfinity-backgrounds

Switching to Universal Ctags

Posted by Lukas "lzap" Zapletal on September 18, 2018 12:00 AM

After some time spent with exuberant-ctags and ripper-tags, I am switching over to universal-ctags. Support for Ruby, and for other languages like Go, is vastly improved over exuberant-ctags and even ripper-tags. Here are my git hooks, which I use for all my project checkouts. It’s fast, fluent and works across all my projects.
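
The hooks themselves are not reproduced in the post; a minimal version of the usual pattern (regenerate the tags file after each checkout, skipping silently when ctags is absent) might look like this, shown here against a throwaway directory:

```shell
# Set up a scratch repo and install a post-checkout hook that rebuilds tags
mkdir -p /tmp/ctags-demo && cd /tmp/ctags-demo
git init -q . 2>/dev/null || mkdir -p .git/hooks
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
# Rebuild the tags file in the background; do nothing if ctags is missing
command -v ctags >/dev/null 2>&1 || exit 0
ctags -R --tag-relative=yes -f .git/tags . >/dev/null 2>&1 &
EOF
chmod +x .git/hooks/post-checkout
.git/hooks/post-checkout && echo "hook installed"
```

The same script also works as post-commit and post-merge; symlink it under those names in .git/hooks.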

For navigation I stick with Vim, sometimes I use fzf.vim plugin together with fzf for terminal.

For longer coding sessions, I am trying GNOME Builder, which looks great and with every single release is catching up with Sublime Text 3, except it is fully open source and has usable Vim emulation. I’ve tried Atom and VSCode but these slow things are not for me.

Fedora Firefox – GCC/CLANG dilemma

Posted by Martin Stransky on September 17, 2018 09:00 PM

After reading Mike’s blog post about the official Mozilla Firefox switch to LLVM Clang, I was wondering whether we should also use that setup for the official Fedora Firefox binaries.

The numbers look strong but, as Honza Hubicka mentioned, Mozilla uses the pretty ancient GCC 6 to create binaries and it’s not very fair to compare that with the up-to-date LLVM Clang 6.

Also, if I’m reading the Mozilla bug correctly, PGO/LTO is not yet enabled for Linux; only plain optimized builds are used for now… which means the transition at Mozilla is not as far along as I expected.

I also went through some Phoronix tests which indicate there’s no black-and-white situation there, although Mike claimed that LLVM Clang is generally better than GCC. But it’s possible that the Firefox codebase somehow fits LLVM Clang better than GCC.

After some consideration I think we’ll stay with GCC for now, and I’m going to compare Fedora GCC 8 builds with the Mozilla LLVM Clang ones when they are available. Neither build can use -march=native, so it should be an equal comparison. Also, Fedora should enable the PGO+LTO GCC setup to get the best from GCC.

[Update] I was wrong, and PGO+LTO should be enabled for Linux builds now as well. The numbers look very good and I wonder if we can match them with GCC 8! 🙂

Bodhi 3.10.0 released

Posted by Bodhi on September 17, 2018 03:50 PM

Dependency changes

The composer now requires hawkey.

Server upgrade instructions

This release contains database migrations. To apply them, run:

$ sudo -u apache /usr/bin/alembic -c /etc/bodhi/alembic.ini upgrade head

Features

  • It is no longer an error if a developer tries to create an override for a build that already had
    an override. Instead, Bodhi helpfully edits the old override automatically (#2030).
  • The UI displays the date that expired overrides became expired (#2136).
  • Security updates now require severity to be set (#2206).
  • The Waiver UI now gives the user more context (#2270 and #2363).
  • The CLI can be used to edit Release mail templates (#2475).
  • A new clean_old_composes setting allows admins to disable the automatic compose cleanup
    feature that was new in Bodhi 3.9.0 (#2561).
  • The API can filter releases by state (beb69a0).
  • The CLI now has a --debug flag on a couple of commands (1bd7617).
  • The bindings have some debug level logging when retrieving Greenwave status (b55fa45).
  • The UI now makes it clear that only authenticated users can leave karma on updates
  • Bodhi can now manage Flatpaks (1a6c4e8).
  • Bodhi now ships a /usr/bin/bodhi-skopeo-lite, which is intended to be an alternative for use
    with the skopeo.cmd setting. It allows for multi-arch containers and Flatpaks to be managed by
    Bodhi (a0496fc).
  • The composer now uses librepo/hawkey to do much more extensive testing on the produced yum
    repositories to ensure they are valid (7dda554).

Bug fixes

  • More space was added around some buttons so they don't touch on small screens (#1902).
  • The bodhi releases subcommands no longer prompt for password when not necessary
  • The submit feedback button now appears on low resolution screens (#2509).
  • Articles were fixed in a tooltip on the update page (075f8a9).
  • The CLI can again display missing required tests (cf75ff8).
  • Fix a failure that sometimes occurred when editing multi-build updates (d997ed4).
  • Unknown Koji tags will no longer cause an Exception when creating new updates

Development improvements

  • Line test coverage has reached 100% (2477fc8).
  • A fake Pungi is used in the Vagrant environment to speed up vagrant up (1b4f5fc).
  • No tests are skipped on Python 3 anymore (44d46e3).


The following developers contributed to Bodhi 3.10.0:

  • Anatoli Babenia
  • Clement Verna
  • Mattia Verga
  • Owen W. Taylor
  • Patrick Uiterwijk
  • Pierre-Yves Chibon
  • Ralph Bean
  • Rick Elrod
  • Vismay Golwala
  • Randy Barlow

My summary of the OpenStack Stein PTG in Denver

Posted by Marios Andreou on September 17, 2018 03:00 PM

After only 3 take-offs and landings I was very happy to participate in the Stein PTG in Denver. This is a brief summary, with pointers, of the sessions or rooms I attended, in the order they happened (Stein PTG Schedule).

Upgrades CI with the stand-alone deployment

We had a productive impromptu round table (weshay++) in one of the empty rooms with the tripleo ci folks present (weshay, panda, sshnaidm, arxcruz, marios), the tripleo upgrades folks present (chem and holser), as well as emeritus PTL mwahaha, around the stand-alone deployment and how we can use it for upgrades CI. We introduced the proposed spec, and one of the main topics discussed was: ultimately, is it worth solving all of these subproblems only to end up with some approximation of the upgrade?

The consensus was yes, since we can have two types of upgrade jobs: one using the stand-alone to CI the actual tasks, i.e. upgrade_tasks and deployment_tasks for each service in the tripleo-heat-templates, and another job (the current job, which will be adapted) to CI the upgrade workflow itself (tripleoclient, mistral workflows, etc.). There was general consensus on this approach between the upgrades and CI representatives, so that we could try to sell it to the wider team in the tripleo room on Wednesday together.

Upgrades Special Interest Group

Room etherpad.

Monday afternoon was spent in the upgrades SIG room. There was first discussion of the placement api extraction and how this would have to be dealt with during the upgrade, with a solution sketched out around the db migrations required.

This led into a discussion around pre-upgrade checks that could deal with things like db migrations (or just check whether something is missing and fail accordingly before the upgrade). As I was reminded during the lunchtime presentations, pre-upgrade checks are one of the Stein community goals (together with python-3). The idea is that each service would own a set of checks to be performed before an upgrade is run, invoked via the openstack client (something along the lines of 'openstack pre-upgrade-check nova'). I believe there is already some implementation from the nova team, but I don't readily have details.

There was then a productive discussion about the purpose and direction of the upgrades SIG. One of the points raised was that the SIG should not be just about the fast forward upgrade, even though that has been a main focus until now. The pre-upgrade checks are a good example of that, and the SIG will try and continue to promote these, with adoption by all the OpenStack services. On that note I proposed that whilst the services themselves will own the service-specific pre-upgrade checks, it’s the deployment projects which will own the pre-upgrade infrastructure checks, such as a healthy cluster/database or responding service endpoints.

There was of course discussion around the fast forward upgrade, with status updates from the deployment projects present (kolla-ansible, TripleO, charms, OSA). TripleO is the only project with an implemented workflow at present. Finally there was a discussion about whether we’re doing better in terms of operator experience for upgrades in general and how we can continue to improve (rolling upgrades were one of the points discussed here).

Edge room

Room etherpad Room etherpad2 Use cases Edge primer

I was only in attendance for the first part of this session, which was about understanding the requirements (and hopefully continuing to find the common ground). The room started with a review of the various proposed use cases from Dublin and of any work done since then. One of the main points raised by shardy is that in TripleO, whilst we have a number of exploratory efforts ongoing (like split controlplane for example), it would be good to have a specific architecture to aim for, and that is missing currently. It was agreed that the existing use cases will be extended to include the proposed architecture, and that these can serve as a starting point for anyone looking to deploy with edge locations.

There are pointers to the rest of the edge sessions in the etherpad above.

TripleO room

Room etherpad Team picture

The order of sessions was slightly revised from that listed in the etherpad above because the East coast storms forced folks to change travel plans. The following order is to the best of my recollection ;)

TripleO and Edge cloud deployments

Session etherpad

There was first a summary from the Edge room from shardy and then tripleo-specific discussion around the current work (split controlplane). There was some discussion around possibly using/repurposing “the multinode job” for multiple stacks to simulate the Edge locations in ci. There was also discussion around the networking aspects (though this will depend on the architecture, which we haven’t yet fully targeted) with respect to the tripleo deployment networks (controlplane/internalapi etc.) in an edge deployment. Finally there was consideration of the work needed in tripleo-common and the mistral workflows for the split controlplane deployment.

OS / Platform

(tracked on main tripleo etherpad linked above)

The main items discussed here were Python 3 support, removing instack-undercloud and “that upgrade” to Centos8 on Stein.

For Python3 the discussion included the fact that in TripleO we are bound by whatever python the deployed services support (as well as what the upstream distribution will be i.e. Centos 7/8 and which python ships where).

For the Centos8/Stein upgrade, the upgrades folks chem and holser led the discussion, outlining how we will need a completely new workflow, which may be dictated in large part by how Centos8 is delivered. One of the approaches discussed here was to use a completely external/distinct upgrade workflow for the OS, versus the TripleO-driven OpenStack upgrade itself. We got into more detail about this during the Baremetal session (see below).

TripleO CI

Session etherpad

One of the first items raised was the stand-alone deployment and its use in ci. The general proposal is that we should use a lot more of it! In particular, to replace existing jobs (like scenarios 1/2) with a standalone deployment.

There was also discussion around the stand-alone for the upgrades ci as we agreed with the upgrades folks on Monday (spec). The idea of service vs workflow upgrades was presented/solidified here and I have just updated v8 of the spec accordingly to emphasise this point.

Other points discussed in the CI session were testing ovb in infra and how we could make jobs voting. The first move will be towards removing te-broker.

There was also some consideration of the involvement of the ci team with other squads and vice versa. There is a new column in our trello board called “requests from other DFG”.

A further point raised was the reproducer scripts and future directions, including running (and not only generating) these in ci. As a related side note, it sounds like folks are using the reproducer and having some successes.

Ansible / Framework

(tracked on main tripleo etherpad linked above)

In this session an overview of the work towards splitting out the ansible tasks from the tripleo-heat-templates into re-usable roles was given by jillr and slagle. More info and pointers in the main tripleo etherpad above.


Session etherpad

Discussion around the workflow to change overcloud/service passwords (this is currently borked!). In particular there are problems around trying to CI this, since the deploy takes too long to fit deploy + stack update for the passwords and validation within the timeout. This could possibly be a 3rd-party (but then non-voting) job for now. There was also an overview of work towards using Castellan with TripleO, as well as discussion around selinux and locking down ssh.


Session etherpad

CLI/UI feature parity is a main goal for this cycle (and probably beyond; it seems there is a lot to do), as are the plan management operations around it. Also good discussion around validations, with Tengu joining remotely via Bluejeans to champion the effort of providing a nice way to run these via the tripleoclient.


Session etherpad

This session started with discussion around metalsmith vs nova on the undercloud and the required upgrade path to make this so. Also considered was overcloud image customization, and there was discussion around network automation (ansible with the python-networking-ansible ml2 driver).

Unexpectedly, the most interesting part of this session for me personally was an impromptu design session started by ipilcher (prompted by a question from phuongh, who I believe was new to the room). The session was about the upgrade to Centos8, and three main approaches were explored: the “big bang” (everything off, upgrade, everything back on), “some kind of rolling upgrade”, and finally supporting either Centos8/Rocky or Centos7/Stein. The first and third were deemed unworkable, but there was a very lively and well-engaged group design session trying to navigate to a workable process for the ‘rolling upgrade’ aka split personality. Thanks to ipilcher (via bandini) the whiteboards looked like this.

Flatpak on windows

Posted by Alexander Larsson on September 17, 2018 02:44 PM

As I teased about last week I recently played around with WSL, which lets you run Linux applications on Windows. This isn’t necessarily very useful, as there isn’t really a lack of native applications on Windows, but it is still interesting from a technical viewpoint.

I created a wip/WSL branch of flatpak that has some workarounds needed for flatpak to work, and wrote some simple docs on how to build and test it.

There are some really big problems with this port. For example, WSL doesn’t support seccomp or network namespaces which removes some of the utility of the sandbox. There is also a bad bug that makes read-only bind-mounts not work for flatpak, which is really unsafe as apps can modify themselves (or the runtime). There were also various other bugs that I reported. Additionally, some apps rely on things on the linux host that don’t exist in the WSL environment (such as pulseaudio, or various dbus services).

Still, it’s amazing that it works as well as it does. I was able to run various games, gnome and kde apps, and even the linux version of telegram. Massive kudos to the Microsoft developers who worked on this!

I know you crave more screenshots, so here is one:

Episode 114 - Review of "Click Here to Kill Everybody"

Posted by Open Source Security Podcast on September 17, 2018 12:13 AM
Josh and Kurt review Bruce Schneier's new book Click Here to Kill Everybody. It's a book everyone could benefit from reading. It does a nice job explaining many existing security problems in a simple manner.

<iframe allowfullscreen="" height="90" mozallowfullscreen="" msallowfullscreen="" oallowfullscreen="" scrolling="no" src="https://html5-player.libsyn.com/embed/episode/id/7052800/height/90/theme/custom/autoplay/no/autonext/no/thumbnail/yes/preload/no/no_addthis/no/direction/backward/render-playlist/no/custom-color/6e6a6a/" style="border: none;" webkitallowfullscreen="" width="100%"></iframe>

Show Notes

Fedora 28 : Using AdonisJs web framework.

Posted by mythcat on September 16, 2018 11:15 AM
AdonisJs is a Node.js web framework with a breath of fresh air and drizzle of elegant syntax on top of it.
We prefer developer joy and stability over anything else.
Today I tested this web framework, AdonisJs, on Fedora 28.
The main goal was to use mysql with mariadb from the Fedora 28 distro.
Let's start with the installation of AdonisJs on Fedora 28:
[root@desk mythcat]# npm i --global @adonisjs/cli
/usr/bin/adonis -> /usr/lib/node_modules/@adonisjs/cli/index.js
+ @adonisjs/cli@4.0.7
added 406 packages from 182 contributors in 33.533s
Go back to the regular user:
[root@desk mythcat]# exit
Create the application; I named it myapp:
[mythcat@desk ~]$ adonis new myapp
_ _ _ _
/ \ __| | ___ _ __ (_)___ | |___
/ _ \ / _` |/ _ \| '_ \| / __|_ | / __|
/ ___ \ (_| | (_) | | | | \__ \ |_| \__ \
/_/ \_\__,_|\___/|_| |_|_|___/\___/|___/

[1/6] 🔬 Requirements matched [node & npm]
[2/6] 🔦 Ensuring project directory is clean [myapp]
[3/6] 📥 Cloned [adonisjs/adonis-fullstack-app]
[4/6] 📦 Dependencies installed
[5/6] 📖 Environment variables copied [.env]
[6/6] 🔑 Key generated [adonis key:generate]

🚀 Successfully created project
👉 Get started with the following commands
Let's test the default myapp with these commands:
$ cd myapp
$ adonis serve --dev

[mythcat@desk ~]$ cd myapp
[mythcat@desk myapp]$ adonis serve --dev

> Watching files for changes...

2018-09-16T09:47:46.799Z - info: serving app on
This is the result of running myapp at the web address shown above:
Let's see the folders and files from AdonisJS:
[mythcat@desk myapp]$ ls
ace config node_modules package-lock.json README.md server.js
app database package.json public resources start
The route configuration can be seen here:
[mythcat@desk myapp]$ vim  start/routes.js 
'use strict'

/*
| Routes
| Http routes are entry points to your web application. You can create
| routes for different URL's and bind Controller actions to them.
| A complete guide on routing is available here.
| http://adonisjs.com/docs/4.1/routing
*/

/** @type {import('@adonisjs/framework/src/Route/Manager')} */
const Route = use('Route')

Route.on('/').render('welcome')

This is telling Adonis that when the root of the site is loaded, render a template/view called welcome.
That welcome template can be found in /resources/views/welcome.edge.
[mythcat@desk myapp]$ cd resources/
[mythcat@desk resources]$ ll
total 0
drwxrwxr-x. 2 mythcat mythcat 26 Sep 16 12:46 views
[mythcat@desk resources]$ cd views/
[mythcat@desk views]$ ll
total 4
-rw-rw-r--. 1 mythcat mythcat 339 Sep 16 12:46 welcome.edge
Let's see the source code of this default webpage:
[mythcat@desk views]$ vim welcome.edge 
For example, if you change welcome to index in start/routes.js, then you need to rename welcome.edge to index.edge.
For the CSS, you can make changes in public/style.css:
[mythcat@desk myapp]$ cd public/
[mythcat@desk public]$ vim style.css

@import url('https://fonts.googleapis.com/css?family=Montserrat:300');

html, body {
height: 100%;
width: 100%;
}

body {
font-family: 'Montserrat', sans-serif;
font-weight: 300;
background-image: url("/splash.png");
background-color: #220052;
}

* {
margin: 0;
padding: 0;
Using a mysql database is simple. On Fedora 28 you can use mariadb; let's install it.
[mythcat@desk myapp]$ su
[root@desk myapp]# dnf install mariadb mariadb-server
[root@desk myapp]# systemctl start mariadb
[root@desk myapp]# systemctl status mariadb
● mariadb.service - MariaDB 10.2 database server
You need to make changes in the .env file in your project folder.
[mythcat@desk myapp]$ vim .env
Use these changes for mariadb:
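The post doesn't reproduce the actual values here; as a sketch, the database-related keys in .env for a local MariaDB instance (assuming the adonis database and root user created later in this post, with no password set) would look something like:

```
DB_CONNECTION=mysql
DB_HOST=127.0.0.1
DB_PORT=3306
DB_USER=root
DB_PASSWORD=
DB_DATABASE=adonis
```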
Install the mysql driver for AdonisJs with this command:
[mythcat@desk myapp]$ adonis install mysql
[1/1] 📦 Dependencies installed [mysql]
Use these mysql commands to create the database:
[mythcat@desk myapp]$ mysql -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 18
Server version: 10.2.17-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create database adonis;
Query OK, 1 row affected (0.11 sec)

MariaDB [(none)]> exit;
Let's test the migration commands, which let you create and alter tables from migration files.
[mythcat@desk myapp]$ adonis migration:run
migrate: 1503248427885_user.js
migrate: 1503248427886_token.js
Database migrated successfully in 4.11 s
[mythcat@desk myapp]$ adonis make:migration jobs
> Choose an action undefined
✔ create database/migrations/1537095058367_jobs_schema.js
Now that our database and tables have been created, I can create a model for handling jobs table and associated data.
The next tasks to follow depend on your website:

  • Creating a Model 
  • Creating a Controller 
  • User Authentication

Configuring, initializing and using tripwire

Posted by Didier Fabert (tartare) on September 15, 2018 03:00 PM

With the release of the new version of tripwire, I'm taking the opportunity to write a tutorial on configuring, initializing and using tripwire.
Tripwire can only detect the modification of files, so it must be installed and configured right after installing the distribution.

Initial configuration

Initialization consists of creating two key pairs and encrypting the configuration file as well as the policy file. The pair named site is used to encrypt/decrypt the configuration and policy files, and the pair named local is used to encrypt/decrypt the database and the reports.

Manual step-by-step method

Generate the key pair named site:

sudo twadmin -m G -v -S /etc/tripwire/site.key

Generate the key pair named local:

sudo twadmin -m G -v -L /etc/tripwire/$(hostname --fqdn)-local.key

Encrypt the configuration file with the key pair named site:

sudo twadmin -m F -c /etc/tripwire/tw.cfg -S /etc/tripwire/site.key /etc/tripwire/twcfg.txt

Encrypt the policy file with the key pair named site:

sudo twadmin -m P -p /etc/tripwire/tw.pol -S /etc/tripwire/site.key /etc/tripwire/twpol.txt

Using the provided script

This interactive script ships with the RPM package. For automated deployments (via ansible or similar), you will have to use the manual method instead.

sudo tripwire-setup-keyfiles

The Tripwire site and local passphrases are used to sign a  variety  of
files, such as the configuration, policy, and database files.

Passphrases should be at least 8 characters in length and contain  both
letters and numbers.

See the Tripwire manual for more information.

Creating key files...

(When selecting a passphrase, keep in mind that good passphrases typically
have upper and lower case letters, digits and punctuation marks, and are
at least 8 characters in length.)

Enter the site keyfile passphrase:
Verify the site keyfile passphrase:
Generating key (this may take several minutes)...Key generation complete.

(When selecting a passphrase, keep in mind that good passphrases typically
have upper and lower case letters, digits and punctuation marks, and are
at least 8 characters in length.)

Enter the local keyfile passphrase:
Verify the local keyfile passphrase:
Generating key (this may take several minutes)...Key generation complete.

Signing configuration file...
Please enter your site passphrase: 
Wrote configuration file: /etc/tripwire/tw.cfg

A clear-text version of the Tripwire configuration file:
has been preserved for your inspection.  It  is  recommended  that  you
move this file to a secure location and/or encrypt it in place (using a
tool such as GPG, for example) after you have examined it.

Signing policy file...
Please enter your site passphrase: 
Wrote policy file: /etc/tripwire/tw.pol

A clear-text version of the Tripwire policy file:
has been preserved for  your  inspection.  This  implements  a  minimal
policy, intended only to test  essential  Tripwire  functionality.  You
should edit the policy file to  describe  your  system,  and  then  use
twadmin to generate a new signed copy of the Tripwire policy.

Once you have a satisfactory Tripwire policy file, you should move  the
clear-text version to a secure location  and/or  encrypt  it  in  place
(using a tool such as GPG, for example).

Now run "tripwire --init" to enter Database Initialization  Mode.  This
reads the policy file, generates a database based on its contents,  and
then cryptographically signs the resulting  database.  Options  can  be
entered on the command line to specify which policy, configuration, and
key files are used  to  create  the  database.  The  filename  for  the
database can be specified as well. If no  options  are  specified,  the
default values from the current configuration file are used.


Let's create a small script to adjust our policy file. It will comment out, directly in the policy file, the files that are absent from our system.

File /tmp/configure-tripwire.sh

#!/bin/bash
# Comment out, in the policy file, every file reported missing by tripwire --init
while read line
do
        if [ "${line:0:13}" == "### Filename:" ]
        then
                key=$(echo ${line:14} | sed "s;/;\\\\/;g")
                sudo sed -i -e "/${key}/ s/^/#/" /etc/tripwire/twpol.txt
        fi
done

Run the initialization a first time, piping the output to our script.

sudo tripwire --init | bash /tmp/configure-tripwire.sh

Once our policy file has been modified, we encrypt it once again

sudo twadmin -m P -p /etc/tripwire/tw.pol -S /etc/tripwire/site.key /etc/tripwire/twpol.txt

And rerun the initialization

sudo tripwire --init


Tripwire only uses the encrypted versions of the configuration and policy files. You can safely delete or empty the clear-text files: they can always be recovered from their encrypted versions.

  • Recovering the configuration file
    sudo twadmin -m f > /etc/tripwire/twcfg.txt
  • Recovering the policy file
    sudo twadmin -m p > /etc/tripwire/twpol.txt

Usual tasks:

  • Checking the system (in addition to the scheduled job). This is useful after an update, in order to then validate those changes.
    sudo tripwire --check
  • Applying to the database the system changes found during a check and listed in the report
    sudo tripwire --update --accept-all --twrfile /var/lib/tripwire/report/<fully qualified hostname>-<report date>-<report time>.twr
  • Updating the policy, once the /etc/tripwire/twpol.txt file has been modified
    sudo tripwire --update-policy -p /etc/tripwire/tw.pol --secure-mode low /etc/tripwire/twpol.txt
  • Viewing a specific report
    sudo twprint -m r -r /var/lib/tripwire/report/web.tartarefr.eu-20180520-033515.twr
  • Dumping the database
    sudo twprint -m d

Stop using GitHub as a measure of open source contributions

Posted by Joe Brockmeier on September 15, 2018 02:04 PM
With Microsoft and the rest of the tech industry trying to flaunt their open source bona fides, people keep trotting out deeply flawed analyses of “who contributes most to open source” based on … what they can measure on GitHub. For many reasons, using GitHub as the gold standard for open source contributions is extremely biased and doesn’t […]

FPgM report: 2018-37

Posted by Fedora Community Blog on September 14, 2018 08:21 PM
Fedora Program Manager weekly report on Fedora Project development and progress

Here’s your report of what has happened in Fedora Program Management this week. The Fedora 29 Beta was declared No-Go and will move to the alternate target date of 25 September. Also, the Respins SIG announced an updated set of live ISOs for Fedora 28.

I’ve set up weekly office hours in #fedora-meeting. Drop by if you have any questions or comments about the schedule, Changes, elections or anything else.

Upcoming meetings


Fedora 29

  • Beta freeze is underway through September 18.
  • Beta release date is now targeted at 25 September. The GA date remains unchanged.

Fedora 30

Fedora 29 Status


The numbers below reflect the current state of Bugzilla tracking bugs.

ON_QA 39
(total) 41

Deferred to Fedora 30

Fedora 30 Changes

Fedora 30 includes a Change that will cause ambiguous python shebangs to error.  A list of failing builds is available on Taskotron.
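For context, an “ambiguous” shebang is one that does not name a specific Python interpreter, for example (illustrative):

```
#!/usr/bin/python       <- ambiguous: Python 2 or Python 3? will fail the build
#!/usr/bin/python3      <- explicit: fine
```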


The post FPgM report: 2018-37 appeared first on Fedora Community Blog.

Caching mechanism in ansible-bender

Posted by Tomas Tomecek on September 14, 2018 08:05 PM

A few weeks ago I announced a new project — ansible-bender (ab). It’s a simple tool to create container images using Ansible playbooks. The same concept as ansible-container, but ab is only about builds.

Ansible-bender utilizes an ansible connection plugin to invoke a playbook in a container and buildah as the container technology.

Recently I was able to land a caching mechanism: every task result is cached. Since ansible doesn't make such a thing easy, it was quite a feat.

How is it done?

On the other hand, ansible allows you to write custom plugins and modules to extend its functionality: connection, caching, action, and the one I ended up using: callback plugins.

Callback plugins are invoked during a play: when a play starts, before a task is executed, after the task finishes, and more. I used the last two entrypoints to commit the container state after a task finishes, and to load a cached image if the base image and the task remained the same. No big deal, but ab now also has a persistent on-disk database (a json file) to track all builds. This will be really handy when implementing commands such as: restart build, list builds, get build logs and such.

You may be asking: when the plugin is invoked, how does the code figure out which build it is? This was easily solved by an environment variable which holds the build ID. The plugin code then tracks down the build in the database and is able to store the data about the cached item.
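The core idea — derive a deterministic cache key from the base image and the task content, and reuse the previously committed layer on a hit — can be sketched like this (a hypothetical simplification for illustration, not ab's actual code; the names LayerCache, cache_key etc. are made up):

```python
import hashlib
import json

def cache_key(base_image_id, task):
    """Deterministic key derived from the parent layer and the task definition."""
    payload = json.dumps({"base": base_image_id, "task": task}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

class LayerCache:
    """Maps cache keys to committed image IDs, as a callback plugin might."""
    def __init__(self):
        self._layers = {}

    def lookup(self, base_image_id, task):
        # Before a task runs: if this exact task was built on this exact
        # base before, return the cached image so the task can be skipped.
        return self._layers.get(cache_key(base_image_id, task))

    def record(self, base_image_id, task, new_image_id):
        # After a task finishes: commit the container and remember the result.
        self._layers[cache_key(base_image_id, task)] = new_image_id

cache = LayerCache()
base = "python:3-alpine"
task = {"name": "create a file", "file": {"path": "/tmp/x", "state": "touch"}}

assert cache.lookup(base, task) is None   # first build: cache miss
cache.record(base, task, "sha256:cafe")
assert cache.lookup(base, task) == "sha256:cafe"  # rebuild: cache hit
```

Changing either the task definition or the base image produces a different key, which is why a modified task invalidates the cached layer for it and everything after it.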

The caching support is in git master now; I still wanna test it a bit more before releasing 0.2.0.

Here’s how it looks:

$ ab build ./tests/data/basic_playbook.yaml python:3-alpine a-basic-image

PLAY [all] ******************************************************************************************

TASK [Gathering Facts] ******************************************************************************
ok: [a-basic-image-20180614-230625659538-cont]

TASK [print local env vars] *************************************************************************
loaded from cache: '56d49ac2a0f1cdf77ee3a4c0b9ceaece47dee81d1908b9917950d131ac29d29c'
skipping: [a-basic-image-20180614-230625659538-cont]
recording cache hit

TASK [print all remote env vars] ********************************************************************
loaded from cache: '57368def84d5bf6d6d29f3b0d8c565d1249c9df0b4e54679edc3501a07b90b34'
skipping: [a-basic-image-20180614-230625659538-cont]
recording cache hit

TASK [Run a sample command] *************************************************************************
loaded from cache: '8df0d25de40be31a910890fb93f2679593231cc801018e218abe12a426aa7497'
skipping: [a-basic-image-20180614-230625659538-cont]
recording cache hit

TASK [create a file] ********************************************************************************
loaded from cache: '27918a77ac6db100efecc114d5ed76a0ef5e9316bd4454b28bf8971b7205ad40'
skipping: [a-basic-image-20180614-230625659538-cont]
recording cache hit

PLAY RECAP ******************************************************************************************
a-basic-image-20180614-230625659538-cont : ok=1    changed=0    unreachable=0    failed=0

Getting image source signatures
Skipping fetch of repeat blob sha256:73046094a9b835e443af1a9d736fcfc11a994107500e474d0abf399499ed280c
Skipping fetch of repeat blob sha256:2e1059332702e87e617fe70be429c4f4d1b7c7ecfa06f898757889db9b4f7c1c
Skipping fetch of repeat blob sha256:fca20f9a2bbdd0460cb640e2c1c9b9f8f6736e0de3ad6af2e07102389ad8ba9a
Skipping fetch of repeat blob sha256:468ebd6ca830527febf2f900719becebd39e57912fad33bdb2934280d21b97dc
Skipping fetch of repeat blob sha256:252dff90bea0272a4a70ea641c572c712d0b8590e2a5f56eadcb3f215e46207d
Skipping fetch of repeat blob sha256:20504705b9487f5188cd6552c549a7085b4009296e521a4194d803ab2f526bac
Skipping fetch of repeat blob sha256:4f4fb700ef54461cfa02571ae0db9a0dc1e0cdb5577484a6d75e68dc38e8acc1
Copying config sha256:cee792679a80a8511e7acbf3f3ca4f71fb102481f86d49ad5a3481c35698e9b2

 0 B / 5.19 KiB [--------------------------------------------------------------]
 5.19 KiB / 5.19 KiB [======================================================] 0s
Writing manifest to image destination
Storing signatures
Image 'a-basic-image' was built successfully \o/

As you can see, every step was loaded from cache and hence the task itself was skipped, except for setup (Gathering Facts), which is never cached.


There is still room for improvements:

  • Controlling caching per task (e.g. never cache this).
  • Disable/enable caching.
  • Writing more tests.

I couldn’t have done this work without the help of Sviatoslav Sydorenko — he helped me out when I asked: thank you!

F28-20180914 updated Live isos Released

Posted by Ben Williams on September 14, 2018 07:31 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated F28-20180914 Live ISOs, carrying the 4.18.5-200 kernel.

This set of updated ISOs saves about 1GB of updates after a new install.

We would also like to thank Fedora QA for running the following tests on our ISOs:


These can be found at http://tinyurl.com/live-respins. We would also like to thank the following irc nicks for helping test these isos: dowdle and Southern_Gentlem.

As always, we need testers to help with our respins. We have a new badge for people who help test. See us in #fedora-respins on Freenode IRC.

Give Fedora Silverblue a test drive

Posted by Fedora Magazine on September 14, 2018 08:00 AM

Fedora Silverblue is a new variant of Fedora Workstation with rpm-ostree at its core to provide fully atomic upgrades.  Furthermore, Fedora Silverblue is immutable and upgrades as a whole, providing easy rollbacks from updates if something goes wrong. Fedora Silverblue is great for developers using Fedora with good support for container-focused workflows.

Additionally,  Fedora Silverblue delivers desktop applications as  Flatpaks. This provides better isolation / sandboxing of applications, and streamlines updating applications — Flatpaks can be safely updated without reboot.

The Fedora Workstation team is running a Test Day for Silverblue next week, so if you want to try it out, and help out the development effort at the same time, keep reading.

Test Fedora Silverblue

Next Thursday, September 20, 2018, Team Silverblue and Fedora QA are holding a Test Day for users to try out and test this new Fedora Workstation variant.

The wiki page for Silverblue has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events.

PHP version 5.6.38, 7.0.32, 7.1.22 and 7.2.10

Posted by Remi Collet on September 14, 2018 04:48 AM

RPM of PHP version 7.2.10 are available in remi repository for Fedora 28-29 and in remi-php72 repository for Fedora 26-27 and Enterprise Linux  6 (RHEL, CentOS).

RPM of PHP version 7.1.22 are available in remi repository for Fedora 26-27 and in remi-php71 repository for Enterprise Linux (RHEL, CentOS).

RPM of PHP version 7.0.32 are available in remi-php70 repository for Enterprise Linux (RHEL, CentOS).

RPM of PHP version 5.6.38 are available in remi-php56 repository for Enterprise Linux.

PHP version 5.5 has reached its end of life and is no longer maintained by the PHP project.

These versions are also available as Software Collections in the remi-safe repository.

These versions fix one security bug in mod_php, so updating is strongly recommended.

Version announcements:

Installation: use the Configuration Wizard and choose your version and installation mode.

Replacement of default PHP by version 7.2 installation (simplest):

yum-config-manager --enable remi-php72
yum update php\*

Parallel installation of version 7.2 as Software Collection (x86_64 only):

yum install php72

Replacement of default PHP by version 7.1 installation (simplest):

yum-config-manager --enable remi-php71
yum update

Parallel installation of version 7.1 as Software Collection (x86_64 only):

yum install php71

And soon in the official updates:

To be noted:

  • EL7 RPMs are built using RHEL-7.5
  • EL6 RPMs are built using RHEL-6.10
  • a lot of new extensions are also available, see the PECL extension RPM status page

Information, read:

Base packages (php)

Software Collections (php56 / php70 / php71 / php72)

What is the difference between moderation and censorship?

Posted by Daniel Pocock on September 13, 2018 09:09 PM

FSFE fellows recently started discussing my blog posts about Who were the fellowship? and An FSFE Fellowship Representative's dilemma.

Fellows making posts in support of reform have reported their emails were rejected. Some fellows had CC'd me on their posts to the list and these posts never appeared publicly. These are some examples of responses received by a fellow trying to post on the list:

The list moderation team decided now to put your email address on moderation for one month. This is not censorship.

One fellow forwarded me a rejected message to look at. It isn't obscene, doesn't attack anybody and doesn't violate the code of conduct. The fellow writes:

+1 for somebody to answer the original questions with real answers
-1 for more character assassination

Censors moderators responded to that fellow:

This message is in constructive and unsuited for a public discussion list.

Why would moderators block something like that? In the same thread, they allowed some very personal attack messages in favour of existing management.

Moderation + Bias = Censorship

Even links to the public list archives are giving errors and people are joking that they will only work again after the censors PR team change all the previous emails to comply with the censorship communications policy exposed in my last blog.

Fellows have started noticing that the blog of their representative is not being syndicated on Planet FSFE any more.

Some people complained that my last blog didn't provide evidence to justify my concerns about censorship. I'd like to thank FSFE management for helping me respond to that concern so conclusively with these heavy-handed actions against the community over the last 48 hours.

The collapse of the fellowship described in my earlier blog has been caused by FSFE management decisions. The solutions need to come from the grass roots. A totalitarian crackdown on all communications is a great way to make sure that never happens.

FSFE claims to be a representative of the free software community in Europe. Does this behaviour reflect how other communities operate? How successful would other communities be if they suffocated ideas in this manner?

This is what people see right now trying to follow links to the main FSFE Discussion list archive:

Fedora 28 : The Revel framework with golang.

Posted by mythcat on September 13, 2018 08:33 PM
The development team describes it concisely:
A high productivity, full-stack web framework for the Go language.
In their words:
Revel provides routing, parameter parsing, validation, session/flash, templating, caching, job running, a testing framework, and even internationalization.
I tested it yesterday, and today I will show you how effective it is.
This framework is very easy to install and use on Fedora 28.
I would say it is like the Django framework, with its own templating style.
Let's start with a simple example in a folder I created named gocode:
[mythcat@desk ~]$ mkdir ~/gocode
[mythcat@desk ~]$ export GOPATH=~/gocode
[mythcat@desk ~]$ echo export GOPATH=$GOPATH >> ~/.bash_profile
[mythcat@desk ~]$ cd gocode/
[mythcat@desk gocode]$ go get github.com/revel/revel
[mythcat@desk gocode]$ ls
pkg src
[mythcat@desk gocode]$ go get github.com/revel/cmd/revel
[mythcat@desk gocode]$ ls
bin pkg src
[mythcat@desk gocode]$ export PATH="$PATH:$GOPATH/bin"
[mythcat@desk gocode]$ revel help
DEBUG 19:24:09 revel server.go:27: arguments by adding nil
~ revel! http://revel.github.io
usage: revel command [arguments]

The commands are:

new create a skeleton Revel application
run run a Revel application
build build a Revel application (e.g. for deployment)
package package a Revel application (e.g. for deployment)
clean clean a Revel application's temp files
test run all tests from the command-line
version displays the Revel Framework and Go version

Use "revel help [command]" for more information.
[mythcat@desk gocode]$ ls
bin pkg src
[mythcat@desk gocode]$ revel new myapp
DEBUG 19:38:50 revel server.go:27: RegisterServerEngine: Registered engine
~ revel! http://revel.github.io
Your application is ready:

You can run it with:
revel run myapp
[mythcat@desk gocode]$ revel run myapp
DEBUG 19:39:15 revel server.go:27: RegisterServerEngine: Registered engine
~ revel! http://revel.github.io
Trying to build with myapp (0x0,0x0)
DEBUG 19:39:15 revel module.go:152: Sorted keys section=module keys=module.static
Let's see the source code - in this case the default files: app.go and Index.html .
[mythcat@desk gocode]$ cd src/myapp/app/
[mythcat@desk app]$ ls
controllers init.go routes tmp views
[mythcat@desk app]$ cd controllers/
[mythcat@desk controllers]$ ls
app.go
[mythcat@desk controllers]$ cat app.go
package controllers

import (
	"github.com/revel/revel"
)

type App struct {
	*revel.Controller
}

func (c App) Index() revel.Result {
	return c.Render()
}
The cat command will show the source code of the Index.html file:
[mythcat@desk App]$ cat Index.html
Let's add a golang variable named greeting to app.go and Index.html files:
[mythcat@desk controllers]$ cat app.go
package controllers

import (
	"github.com/revel/revel"
)

type App struct {
	*revel.Controller
}

func (c App) Index() revel.Result {
	greeting := "Fedora and revel framework !"
	return c.Render(greeting)
}
This greeting variable is then added into the Index.html file, in a p tag after "It works!".
Here are two screenshots: one right after the initial install, and one after the change with the greeting variable.

Fedora: LibreOffice remote connection.

Posted by mythcat on September 13, 2018 08:31 PM
I wrote a tutorial about remote connection in LibreOffice, see this article.
The main problem with LibreOffice development is the pace of change.
Right now, the Open remote feature is not working.
The bug is officially recognized and reported, see the page.
They inform us, see comment 31:
  John R Mead 2017-12-17 22:43:03 UTC Well, it still doesn't work with LibreOffice Version: (x64); I'm not providing screenshots this time since it's identical in everyway to my previous attempts. At the end of all the copy/pasting between LibreOffice and my browser to get all the relevent permissions, when I finally make the attempt to connect, I get the same message: The specified device is invalid. 

Current system configuration of potentially relevant software: 

Windows 10 Professional (x64) Version 1709 (build 16299.125) 
Java 9.0.1 
Microsoft - OneDrive version (x64) 
LibreOffice Version: (x64)

It's common to see bugs in open source software; most known bugs are published so that users can avoid problems.
Most such bugs come down to authentication issues and changes in development.

How (not) to pack smart

Posted by Jonathan Dieter on September 13, 2018 05:25 PM

The media center in transit

As those who know me are probably aware, in July, we left Beirut, Lebanon and moved to Skibbereen, Ireland, my wife’s hometown. We’ve spent the last couple of months settling in, and I’ve been looking for a position that will be a good fit.

One item in the “settling in” checklist is the joyful process of setting up my home network. I was able to bring all of my data with me, but, with my background in system administration, making sure that data is safe is very important to me. I’ve set up RAID1 on my media center that doubles as a datastore, and I’ve just finished assembling a Raspberry Pi with an external hard drive as my backup. Once we have decent internet (dependent on us knowing where we’ll live for the next while which is dependent on me knowing where I’ll be working), I’ll also be setting up something on the cloud, most likely with Amazon Glacier.

But the story of bringing the media center computer to Ireland is the one that I would like to share, especially as it wasn’t as straightforward as you might think.

How (not) to pack smart

In Lebanon, I had a media center in the living room, connected to a TV and a nice 5.1 surround sound system. We didn’t watch much TV, but we did like listening to music and having our photos on a random slideshow was a great way of keeping our kids aware of distant family.

When it came time to leave Lebanon, I had originally planned to just bring the hard drives and buy a new desktop. But when it came time to pack, we had extra luggage space, so I changed my mind and decided to bring the motherboard, RAM, graphics card, etc. The question was, how am I going to keep all that safe? If only there was some box designed to protect a computer’s internals! I decided to just bring the computer case. Granted, it was old, large and heavy, but Qatar airways was giving us 30 kg (66 lbs) per bag, so we had the space.

This is the point where I had my epiphany. I consider myself an expert on packing. I’ve spent all of my adult life traveling between Lebanon, the US and Ireland, and I have become quite skilled at getting our belongings from one place to the other without damage.

So, as I was looking at the computer case, it hit me that it’s made out of metal and would be able to withstand the attention of the most careless baggage handler. Why not put all of our fragile goods in it? If I padded it with enough clothing, everything would be snug as a bug! I gave myself a pat on the back for such a brilliant idea and proceeded to fill the case with ceramic bowls, a Starbucks gift mug, my Wii remotes, and various other technology, with plenty of socks and underwear to keep things from rattling around. Then, not wanting anything to fall out the side, I screwed the side cover on. With both screws. And tossed the screwdriver in a different bag.

I showed off my ingenious packing job to my wife, expounding on the fact that her precious bowls were safe in the bowels of the computer case. She rolled her eyes at me (a common occurrence when I share my brilliant ideas with her), and I went off to wrap the case in a blanket and put it in our lightest piece of luggage, a large duffel bag filled with lots of clothing.

Now one or two readers may be leaning back in their chairs, astounded at my brilliance, but I suspect that the majority will have spotted the teensy-weensy little flaw in my cunning plan. I am ashamed to admit that I didn’t spot it until we were actually in the airport.

Beirut’s Rafic Hariri International Airport differs from most in that you clear your first security check before checking in. It was only as we were in the queue that it suddenly occurred to me that airport security might be somewhat unimpressed with my irregular packing scheme.

As I watched the duffel bag go through the scanner, the guy at the machine sat up and looked at me. “What do you have in the bag?” he asked.

“It’s just my computer.”

“Can you please take it out of the bag and run it through the scanner again?” So I opened the duffel bag, dumped half the clothes on the floor, pulled out the computer, unwrapped it from the blanket, and sent it through the x-ray machine again. The guy pointed at a big dark blob on the screen, and asked, “What’s this?”

“I think it’s one of my wife’s ceramic bowls,” I answered.

“Can you please open the computer up and show me?”

“Uh, no, I don’t know where my screwdriver is.”

The security guy at the front of the x-ray machine walked around to look at the screen, and there was an animated discussion as both pointed and debated what they should do. One was of the opinion that only an idiot would try to smuggle something out of the country in a computer case, while the other pointed out (quite rightly) that only an idiot would pack a ceramic bowl in a computer case.

The suspicious security guy then came over to me and asked for my passport. He escorted us to the check-in desks, told my wife and kids to wait there, and escorted me and the computer to the other side of the airport where there was another security checkpoint, and, more importantly, his boss.

“Can you please open the case?” the boss asked, in that calm tone that professional soldiers use just before beating you to a pulp and tossing you into deepest corner of Gitmo.

“Um, I don’t have my screwdriver. I’m sorry,” I answered sheepishly.

He looked at me steadily, said “Ok,” and then wandered away, presumably to find a screwdriver. As I waited for him to return, I started fiddling with the screws. I managed to unscrew the first with my fingers, but couldn’t get the second one undone.

One of the security guys saw what I was doing, and, to my lasting astonishment, handed me a sharp-tipped knife. Granted, it was just a dinner knife, but still… I used the knife to unscrew the final screw and handed the knife back to the security guy. He called the boss back over (he still hadn’t found a screwdriver), and I started to open the case for him.

“Stand back! Don’t touch it!” he ordered.

I stepped back as five security guys surrounded my computer and started to pull out and examine each item. The socks and underwear used as padding went flying everywhere. It was obvious that these guys had decided that I was a smuggler and they were going to catch me!

One guy pulled out a ceramic bowl and held it up to the light, checking for who knows what. Another opened up my Wii remotes, pulled the batteries out of them and checked for anything that shouldn’t be there. A third tried to pull the graphics card out of the computer case, and, when it wouldn’t come, used the flashlight on his phone to see if anything was hidden in the GPU fan.

I watched as the fourth pulled out a souvenir Starbucks mug, removed it from the box, examined the box in detail, checked the mug for hidden compartments, and then put the mug back in the box. The fifth guy then picked up the Starbucks mug and repeated the examination, just in case the fourth guy had missed something.

When it became obvious that I wasn’t trying to smuggle anything illegal out of the country, the security guys gradually drifted away in disappointment. I was left with one guy who handed me my passport, told me to pack my computer back up, and then stood back and watched as I tried to fit everything back into the case.

As I used the dinner knife to put the screws back in, he looked at me, and said, “That’s a very… unusual… way to pack. Why did you do it?”

I gave him the only response I could. “It seemed like a good idea at the time.”

I took my computer back to my family at the check-in desk, packed it into the duffel bag, and checked it to Ireland. When we picked it up in Dublin, everything in the case had survived the journey, and the computer worked perfectly. I still don’t know if using a computer case as a suitcase was a very good idea or a very bad one.

Secure Boot — Fedora, RHEL, and Shim Upstream Maintenance: Government Involvement or Lack Thereof

Posted by Peter Jones on September 13, 2018 02:12 PM

You probably remember when I said some things about Secure Boot in June of 2014. I said there’d be more along those lines, and there is.

So there’s another statement about that here.

I’m going to try to remember to post a message like this once per month or so. If I miss one, keep an eye out, but maybe don’t get terribly suspicious unless I miss several in a row.

Note that there are parts of this chain I’m not a part of, and obviously linux distributions I’m not involved in that support Secure Boot. I encourage other maintainers to offer similar statements for their respective involvement.

Creating Windows templates for virt-builder

Posted by Richard W.M. Jones on September 13, 2018 09:07 AM

virt-builder is a tool for rapidly creating customized Linux images. Recently I’ve added support for Windows although for rather obvious licensing reasons we cannot distribute the Windows templates which would be needed to provide Windows support for everyone. However you can build your own Windows templates as described here and then:

$ virt-builder -l | grep windows
windows-10.0-server      x86_64     Windows Server 2016 (x86_64)
windows-6.2-server       x86_64     Windows Server 2012 (x86_64)
windows-6.3-server       x86_64     Windows Server 2012 R2 (x86_64)
$ virt-builder windows-6.3-server
[   0.6] Downloading: http://xx/builder/windows-6.3-server.xz
[   5.1] Planning how to build this image
[   5.1] Uncompressing
[  60.1] Opening the new disk
[  77.6] Setting a random seed
virt-builder: warning: random seed could not be set for this type of guest
virt-builder: warning: passwords could not be set for this type of guest
[  77.6] Finishing off
                   Output file: windows-6.3-server.img
                   Output size: 10.0G
                 Output format: raw
            Total usable space: 9.7G
                    Free space: 3.5G (36%)

To build a Windows template repository you will need the latest libguestfs sources checked out from https://github.com/libguestfs/libguestfs and you will also need a suitable Windows Volume License, KMS or MSDN developer subscription. Also the final Windows templates are at least ten times larger than Linux templates, so virt-builder operations take correspondingly longer and use lots more disk space.

First download install ISOs for the Windows guests you want to use.

After cloning the latest libguestfs sources, go into the builder/templates subdirectory. Edit the top of the make-template.ml script to set the path which contains the Windows ISOs. You will also possibly need to edit the names of the ISOs later in the script.

Build a template, eg:

$ ../../run ./make-template.ml windows 2k12 x86_64

You’ll need to read the script to understand what the arguments do. The script will ask you for the product key, where you should enter the volume license key or your MSDN key.

Each time you run the script successfully you’ll end up with two files called something like:


The version numbers are Windows internal version numbers.

After you’ve created templates for all the Windows guest types you need, copy them to any (private) web server, and concatenate all the index fragments into the final index file:

$ cat *.index-fragment > index

Finally create a virt-builder repo file pointing to this index file:

# cat /etc/virt-builder/repos.d/windows.conf

You can now create Windows guests in virt-builder. However note they are not sysprepped. We can’t do this because it requires some Windows tooling. So while these guests are good for small tests and similar, they’re not suitable for creating actual Windows long-lived VMs. To do that you will need to add a sysprep.exe step somewhere in the template creation process.

Flock- 2018

Posted by Pooja Yadav on September 12, 2018 01:18 PM
I attended Flock this year, which is the Fedora Project's annual contributor-focused conference. This was my first Flock, and it turned out to be one of the best conferences I have attended so far.

It was a 4-day event, where I got a chance to learn more about Fedora, how things are changing in Fedora, and to meet people face to face. There were many exciting things at Flock besides the talks, like hackfests and workshops, and activities like Yoga, the Candy Swap and the Scavenger Hunt, which made it more interesting.


The day started with the morning Yoga. Then came the first keynote, on the "State of Fedora", where the Fedora Project leaders talked about their objectives and editions. It was really helpful in understanding the new changes and how they are going to affect the community. There were a lot of interesting talks, from which I selected a few, starting with "RHEL, Fedora and CentOS: Solving the Penrose Triangle" by Josh Boyer and Brendan Conoboy. After this I attended a talk on "Desktop testing using dogtail and behave" by Petr Schindler, as I wanted to know more about desktop testing and the framework he was using. After the talk we also had a great discussion about the framework; it was really great to learn more about desktop testing.

After this there was the Candy Swap event, organised by Justin Flory, where people from different countries brought candies; I took some Indian candies and sweets to share with others. It was a great event, and I got to taste lots of candies from various regions.


It started with the keynote "The Power of One: For the good of community" by Rebecca Fernandez, which was really motivating, as she talked about how we can bring about changes that are good for the community, and how to make the change in the community that we wish for. Then I attended a workshop on "Imposter Syndrome and Unconscious Bias Training" by Jona Azizaj, Justin Flory, Bee Padalkar and Amita Sharma, as I wanted to understand how these can be identified and overcome. This was an interactive workshop where we checked whether we suffer from the syndrome or not, and learned how to manage it in our lives and workplaces.

Then I attended a talk on "Fedora CI: Process, Progress and Infrastructure" by bstinson, as I have started contributing to CI and want to know more about the process. After this I attended a talk on "Scalable Fedora Design" by Abhishek Sharma, where he shared his knowledge on design systems and how Fedora can embrace a design system to ensure consistency and manage design debt.

The day ended with the Scavenger Hunt, which was organised in downtown Dresden, where we got to know some interesting facts and history of Dresden. It was really fun going around the city and completing the given tasks. Thanks to the organizers Dominika Bula, Brian Exelbierd and Jennifer Madriaga. I really enjoyed acting out a scene from the Bollywood movie "Sholay".


We had a hackfest on "i18n development and testing" post-lunch. In the morning I had a discussion with Lukas about OpenQA, as I needed to discuss some issues. He told me how they are using it, and about the framework he built to create needles. It was really great to learn that with this framework we can create needles and use them for OpenQA without using the OpenQA needle editor. As I am using OpenQA for automation, it was a great help. We had a great discussion about OpenQA which cleared up many of my doubts.
Post lunch we had our hackfest, where we discussed the following topics:

  • Upstream First Testing
  • Automation
  • New Features(Langpacks, Fonts etc)
It was great to have this hackfest, as we are now clear on many things and know how to proceed further with the Upstream First and CI initiatives.

After this we went for dinner at the Roller Coaster Restaurant, where everything came rolling to us; it was fun :P. There too I had discussions with a lot of people. I met a guy from openSUSE with whom I talked about OpenQA, and also had a healthy conversation with Dodji about exercise, diet and health. It was a great evening spent well with great people.


On the last day I spent most of my time socializing with different people from different domains. I also attended the lightning talks to gather more knowledge in less time; all of the lightning talks were awesome. After this we had the group photo, followed by the wrap-up session.

The conference was really good, as I got to know people who can be bugged about specific issues and with whom I can discuss various topics. I also really enjoyed the various events at Flock and stayed fit by attending the morning Yoga sessions. Thanks, Suzanne Yeghiayan, for taking these sessions.

The next day, we (Pravin, Sinny, Mike, Praveen, Sayan) went to Saxon Switzerland for a trek; it was a great trek with you guys. Thanks, Mike, for taking us to such a beautiful place.

Thanks everyone for making this an awesome conference.

How to turn on an LED with Fedora IoT

Posted by Fedora Magazine on September 12, 2018 08:00 AM

Do you enjoy running Fedora, containers, and have a Raspberry Pi? What about using all three together to play with LEDs? This article introduces Fedora IoT and shows you how to install a preview image on a Raspberry Pi. You’ll also learn how to interact with GPIO in order to light up an LED.

What is Fedora IoT?

Fedora IoT is one of the current Fedora Project objectives, with a plan to become a full Fedora Edition. The result will be a system that runs on ARM (aarch64 only at the moment) devices such as the Raspberry Pi, as well as on the x86_64 architecture.

Fedora IoT is based on OSTree, like Fedora Silverblue and the former Atomic Host.

Download and install Fedora IoT

The official Fedora IoT images are coming with the Fedora 29 release. However, in the meantime you can download a Fedora 28-based image for this experiment.

You have two options to install the system: either flash the SD card using a dd command, or use a fedora-arm-installer tool. The Fedora Wiki offers more information about setting up a physical device for IoT. Also, remember that you might need to resize the third partition.

Once you insert the SD card into the device, you’ll need to complete the installation by creating a user. This step requires either a serial connection, or a HDMI display with a keyboard to interact with the device.

When the system is installed and ready, the next step is to configure a network connection. Log in to the system with the user you have just created, and choose one of the following options:

  • If you need to configure your network manually, run a command similar to the following. Remember to use the right addresses for your network:
    $ nmcli connection add con-name cable ipv4.addresses <address/prefix> \
      ipv4.gateway <gateway> connection.autoconnect true \
      ipv4.dns "<dns1>,<dns2>" type ethernet ifname eth0 ipv4.method manual
  • If there’s a DHCP service on your network, run a command like this:
    $ nmcli con add type ethernet con-name cable ifname eth0

The GPIO interface in Fedora

Many tutorials about GPIO on Linux focus on the legacy GPIO sysfs interface. This interface is deprecated, and the upstream Linux kernel community plans to remove it completely, due to security and other issues.

The Fedora kernel is already compiled without this legacy interface, so there’s no /sys/class/gpio on the system. This tutorial uses a new character device /dev/gpiochipN provided by the upstream kernel. This is the current way of interacting with GPIO.

To interact with this new device, you need to use a library and a set of command line interface tools. The common command line tools such as echo or cat won’t work with this device.

You can install the CLI tools by installing the libgpiod-utils package. A corresponding Python library is provided by the python3-libgpiod package.

Creating a container with Podman

Podman is a container runtime with a command line interface similar to Docker. The big advantage of Podman is it doesn’t run any daemon in the background. That’s especially useful for devices with limited resources. Podman also allows you to start containerized services with systemd unit files. Plus, it has many additional features.

We’ll create a container in these two steps:

  1. Create a layered image containing the required packages.
  2. Create a new container starting from our image.

First, create a file Dockerfile with the content below. This tells podman to build an image based on the latest Fedora image available in the registry. Then it updates the system inside and installs some packages:

FROM fedora:latest
RUN  dnf -y update
RUN  dnf -y install libgpiod-utils python3-libgpiod

You have created a build recipe of a container image based on the latest Fedora with updates, plus packages to interact with GPIO.

Now, run the following command to build your base image:

$ sudo podman build --tag fedora:gpiobase -f ./Dockerfile

You have just created your custom image with all the bits in place. You can reuse this base container image as many times as you want without installing the packages every time you run it.

Working with Podman

To verify the image is present, run the following command:

$ sudo podman images
REPOSITORY                 TAG        IMAGE ID       CREATED          SIZE
localhost/fedora           gpiobase   67a2b2b93b4b   10 minutes ago  488MB
docker.io/library/fedora   latest     c18042d7fac6   2 days ago     300MB

Now, start the container and do some actual experiments. Containers are normally isolated and don't have access to the host system, including the GPIO interface. Therefore, you need to mount it inside while starting the container. To do this, use the --device option in the following command:

$ sudo podman run -it --name gpioexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash

You are now inside the running container. Before you move on, here are some more container commands. For now, exit the container by typing exit or pressing Ctrl+D.

To list the existing containers, including those not currently running, such as the one you just created, run:

$ sudo podman container ls -a
CONTAINER ID  IMAGE                      COMMAND    CREATED         STATUS                             PORTS  NAMES
64e661d5d4e8  localhost/fedora:gpiobase  /bin/bash  37 seconds ago  Exited (0) Less than a second ago         gpioexperiment

To create a new container, run this command:

$ sudo podman run -it --name newexperiment --device=/dev/gpiochip0 localhost/fedora:gpiobase /bin/bash

Delete it with the following command:

$ sudo podman rm newexperiment

Turn on an LED

Now you can use the container you already created. If you exited from the container, start it again with this command:

$ sudo podman start -ia gpioexperiment

As already discussed, you can use the CLI tools provided by the libgpiod-utils package in Fedora. To list the available GPIO chips, run:

$ gpiodetect
gpiochip0 [pinctrl-bcm2835] (54 lines)

To get the list of the lines exposed by a specific chip, run:

$ gpioinfo gpiochip0

Notice there’s no correlation between the number of physical pins and the number of lines printed by the previous command. What’s important is the BCM number, as shown on pinout.xyz. It is not advised to play with the lines that don’t have a corresponding BCM number.

Now, connect an LED to the physical pin 40, that is BCM 21. Remember: the shorter leg of the LED (the negative leg, called the cathode) must be connected to a GND pin of the Raspberry Pi with a 330 ohm resistor, and the long leg (the anode) to the physical pin 40.

To turn the LED on, run the following command. It will stay on until you press Ctrl+C:

$ gpioset --mode=wait gpiochip0 21=1

To light it up for a certain period of time, add the -b (run in the background) and -s NUM (how many seconds) parameters, as shown below. For example, to light the LED for 5 seconds, run:

$ gpioset -b -s 5 --mode=time gpiochip0 21=1

Another useful command is gpioget. It gets the status of a pin (high or low), and can be useful to detect buttons and switches.

Closeup of LED connection with GPIO


You can also play with LEDs using Python — there are some examples here. And you can also use the i2c devices inside the container as well. In addition, Podman is not strictly related to this Fedora edition. You can install it on any existing Fedora Edition, or try it on the two new OSTree-based systems in Fedora: Fedora Silverblue and Fedora CoreOS.

Fedora at FrOSCon 2018 – Event report

Posted by Christian Dersch on September 11, 2018 06:23 PM
As every year, FrOSCon took place in Sankt Augustin (near Bonn) on the last weekend of August (25th/26th); it is one of Germany's biggest FOSS events. Of course, Fedora should not miss such an event. Therefore a team of Fedora ambassadors and other contributors joined the event to present Fedora: Aleksandra Fedorova, Raphael Groner, Till Maas and me.

Fedora booth at FrOSCon

We presented Fedora 28 Workstation running on a laptop, showing the latest Fedora experience, as well as a Raspberry Pi 3 running Fedora and controlling the vehicle it was mounted on. In addition we provided some Fedora swag, including stickers, pens, the nice "Fedora loves Python" flyers and the Fedora Workstation beginners guide. In the case of the beginners guide, we learned that there are still many people who do not feel comfortable with the English language and would prefer a local (in this case German) translation to get started. In general, people gave good feedback on Fedora 😉

Raphael, Till and Aleksandra

Aleksandra and Till are local Fedora ambassadors, so Aleksandra in particular promoted the recently founded Fedora User Group NRW at FrOSCon, to get in touch with more people and find new members.

Me and Aleksandra, Fedora Raspberry Pi vehicle on the right

FrOSCon 2018 was another successful FOSS event: we met many Fedora users, hopefully gained some new ones, and also met developers of other projects like Rust and openSUSE.

See you again at FrOSCon in 2019 😉

The Backtracking ULP Incident of 2018

Posted by Erik Erlandson on September 11, 2018 02:01 PM

This week I finally started applying my new convex optimization library to solve for interpolating splines with monotonic constraints. Things seemed to be going well. My convex optimization was passing unit tests. My monotone splines were passing their unit tests too. I cut an initial release, and announced it to the world.

Because Murphy rules my world, it was barely an hour later that I was playing around with my new toys in a REPL, and when I tried splining an example data set my library call went into an infinite loop:

```java
// It looks mostly harmless:
double[] x = { 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0 };
double[] y = { 0.0, 0.15, 0.05, 0.3, 0.5, 0.7, 0.95, 0.98, 1.0 };
MonotonicSplineInterpolator interpolator = new MonotonicSplineInterpolator();
PolynomialSplineFunction s = interpolator.interpolate(x, y);
```

In addition to being a bit embarrassing, it was also a real head-scratcher. There was nothing odd about the data I had just given it. In fact it was a small variation of a problem it had just solved a few seconds prior.

There was nothing to do but put my code back up on blocks and break out the print statements. I ran my problem data set and watched it spin. Fast forward a half hour or so, and I localized the problem to a bit of code that does the "backtracking" phase of a convex optimization:

```java
for (double t = 1.0; t >= epsilon; t *= beta) {
    tx = x.add(xDelta.mapMultiply(t));
    tv = convexObjective.value(tx);
    if (tv == Double.POSITIVE_INFINITY) continue;
    if (tv <= v + t * alpha * gdd) {
        foundStep = true;
        break;
    }
}
```

My infinite loop was happening because my backtracking loop above was "succeeding" -- that is, reporting it had found a forward step -- but not actually moving forward along its vector. And the reason turned out to be that my test tv <= v + t*alpha*gdd was succeeding because v + t*alpha*gdd was evaluating to just v, and I effectively had tv == v.

I had been bitten by one of the oldest floating-point fallacies: forgetting that x + y can equal x if y gets smaller than the Unit in the Last Place (ULP) of x.
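The effect is easy to reproduce. Here is a minimal standalone sketch (the class and variable names are mine, not from the library above) showing a small increment vanishing below the ULP of a larger value:

```java
public class UlpDemo {
    public static void main(String[] args) {
        double v = 1.0e16;
        // ULP(1.0e16) is 2.0: doubles this large cannot represent
        // a difference smaller than 2.0
        System.out.println(Math.ulp(v));

        double tiny = 0.5;
        // tiny is smaller than ULP(v), so the addition is a no-op
        System.out.println((v + tiny) == v);  // true
    }
}
```

This is exactly the situation the backtracking loop hits once t*alpha*gdd shrinks below ULP(v).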

This was an especially evil bug, as it very frequently doesn't manifest. My unit testing in two libraries failed to trigger it. I have since added the offending data set to my splining unit tests, in case the code ever regresses somehow.

Now that I understood my problem, it turns out that I could use this to my advantage, as an effective test for local convergence. If I can't find a step size that reduces my local objective function by an amount measurable to floating point resolution, then I am as good as converged at this stage of the algorithm. I re-wrote my code to reflect this insight, and added some annotations so I don't forget what I learned:

```java
for (double t = 1.0; t > 0.0; t *= beta) {
    tx = x.add(xDelta.mapMultiply(t));
    tv = convexObjective.value(tx);
    if (Double.isInfinite(tv)) {
        // this is barrier convention for "outside the feasible domain",
        // so try a smaller step
        continue;
    }
    double vtt = v + (t * alpha * gdd);
    if (vtt == v) {
        // (t)(alpha)(gdd) is less than ULP(v):
        // further tests for improvement are going to fail, so halt
        break;
    }
    if (tv <= vtt) {
        // This step resulted in an improvement, so halt with success
        foundStep = true;
        break;
    }
}
```

I tend to pride myself on being aware that floating point numerics are a leaky abstraction, and the various ways these leaks can show up in computations, but pride goeth before a fall, and after all these years I can still burn myself! It never hurts to be reminded that you can never let your guard down with floating point numbers, and unit testing can never guarantee correctness. That goes double for numeric methods!

The Commons Clause doesn't help the commons

Posted by Matthew Garrett on September 10, 2018 11:26 PM
The Commons Clause was announced recently, along with several projects moving portions of their codebase under it. It's an additional restriction intended to be applied to existing open source licenses with the effect of preventing the work from being sold[1], where the definition of being sold includes being used as a component of an online pay-for service. As described in the FAQ, this changes the effective license of the work from an open source license to a source-available license. However, the site doesn't go into a great deal of detail as to why you'd want to do that.

Fortunately one of the VCs behind this move wrote an opinion article that goes into more detail. The central argument is that Amazon make use of a great deal of open source software and integrate it into commercial products that are incredibly lucrative, but give little back to the community in return. By adopting the commons clause, Amazon will be forced to negotiate with the projects before being able to use covered versions of the software. This will, apparently, prevent behaviour that is not conducive to sustainable open-source communities.

But this is where things get somewhat confusing. The author continues:

Our view is that open-source software was never intended for cloud infrastructure companies to take and sell. That is not the original ethos of open source.

which is a pretty astonishingly unsupported argument. Open source code has been incorporated into proprietary applications without giving back to the originating community since before the term open source even existed. MIT-licensed X11 became part of not only multiple Unixes, but also a variety of proprietary commercial products for non-Unix platforms. Large portions of BSD ended up in a whole range of proprietary operating systems (including older versions of Windows). The only argument in favour of this assertion is that cloud infrastructure companies didn't exist at that point in time, so they weren't taken into consideration[2] - but no argument is made as to why cloud infrastructure companies are fundamentally different to proprietary operating system companies in this respect. Both took open source code, incorporated it into other products and sold them on without (in most cases) giving anything back.

There's one counter-argument. When companies sold products based on open source code, they distributed it. Copyleft licenses like the GPL trigger on distribution, and as a result selling products based on copyleft code meant that the community would gain access to any modifications the vendor had made - improvements could be incorporated back into the original work, and everyone benefited. Incorporating open source code into a cloud product generally doesn't count as distribution, and so the source code disclosure requirements don't trigger. So perhaps that's the distinction being made?

Well, no. The GNU Affero GPL has a clause that covers this case - if you provide a network service based on AGPLed code then you must provide the source code in a similar way to if you distributed it under a more traditional copyleft license. But the article's author goes on to say:

AGPL makes it inconvenient but does not prevent cloud infrastructure providers from engaging in the abusive behavior described above. It simply says that they must release any modifications they make while engaging in such behavior.

IE, the problem isn't that cloud providers aren't giving back code, it's that they're using the code without contributing financially. There's no difference between what cloud providers are doing now and what proprietary operating system vendors were doing 30 years ago. The argument that "open source" was never intended to permit this sort of behaviour is simply untrue. The use of permissive licenses has always allowed large companies to benefit disproportionately when compared to the authors of said code. There's nothing new to see here.

But that doesn't mean that the status quo is good - the argument for why the commons clause is required may be specious, but that doesn't mean it's bad. We've seen multiple cases of open source projects struggling to obtain the resources required to make a project sustainable, even as many large companies make significant amounts of money off that work. Does the commons clause help us here?

As hinted at in the title, the answer's no. The commons clause attempts to change the power dynamic of the author/user role, but it does so in a way that's fundamentally tied to a business model and in a way that prevents many of the things that make open source software interesting to begin with. Let's talk about some problems.

The power dynamic still doesn't favour contributors

The commons clause only really works if there's a single copyright holder - if not, selling the code requires you to get permission from multiple people. But the clause does nothing to guarantee that the people who actually write the code benefit, merely that whoever holds the copyright does. If I rewrite a large part of a covered work and that code is merged (presumably after I've signed a CLA that assigns a copyright grant to the project owners), I have no power in any negotiations with any cloud providers. There's no guarantee that the project stewards will choose to reward me in any way. I contribute to them but get nothing back in return - instead, my improved code allows the project owners to charge more and provide stronger returns for the VCs. The inequity has shifted, but individual contributors still lose out.

It discourages use of covered projects

One of the benefits of being able to use open source software is that you don't need to fill out purchase orders or start commercial negotiations before you're able to deploy. Turns out the project doesn't actually fill your needs? Revert it, and all you've lost is some development time. Adding additional barriers is going to reduce uptake of covered projects, and that does nothing to benefit the contributors.

You can no longer meaningfully fork a project

One of the strengths of open source projects is that if the original project stewards turn out to violate the trust of their community, someone can fork it and provide a reasonable alternative. But if the project is released with the commons clause, it's impossible to sell any forked versions - anyone who wishes to do so would still need the permission of the original copyright holder, and they can refuse that in order to prevent a fork from gaining any significant uptake.

It doesn't inherently benefit the commons

The entire argument here is that the cloud providers are exploiting the commons, and by forcing them to pay for a license that allows them to make use of that software the commons will benefit. But there's no obvious link between these things. Maybe extra money will result in more development work being done and the commons benefiting, but maybe extra money will instead just result in greater payout to shareholders. Forcing cloud providers to release their modifications to the wider world would be of benefit to the commons, but this is explicitly ruled out as a goal. The clause isn't inherently incompatible with this - the negotiations between a vendor and a project to obtain a license to be permitted to sell the code could include a commitment to provide patches rather than money, for instance, but the focus on money makes it clear that this wasn't the authors' priority.

What we're left with is a license condition that does nothing to benefit individual contributors or other users, and costs us the opportunity to fork projects in response to disagreements over design decisions or governance. What it does is ensure that a range of VC-backed projects are in a better position to improve their returns, without any guarantee that the commons will be left better off. It's an attempt to solve a problem that's existed since before the term "open source" was even coined, by simply layering on a business model that's also existed since before the term "open source" was even coined[3]. It's not anything new, and open source derives from an explicit rejection of this sort of business model.

That's not to say we're in a good place at the moment. It's clear that there is a giant level of power disparity between many projects and the consumers of those projects. But we're not going to fix that by simply discarding many of the benefits of open source and going back to an older way of doing things. Companies like Tidelift[4] are trying to identify ways of making this sustainable without losing the things that make open source a better way of doing software development in the first place, and that's what we should be focusing on rather than just admitting defeat to satisfy a small number of VC-backed firms that have otherwise failed to develop a sustainable business model.

[1] It is unclear how this interacts with licenses that include clauses that assert you can remove any additional restrictions that have been applied
[2] Although companies like Hotmail were making money from running open source software before the open source definition existed, so this still seems like a reach
[3] "Source available" predates my existence, let alone any existing open source licenses
[4] Disclosure: I know several people involved in Tidelift, but have no financial involvement in the company


ASG! 2018 Tickets

Posted by Lennart Poettering on September 10, 2018 10:00 PM

All Systems Go! 2018 Tickets Selling Out Quickly!

Buy your tickets for All Systems Go! 2018 soon, they are quickly selling out! The conference takes place on September 28-30, in Berlin, Germany, in a bit over two weeks.

Why should you attend? If you are interested in low-level Linux userspace, then All Systems Go! is the right conference for you. It covers all topics relevant to foundational open-source Linux technologies. For details on the covered topics see our schedule for day #1 and for day #2.

For more information please visit our conference website!

See you in Berlin!

Unified Mailboxes in KMail

Posted by Daniel Vrátil on September 10, 2018 07:48 PM

Today KMail gained a cool new feature that has been repeatedly requested in last year's user survey as well as on forums and social networks: unified mailboxes.

Unified mailboxes offer not only a unified inbox (a single "Inbox" folder showing emails from the inboxes of all your accounts) but also unified sent and drafts folders by default. And we did not stop there: you can create completely custom unified mailboxes consisting of any folders you choose. You can even customize the default ones (for example, exclude an inbox from a particular account).

Some obligatory screenshots:

The feature will be present in the December release of KDE Applications.


Do you want to help us bring more cool features like this to Kontact?

Then take a look at some of the junior jobs that we have! They are mostly simple programming tasks that don't require any knowledge of Akonadi or all the complexities of Kontact. Feel free to pick any task from the list and reach out to us! We'll be happy to guide you. Read more here…

New badge: F29 i18n Test Day Participant !

Posted by Fedora Badges on September 10, 2018 11:31 AM
F29 i18n Test Day Participant: You helped to test Fedora 29 i18n features