Fedora People

Minor service disruption

Posted by Fedora Infrastructure Status on September 19, 2017 02:47 PM
Service 'Mailing Lists' now has status: minor: mailing list email sending had been down since late Sunday. The cause has been fixed, and emails are going out now.

F26-20170918 Updated Live isos released

Posted by Ben Williams on September 19, 2017 01:30 PM

The Fedora Respins SIG is pleased to announce the latest release of the Updated Fedora 26 Live ISOs, carrying the 4.12.13-300 kernel, which includes fixes for the following CVEs:

– Fix CVE-2017-12154 (rhbz 1491224 1491231)

– Fix CVE-2017-12153 (rhbz 1491046 1491057)

– Fix CVE-2017-1000251 (rhbz 1489716 1490906) (BlueBorne)

These can be found at http://tinyurl.com/live-respins. Seeders are welcome and encouraged; however, adding additional trackers is strictly prohibited. These ISOs save about 850M of updates on new installs. We would also like to thank the following IRC nicks for helping test these ISOs: brain83, dowdle, linuxmodder, vwbusguy, short-bike, Southern_Gentlem.

Launching Pipewire!

Posted by Christian F.K. Schaller on September 19, 2017 01:18 PM

In quite a few blog posts I have been referencing Pipewire, our new Linux infrastructure piece to handle multimedia under Linux better. Well, we are finally ready to formally launch Pipewire as a project, and have created a Pipewire website and logo.

Pipewire logo

To give you all some background, Pipewire is the latest creation of GStreamer co-creator Wim Taymans. The original reason it was created was that we realized that, as desktop applications moved towards primarily being shipped as containerized Flatpaks, we would need something for video similar to what PulseAudio was doing for audio. As part of his job here at Red Hat, Wim had already been contributing to PulseAudio for a while, including implementing a new security model for PulseAudio to ensure we could securely have containerized applications output sound through it. So he set out to write Pipewire, although initially the name he used was PulseVideo. As he was working out the core design of Pipewire, he came to the conclusion that designing it to only handle video would be a mistake, as a major challenge he was familiar with from working on GStreamer was ensuring perfect audio and video synchronisation. If both audio and video could be routed through the same media daemon, then ensuring they worked well together would be a lot simpler, and frameworks such as GStreamer would need to do a lot less heavy lifting to make it work. So just before we started sharing the code publicly, we renamed the project to Pinos, after Pinos de Alhaurín, a small town close to where Wim lives in southern Spain. In retrospect, Pinos was probably not the world's best name :)

Anyway, as work progressed, Wim decided to also take a look at JACK, as pro audio was a use case PulseAudio had never tried to address. We felt that if Pipewire could support the pro-audio use case in addition to consumer-level audio and video, it would improve our multimedia infrastructure significantly and make pro audio a first-class citizen on the Linux desktop. Of course, as the scope grew, the development time got longer too.

Another major use case for Pipewire was that we knew that, with the migration to Wayland, we would need a new mechanism to handle screen capture, as the way it was done under X was very insecure. So Jonas Ådahl started working on an API we could support in the compositor, with Pipewire carrying the output. This is meant to cover everything from single-frame capture like screenshots, to local desktop recording, to remoting protocols. It is important to note that we have not just implemented support for a specific protocol like RDP or VNC; we ensured there is an advanced infrastructure in place that any protocol can be built on top of. For instance, we will be working with the SPICE team here at Red Hat to ensure SPICE can take advantage of Pipewire and the new API. We will also ensure Chrome and Firefox support this, so that you can share your Wayland desktop through systems such as Blue Jeans.

Where we are now
So after multiple years of development we are now landing Pipewire in Fedora Workstation 27. This initial version is video only as that is the most urgent thing we need supported for Flatpaks and Wayland. So audio is completely unaffected by this for now and rolling that out will require quite a bit of work as we do not want to risk breaking audio on your system as a result of this change. We know that for many the original rollout of PulseAudio was painful and we do not want a repeat of that history.

So I strongly recommend grabbing the Fedora Workstation 27 beta to test Pipewire, and check out the new website at Pipewire.org and the initial documentation at the Pipewire wiki. Especially interesting are probably the pages that will eventually outline our plans for handling the PulseAudio and JACK use cases.

If you are interested in Pipewire, please join us on IRC in #pipewire on freenode. Also, if things go as planned, Wim will be on Linux Unplugged tonight talking to Chris Fisher and the Unplugged crew about Pipewire, so tune in!

New badge: F27 i18n Test Day Participant !

Posted by Fedora Badges on September 19, 2017 08:18 AM
You helped test i18n features in Fedora 27! Thanks!

Take part in the internationalisation Test Day

Posted by Charles-Antoine Couret on September 19, 2017 06:00 AM

Today, Tuesday 19 September, is a day dedicated to a specific kind of testing: the internationalisation of Fedora. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to surface as many problems as possible.

It also provides a list of specific tests to run. You simply follow them, compare your result with the expected result, and report it.

What does this testing involve?

As with every Fedora release, updating its tools often brings new strings to translate and new tools for language support (Asian languages in particular).

To encourage the use of Fedora in every country of the world, it is best to make sure that everything related to Fedora's internationalisation is tested and works. In particular, part of it must already work from the installation live media (that is, without updates).

Today's tests cover:

  • ibus working correctly for keyboard input handling;
  • font customisation;
  • automatic installation of language packs for installed software, based on the system language;
  • working default translations of applications;
  • the default Chinese Serif fonts (a Fedora 27 change);
  • testing libpinyin 2.1 for fast Pinyin Chinese input (a Fedora 27 change).

Of course, given these criteria, unless you know a Chinese language, not all of the tests can be carried out. But as French speakers, many of these issues affect us, and reporting problems matters: no other language community will identify integration problems with the French language for us.

How to take part?

You can go to the test page to list the available tests and report your results. The wiki page summarises how the day is organised.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it must be reported on Bugzilla. If you do not know how, do not hesitate to consult the corresponding documentation.

Finally, even though a specific day is dedicated to these tests, it is perfectly fine to run them a few days later! The results will still be broadly relevant.

Writing LaTeX well in Vim

Posted by Ankur Sinha "FranciscoD" on September 18, 2017 11:40 PM

Vim is a great text editor if one takes a bit of time to learn how to use it properly. There's plenty of documentation on how to use Vim correctly and efficiently, so I shan't cover that here. vimtutor is an excellent resource to begin with.

Similarly, LaTeX is a brilliant documentation system, especially for scientific writing, if one takes the time to learn it. Unlike the usual Microsoft Word type systems, LaTeX is a set of commands/macros. Once the document is written using these, it must be compiled to produce a PDF document. It may appear daunting at first, but once one is familiar with it, it makes writing a breeze. Now, there are editors specially designed for LaTeX, but given that I use Vim for about all my writing, I use it for LaTeX too.

On Fedora, you can install Vim using DNF: sudo dnf install vim-enhanced vim-X11. I install the X11 package too to use the system clipboard.

LaTeX tools

To begin with, there are a few command line tools one can use in addition to the necessary latex, pdflatex, bibtex, biber, and so on:

  • latexmk is a great tool that figures out the compilation sequence required to generate the document, and runs it for you.
  • lacheck and chktex are both linters for LaTeX that make writing a lot easier.
  • detex strips a tex document of LaTeX commands to produce only the text bits.
  • diction and style give the author an idea of the readability of the text.

One can use any text editor and then these utilities to improve their LaTeX writing experience.
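Strung together, these tools make a nice command-line loop. A sketch, assuming a document called paper.tex (a hypothetical name) and that the tools are installed:

```shell
# Build the PDF; latexmk works out how many latex/bibtex/biber runs are needed
latexmk -pdf paper.tex

# Lint the source for common LaTeX mistakes
lacheck paper.tex
chktex paper.tex

# Strip the markup, then run readability checks on the remaining prose
detex paper.tex > paper.txt
style paper.txt
diction paper.txt

# Remove the auxiliary files generated during compilation
latexmk -c
```

latexmk -pvc -pdf paper.tex will additionally watch the file and rebuild on every save.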

On Fedora, install these with DNF: sudo dnf install latexmk /usr/bin/lacheck /usr/bin/chktex /usr/bin/detex diction. (Yes, you can tell DNF what file you want to install too!)

Built-in Vim features

Vim already contains quite a few features that make writing quite easy:

  • Omni completion provides good suggestions based on the text under the cursor.
  • There's in-built spell checking already.
  • Folding logical bits makes the document easier to read and navigate through.
  • Syntax highlighting makes it a lot easier to read code by marking different commands in different colours.
  • There are different flavours of linenumbers that make moving about a document much simpler.
  • At some point, the conceal feature was added, which further improves the readability of documents.
  • Buffers, tabs, windows are available in Vim too, of course.
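Most of these can be switched on with a handful of settings. A minimal sketch for a LaTeX buffer, using only stock Vim options (values are a matter of taste):

```vim
filetype plugin indent on   " enables omni completion and filetype plug-ins
syntax enable               " syntax highlighting

set spell spelllang=en_gb   " built-in spell checking
set number relativenumber   " hybrid line numbers
set foldmethod=syntax       " fold on syntactic regions
set conceallevel=2          " turn the conceal feature on

" the stock tex syntax script controls LaTeX folding and concealment:
let g:tex_fold_enabled = 1
let g:tex_conceal = 'abdmg' " conceal accents, bold, delimiters, math, Greek
```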

Vim plug-ins

There are a lot of Vim plug-ins that extend some functionality or the other. The simplest way to install plug-ins is to use Vundle. Here are some plug-ins that I use. They're not all specific to LaTeX.

  • Fastfold makes folding faster.
  • vim-polyglot provides better syntax highlighting for many languages.
  • vim-airline provides an excellent, informative status line.
  • tagbar lists sections (tags in general) in a different pane.
  • vim-colors-solarized provides the solarized themes for Vim.
  • vimtex provides commands to quickly compile LaTeX files, complete references, citations, navigate quicker, view the generated files, and so on.
  • ultisnips provides lots of snippets for many languages, including LaTeX. Get the snippets from the vim-snippets plug-in.
  • YouCompleteMe is a completion engine that supports many languages. Remember that this one needs to be compiled!
  • Syntastic provides syntax checkers for many languages, including LaTeX.

I've also used vim-latex in the past and it's very good. However, since I have other plug-ins that provide the same functionality for many other languages too, I'm no longer using it. Worth a go, though.
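With Vundle installed, declaring a set of plug-ins like the ones above is only a few lines in ~/.vimrc. A minimal sketch (the GitHub paths are the plug-ins' usual upstream locations):

```vim
set nocompatible
filetype off
set rtp+=~/.vim/bundle/Vundle.vim
call vundle#begin()

Plugin 'VundleVim/Vundle.vim'   " let Vundle manage itself
Plugin 'lervag/vimtex'
Plugin 'SirVer/ultisnips'
Plugin 'honza/vim-snippets'
Plugin 'vim-syntastic/syntastic'
Plugin 'vim-airline/vim-airline'

call vundle#end()
filetype plugin indent on
```

Run :PluginInstall inside Vim to fetch everything.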

An example document

The image below shows a LaTeX file open in Vim with different plug-ins in action:

Screenshot of Vim with a LaTeX file open showing various features.
  • On top, one can see the open buffer. Only one buffer is open at the moment.
  • In the left hand side margin, one can see the fold indicators.
  • The S> bit is an indicator from the linter that Syntastic uses, showing a warning or an error.
  • The line numbers are also visible in the left margin. Since I am in insert mode, they're just plain line numbers. Once one leaves insert mode, they change to relative.
  • On line 171, the conceal feature shows Greek symbols instead of their LaTeX commands.
  • Syntax highlighting is clearly visible. The commands have different colours. This is the solarized dark theme, of course.
  • The "pop-up" shows Ultisnips at work. Here, I'm looking at adding a new equation environment.
  • Underneath the pop up, the dashed line is a folded section. The + symbol in the left margin implies that it is folded.
  • In the status line, one can see that spell check is enabled, and that I'm using the en_gb language.
  • Next, the git status, and the git branch I'm in. That's the vim-fugitive plug-in at work.
  • Then, the filetype, the encoding, the number of words and so on provided by the airline plug-in.

Neat, huh? There is a lot more there that isn't easy to show in a screenshot. For example, \ll will compile the LaTeX file; \lv opens the generated PDF file in a PDF viewer, Evince in my case; \lc will clean the directory of any temporary files that were generated while compiling the document.

I keep all my vimfiles on GitHub. Feel free to take a look and derive your own. I tweak my configuration each time I find something new, so it may change rather frequently. Remember to read the documentation for whatever plug-ins you use. They provide lots of options, shortcuts, and other commands, and sometimes setting them up incorrectly can cause Vim to behave in unexpected ways.

TL;DR: Use Vim, and use LaTeX!!

AnsibleFest SF 2017

Posted by Adam Miller on September 18, 2017 10:19 PM

AnsibleFest was amazing; it always is. This was my third one, and it's always an event I look forward to attending. The Ansible Events Team does an absolutely stellar job of putting things together, and I'm extremely happy I was not only able to attend but was also accepted as a speaker.

Kick Off and Product Announcements

The event kicked off with some really great product announcements, some interesting bits about Ansible Tower and the newly announced Ansible Engine.

Ansible AWX

Ansible AWX logo

As an avid fan of Open Source Software, the announcement and immediate release of Ansible AWX was the headliner of the event for me. This is the open source upstream to Ansible Tower that Red Hat made the commitment to release once Ansible was acquired in accordance with their continued commitment to Open Source. If you live in Ansible user or contributor land, you know this is something that's been a hot topic for quite some time and I'm so glad it's been launched officially. I've been learning Django over the last week so I can start contributing. Looking forward to it.

Ansible Community Leader and Red Hat CEO Fireside Chat

Fireside Chat with Robyn and Jim

Immediately following the Ansible AWX announcement was a fireside chat between Ansible Community Leader Robyn Bergeron (previously the Fedora Project Leader) and Red Hat CEO Jim Whitehurst, discussing market trends in infrastructure automation, the ability to deliver faster and more rapidly, and the challenges businesses are having with the concept of "Digital Transformation." It was really cool to get perspective on things from both an open source community viewpoint and that of a business-minded individual, and to see where those two perspectives met in the middle and/or overlapped.

Ansible Community Days

The days immediately before and after the main AnsibleFest event were the Community Days. The day before AnsibleFest focused entirely on topics around Ansible Core and the greater Ansible Community. The day after focused on Ansible AWX in the morning, explaining the architecture and various technical implementation details to give some exposure to those of us in the room who weren't previously privy to that information. The afternoon of the second day involved the "Ansible Ambassadors" community (I'm not sure if that is an official term).

Ansible All The Things

I gave a presentation that I like to call "Ansible All The Things" or "Ansible Everything" (depending on who my audience is and how receptive they are to meme jokes). The basic idea is to look at Ansible not as a configuration management tool, which is how I feel a lot of the "tech media" (for lack of a better term) has classified it and therefore how it is often known to the broader audience, but instead as a task automation utility. This particular task automation utility also comes with a nice Python API and can be driven by anything that can "speak JSON." This has some advantages if you step back and think about the abstract concept of a tool with a programming interface that is ultimately as generic as passing JSON around (with added convenience for Python programmers). Effectively, you have a method of running a task, or series of tasks, on one or many systems in your infrastructure. This is powerful enough to be used for all sorts of things: configuration management (yes, Ansible can perform configuration management tasks, but it's also so much more than that), provisioning, deployment, orchestration, command line tooling, builds, event-based execution, workflow automation, continuous integration, and containers.
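To make the "task automation" idea concrete, here is a sketch of Ansible's ad-hoc mode; the inventory file hosts and the playbook site.yml are hypothetical names, while ping and setup are standard Ansible modules:

```shell
# Run a single task (an Ansible "module") against every host in an inventory
ansible all -i hosts -m ping

# Module results come back as JSON -- here, the facts gathered about localhost
ansible localhost -m setup

# A playbook is simply an ordered series of such tasks
ansible-playbook -i hosts site.yml
```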

For those who would like to check out my slides, they are here.

Infrastructure Testing with Molecule

I had the opportunity to attend a presentation about Molecule, which I was really excited about because this is a toolchain I've wanted to dig into for a while. This is effectively the goal: Infrastructure as Code, TDD/CI on your Code, and transitively your Infrastructure. What a time to be alive.

Anyways, the talk itself was absolutely fantastic. Elana Hashman is a spectacular speaker and the amount of research she put into the talk was apparent. The room was captivated and the questions and conversations were enthusiastic, this was clearly a topic space people were interested in. I also have to give a tip of the hat to the live Demo that went off flawlessly, I've never personally pulled off a live Demo without at least one goof that contained the amount of live editing of code that was contained in this one. Kudos.

For those who are interested in the presentation materials, check them out here. (Do it, it's really good.)

Closing Time

The event was wonderful, and I hope to have the opportunity to go next year to the North America based AnsibleFest (they also do one in the EU/UK, but it's not often I can pull together the funding for that trip).

Bodhi 2.11.0 released

Posted by Bodhi on September 18, 2017 08:05 PM


  • Bodhi now batches non-urgent updates together for less frequent churn. There is a new
    bodhi-dequeue-stable CLI, intended to be added to cron, that looks for batched updates and
    moves them to stable.


  • Improved bugtracker linking in markdown input
  • Don't disable autopush when the update is already requested for stable
  • There is now a timeout on fetching results from ResultsDB in the backend
  • Critical path updates now have positive days_to_stable and will only comment about pushing to
    stable when appropriate

Development improvements

  • More docblocks have been written.

Release contributors

The following developers contributed to Bodhi 2.11.0:

  • Caleigh Runge-Hottman
  • Ryan Lerch
  • Rimsha Khan
  • Randy Barlow

Arch Arch and away! What's with the Arch warriors?

Posted by Sachin Kamath on September 18, 2017 06:36 PM

Foo : Hey, I just installed Arch and I can't connect to the internet.

Bar : Hey, my DE won't boot in Arch. Help, please.

FooBar : Man, I installed Arch and I can't find sudo. Will I die?

I wanted to put this up on Devrant but thought I'd write a bit to enlighten the wannabe Arch-ers. I have been recently getting a lot of messages asking me to help debug issues on Arch. As a fellow Linux and Arch user, I've always responded to most of the messages. But hey, there's a limit to it.

If you choose to begin your Linux adventures with Arch Linux after trying Ubuntu for a month, you're probably doing it wrong. If there's a solid reason why you think Arch is for you; awesome! Do it. You will learn new things. A lot of new things. But hey, what's the point in learning what arch-chroot does if you can't figure out what sudo is or what wpa_supplicant does?

Remember, when you decided to install Arch - you signed up for it. If you really want the feel of using Arch without doing all the hard work, try Antergos or Manjaro Linux. They are built on top of Arch Linux and they're kickass distros. You'll love them. Antergos is Arch hiding behind a nice GUI installer ;)

Or better yet, start with a distro that works out of the box and get a feel for Linux before attempting to move to Arch. I've seen a lot of people switch back to Windows after trying to install Arch because some online guide said "Arch is awesome" but never said it's not for beginners. I'd recommend Fedora any day because:

  • It's awesome
  • It doesn't have any third-party packages with weird licenses
  • Freedom, Friends, Features, First
  • It has tons of spins and custom bundles to choose from. (Psst... have you tried Security Lab? You might ditch Kali)

Also, please learn to use a search engine whenever you are stuck. It doesn't hurt to use one, does it? So stop asking, and start googling (or duck-ing)! When you're confident that you've got this, go ahead - fire up that live USB and arch-chroot.

Flock to Fedora 2017

Posted by Adam Miller on September 18, 2017 04:48 PM

Flock to Fedora 2017

Every year, the Fedora user and developer community puts on a conference entitled "Flock to Fedora", or "Flock" for short. This year was no different, and the event was hosted in lovely Cape Cod, MA.

This year's Flock had a slightly different focus than previous years': the goal of the event organizers appeared to be "doing" as opposed to "watching presentations", which worked out great. As a user and contributor conference, almost everyone there was a current user or contributor, so workshops to enhance people's knowledge, have them contribute to an aspect of the project, or introduce them to a new area of the Fedora Project in a more hands-on way were met with enthusiastic participation. There were definitely still "speaking tracks", but there were more "participation tracks" than in years past and it turned out to be a lot of fun.


At the time of this writing, the videos had not yet been posted but it was reported that they will be found at the link below.

All the sessions were being recorded, and I highly recommend that anyone interested check them out here.

I will recap my experience and takeaways from the sessions I attended and participated in, as well as post slides and/or talk materials that I know of.

Flock Day 1

Keynote: Fedora State of the Union

The Fedora Project Leader, Matt Miller, took the stage for the morning keynote following a brief logistics/intro statement by the event organizers. Matt discussed the current state of Fedora: where we are, where we're going, ongoing work, and current notable Changes under way.

The big takeaways here were that Fedora Modularity and Fedora CI are major initiatives aiming to bring more content to our users, in newly consumable ways, faster than ever before, without compromising quality (and hopefully improving it).

Flock 2017 Keynote State of Fedora slides

Factory 2.0, Fedora, and the Future

One of the big pain points from the Fedora contributor's standpoint is how long it takes to compose the entire distro into a usable thing. Right now, once contributors have pushed source code and built RPMs out of it, you have to take this giant pile of RPMs, create a repository, then build things out of it that are stand-alone useful for users: install media, live images, cloud and virt images, container images, etc.

Factory 2.0 aims to streamline these processes, make them faster, more intelligent based on tracking metadata about release artifacts and taking action upon those artifacts only when necessary, and make everything "change driven" such that we won't re-spin things for the sake of re-spinning or because some time period has elapsed, but instead will take action conditionally on a change occurring to one of the sources feeding into an artifact.

For those who remember last Flock, there was discussion of the concept of the Eternal September, and this was a progress report on the work being done to handle that, as well as to clean up the piles of technical debt that have accrued over the last 10+ years.

Multi-Arch Container Layered Image Build System

The next time slot I attended was my own presentation on the new plans to provide a multi-architecture implementation of the Fedora Layered Image Build Service. The goal here is to provide a single entry point for Fedora container maintainers to contribute containerized content, submit it to the build system, and then get container builds for multiple architectures as a result. This is similar to how the build system operates for RPMs today, and we aim to provide a consistent experience for all contributors.

This is something that's still being actively implemented with various upstream components that make up the build service, but will land in the coming months. It was my original hope to be able to provide a live demo, but it unfortunately didn't work out.

Multi-Arch Fedora Layered Image Build Service slides

Become a Container Maintainer

Josh Berkus put together a workshop, which I helped with, to introduce people who'd never created a container within the Fedora Layered Image Build Service to our best practices and guidelines. Josh took everyone through an exercise of looking at a Dockerfile that was not in compliance with the guidelines and then, interactively with the audience, bringing it into compliance.

After the example was completed, Josh put up a list of packages or projects that would be good candidates for becoming containerized and shipped to the Fedora User base. Everyone split up into teams of two (we got lucky, there was an even number of people in the room), and they worked together to containerize something off the list. He and I spent a period of the time going around and helping workshop attendees and then with about 10 minutes left the teams traded their containerized app or service with someone else and performed a container review in order to give them an idea of what that side of the process is like.

Hopefully we've gained some new long term container maintainers!

Fedora Environment on Windows Subsystem for Linux

This session is one that I think many were surprised would ever happen, most notably because those of us who've been in the Linux landscape long enough to remember Microsoft's top brass calling Linux a cancer never would have predicted Windows Subsystem for Linux existing. However, time goes on, management changes, and innovation wins. Now we have this magical thing called "Windows Subsystem for Linux" that doesn't actually run Linux at all, but instead runs programs meant to run on Linux without modification or recompilation.

The session goes through how this works, how the Windows kernel accomplishes the feats of magic that it does and the work that Seth Jennings (the session's presenter) put in to get Fedora working as a Linux distribution to run on top of Windows Subsystem for Linux. It's certainly very cool, a wild time to be alive, and something I think will ultimately be great for Fedora as an avenue to attract new users without having to shove them into the deep end right away.

Fedora Environment on Windows Subsystem for Linux slides

Day 2


Introducing Freshmaker

Going along with the theme of continuing to try and deliver things faster to our users, this session discussed a new service being rolled out in Fedora Infrastructure to address the need to "keep things fresh" in Fedora: Freshmaker.

As it stands today, we don't have a good mechanism by which to track the "freshness" of various pieces of software. There have been some attempts at this in the past, and they weren't necessarily incorrect or flawed, but they never had the opportunity to come to fruition for one reason or another. The good news is that Freshmaker is now a real thing: it's a component of Factory 2.0, tasked with making sure that software in the build pipeline is fully up to date with its latest input sources, for ease of maintaining updated release artifacts for end users to download.

Gating on Automated Tests in Fedora - Greenwave

Greenwave is another component of Factory 2.0 with the goal of automatically blocking or releasing software based on automated testing such that the tests are authoritative. This session discussed the motivations and the design as well as discussed how to override Greenwave via WaiverDB.

Discussing Kubernetes and Origin Deployment Options

This session was mostly about kubernetes, OpenShift, and how to deploy them on Fedora in different ways. There was a brief presentation and then discussions about preferred methods of deployment, what we as a community would like to and/or should pursue as the recommended method by which we direct new users to install these technologies.

Fedora ARM Status Update

Fedora's ARM champion, Peter Robinson, gave an update of where things are in ARM land, discussing the various development boards available and what Fedora contributors and community members can expect in the next couple Fedora releases.

On OpenShift in Fedora Infrastructure

This session was a working/discussion session that revolved around how the Fedora Infrastructure Team plans to utilize OpenShift in the future for Fedora services in order to achieve higher utilization of the hardware we currently have available and to allow for applications to be developed and deployed in a more flexible way. The current plans are still being discussed and reviewed, which is part of what this session was for, but stay tuned for more in the coming weeks.

The Future of fedmsg?

Currently, fedmsg is Fedora's unified message bus. This is where all information about activities within the Fedora Infrastructure is sent, and that's not slated to change anytime soon. However, there are new use cases for the messages that go out on the bus, and the reliability of message delivery will become a more pressing requirement. This presentation proposed adding new transports for messages in addition to the one that already exists, allowing services that need to listen for fedmsgs to subscribe to the protocol endpoint that makes the most sense for their purpose. The session opened a discussion of a proposal to satisfy the newer needs while leaving the current infrastructure in place, taking advantage of some of the features of ZeroMQ.

Day 3

What does Red Hat want?

This was a very candid and honest presentation by our long-standing former Fedora Infrastructure lead, Mike McGrath, who spoke on behalf of Red Hat, as the primary corporate sponsor of Fedora, about what Red Hat hopes to gain from the ongoing collaboration with the Fedora community, and the innovations they hope to help foster moving forward. I unfortunately did not take good notes, so I don't have many specifics to provide; those interested in this material will have to wait for the videos to become available.

Fedora Infrastructure: To infinity and beyond

The Fedora Infrastructure lead, Kevin Fenzi, stood in front of a whiteboard and kicked off a workshop where interested parties and contributors to the Fedora Infrastructure outlined and planned major initiatives for the next year. The headline from the general consensus is that OpenShift will definitely be leveraged more heavily, but it will require well-defined policy around development and deployment to sanitize where code libraries come from for security, auditing, and compliance purposes. The other main topic of discussion was metrics reporting; various options will be evaluated, with the front runners being the Elastic Stack, Hawkular, and Prometheus.

Modularity - the future, building, and packaging

This session was a great introduction to how things are going to fit together. We dove pretty far into the weeds on some of the tech behind how Fedora Modularity fits together, and ultimately, if anyone is interested in digging in, the official docs really are quite good. I recommend that anyone interested in learning the technical details of Modularity give them a look.

Let's Create a Module

In this workshop, put on by Tomas Tomecek, we learned how to create a module and feed it into the Fedora Module Build System (MBS). This was an interesting exercise to go through because it helped define the relationship between RPMs, modules, non-RPM content, and the metadata that ties all of this together with disjoint modules to create variable lifecycles between different sets of software that come together as a module. I was unable to find the slides from the talk, but our presenter recently tweeted that a colleague of his wrote a blog post he thinks is even better than the workshop, so maybe give that a go. :)

Continuous Integration and Delivery of our Operating System

The topic of Continuous Integration (CI) is extremely common in software development teams, and it is not a new concept. But what if we took that concept and applied it to the entire Fedora distribution? That might be something special and could really pay off for the user and contributor base. This is exactly what the Fedora CI initiative aims to do.

What was most interesting to me about this presentation was that it walked through a thought exercise and then showed specifically how a small team was able to accomplish more work than almost anyone thought they could, because they treat the bot they've written to integrate their CI pipeline with various other services as a member of the team. They taught themselves to think of it not as a system but as a team member they could offload work to: the work that nobody else wanted to do.

I look forward to seeing a lot of this work come to fruition.

Day 4

On the last day of the conference we had a "Show and Tell" where various members from different parts of the project got together and worked on things. The rest of the day was a hackathon for those who were still in town and not traveling back home mid-day.

As always, Flock was a blast and I can't wait for Flock 2018!

Until next time...

Slice of Cake #19

Posted by Brian "bex" Exelbierd on September 18, 2017 01:10 PM

A slice of cake

In the last bunch of weeks as FCAIC I1:

  • Failed to post my slice of cake on 21 August 2017. If I had, it would have mentioned that I:
    • fought with a printer … and mostly lost :(
    • worked hard on the final Flock tasks including booklet printing, last minute supplies and badge labels (see printer fight above).
    • was excited and surprised by the rapid launch of the all new docs.fedoraproject.org. A blog post explaining the changes is coming soon2.
  • Attended Flock. Overall I think the conference was a success and I am excited by all of the session and general feedback people have been posting. I know that the council has a lot to think about as we work on next year.
  • Attended a bunch of meetings in Raleigh in Red Hat Tower.

À la mode

  • Was actually in the US on Labor Day for the first time in a long time. It’s still weird to work US holidays.
  • Was the emcee at Write The Docs in Prague, Czech Republic. Two days of talks and all of them introduced by me :).

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Personal travel with limited Internet 29 September - 14 October
  • Latinoware, Foz de Iguacu, Brazil, 18 - 20 October
  • BuildStuff.lt, Vilnius, Lithuania, 15 - 17 November
  1. Holy moly it has been a while since I served up some cake! 

  2. For some definitions of soon. 

Flock 2017: How to make your application into a Flatpak?

Posted by Pravin Satpute on September 18, 2017 10:43 AM

"How to make your application into a Flatpak?" was on the first day and delivered by Owen Taylor.

It's been about a year and a half since we started observing the development of Flatpak, and I am sure it is going to be one of the breakthrough technologies for distributing packages in the coming years. I attended this session to get a better idea of what is happening and what is planned for the future.

The session was very informative and mostly gave an architectural overview of Flatpak.

I will update my blog with the recording once it becomes available. Meanwhile, in this post I am going to cover only the Q&A part of the session.

Question: If I install a normal RPM and a Flatpak of the same application, how will the system differentiate between them?
Answer: On the command line, the application IDs will differ between the RPM version and the Flatpak version. Both will appear, and one can choose.

Question: A Flatpak is a bundle of libraries. If a platform like Fedora provides a Flatpak for an application and upstream also provides one, will one get replaced by the other?
Answer: We can't replace one with the other.

Question: I created a Flatpak on F25 and it failed on Wayland; some permission was missing.
Answer: If it is built for X11, it should work on Wayland as well.

Question: Can we test Flatpak on F26?
Answer: Flatpaks are available from flatpak.org; we can download them and start testing. F26 is quite up to date.

Question: Will we release any application as a Flatpak only in Fedora in the future?
Answer: Let the packager decide, if it's working well. At least we are not doing this for F27 or F28. Fedora 29 packagers may be able to do it.

Question: When will we have Firefox and LibreOffice as Flatpaks?
Answer: Low-hanging fruit first; gradually we can think about it or ask people for it. First let's get the infrastructure ready.

Question: Is there any dependency on the kernel?
Answer: Generally a very minimal dependency on the kernel, mostly for graphics drivers. There is no strong dependency between the kernel and the runtime.

Question: How does Flatpak compare with similar technology on Android, etc.?
Answer: The idea of using a specific file system layout is particular to Flatpak and Docker/containers. Flatpak has a more secure communication model.

I hope I was able to capture all the Q&A correctly. If anyone has anything to add or correct, feel free to send me an email or just note it in the comment section.

Running the Fedora kernel regression tests

Posted by Fedora Magazine on September 18, 2017 10:10 AM

When a new kernel is released, users often want to know if it’s usable. The kernel has a set of test cases that can be run to help validate it. These tests are run automatically on every successful build. They are designed to validate a basic set of functionality and some Fedora specific features, such as secure boot signing. Here’s how you can run them.

This wiki page provided by the Fedora QA team describes the process to run the regression tests on your local machine. To run these tests, you need the gcc, git, and python-fedora packages installed on your system. Use this sudo command if needed:

sudo dnf install gcc git python-fedora

Getting and running the tests

First, clone the kernel-tests repository and move into the directory:

git clone https://pagure.io/kernel-tests.git
cd kernel-tests

Next, set some configuration options. The easiest way to get started is to copy the config.example file:

cp config.example .config

The most important settings are the ones labeled to set submission. By default, tests do not submit results to the server. To submit results anonymously, use the setting submit=anonymous. To submit results linked to your FAS username, set submit=authenticated and username=<your FAS login> in .config. If you link your submission to your FAS username, you’ll also receive a Fedora badge.
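For instance, the authenticated setup described above boils down to two lines appended to `.config` (a sketch; `jdoe` is a placeholder FAS login, and the key names are those described above):

```shell
# Append the submission settings to the kernel-tests config file.
# "jdoe" is a placeholder FAS login -- use your own.
cat >> .config <<'EOF'
submit=authenticated
username=jdoe
EOF
```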

To run the basic set of tests, use this command:

$ sudo ./runtests.sh

To run the performance test suites, use this command:

$ sudo ./runtests.sh -t performance

The expected result is that the tests pass. However, some tests may fail occasionally due to system load. If a test fails repeatedly, though, consider helping by reporting the failure on Bugzilla.

Running these regression tests helps validate the kernel. Look for more tests added in the near future to help make the kernel better.

Building Modules for Fedora 27

Posted by Adam Samalik on September 18, 2017 08:34 AM

Let me start with a wrong presumption that you have everything set up – you are a packager who knows what they want to achieve, you have a dist-git repository created, and you have all the tooling installed. And of course, you know what Modularity is, and how and why we use modulemd to define modular content. You know what the Host, Platform, and Bootstrap modules are and how to use them.

Why would I make wrong presumptions like that? First of all, they might not be wrong at all. Especially if you are a returning visitor, you don't want to learn about the one-off things every time. I want to start with the stuff you will be doing on a regular basis. And instead of explaining the basics over and over again, I will just point you to the right places that will help you solve a specific problem, like not having a dist-git repository.

Updating a module

Let’s go through the whole process of updating a module. I will introduce some build failures on purpose, so you know how to deal with them should they happen to you.

Start with cloning the module dist-git repository to get the modulemd file defining the installer module.

$ fedpkg clone modules/installer
$ cd installer
$ ls
Makefile installer.yaml sources tests

I want to build an updated version of installer. I have generated a new, updated version of modulemd (I will be talking about generating modulemd files later in this post), and I am ready to build it.

$ git add installer.yaml
$ git commit -m "building a new version"
$ git push
$ mbs-build submit
asamalik's build #942 of installer-master has been submitted

Now I want to watch the build and see how it goes.

$ mbs-build watch 942
 NetworkManager https://koji.fedoraproject.org/koji/taskinfo?taskID=21857852
 realmd https://koji.fedoraproject.org/koji/taskinfo?taskID=21857947
 python-urllib3 https://koji.fedoraproject.org/koji/taskinfo?taskID=21857955

 40 components in the COMPLETE state
 3 components in the FAILED state
asamalik's build #942 of installer-master is in the "failed" state

Oh no, it failed! I reviewed the Koji logs using the links above, and I see that:

  • NetworkManager and python-urllib3 failed randomly on tests. That happens sometimes and just resubmitting them would fix it.
  • realmd however needs to be fixed before I can proceed.

After fixing realmd and updating the modulemd to reference the fix, I can submit a new build.

$ git add installer.yaml
$ git commit -m "use the right version of automake with realmd"
$ git push
$ mbs-build submit
asamalik's build #942 of installer-master has been submitted

To watch the build, I use the following command.

$ mbs-build watch 943
 sssd https://koji.fedoraproject.org/koji/taskinfo?taskID=21859852

 42 components in the COMPLETE state
 1 components in the FAILED state
asamalik's build #943 of installer-master is in the "failed" state

Good news is that realmd worked this time. However, sssd failed. I know it built before, and by investigating the logs I found out it’s one of the random test failures again. Resubmitting the build will fix it. In this case, I don’t need to create a new version, just resubmit the current one.

$ mbs-build submit
asamalik's build #943 of installer-master has been resubmitted

Watch the build:

$ mbs-build watch 943
 43 components in the COMPLETE state
asamalik's build #943 of installer-master is in the "ready" state

Rebuilding against new base

The Platform module has been updated and there is a soname bump in rpmlib. I need to rebuild the module against the new platform. However, I’m not changing anything in the modulemd. I know that builds map 1:1 to git commits, so I need to create an empty commit and submit the build.

$ git commit --allow-empty -m "rebuild against new platform"
$ git push
$ mbs-build submit
asamalik's build #952 of installer-master has been submitted

Making sure you have everything

Now it’s the time to make sure you are not missing a step in your module building journey! Did I miss something? Ask for it in the comments or tweet at me and I will try to update the post.

How do I get a dist-git repository?

To get a dist-git repository, your modulemd needs to go through the Fedora review process for modules. Please make sure your modulemd complies with the Fedora packaging guidelines for modules. Completing the review process will result in a dist-git repository.

What packages go into a module?

Your module needs to run on top of the Platform module, which together with the Host and Shim modules forms the base operating system. You can see the definitions of the Platform, Host, and Shim modules on GitHub.

You should include all the dependencies needed for your module to run on the Platform module, with a few exceptions:

  • If your module needs a common runtime (like Perl or Python) that is already modularized, you should use it as a dependency rather than bundling it into your module.
  • If your module needs a common package (like net-tools), you shouldn't bundle it either. Such packages should be split into individual modules.
  • To make sure your module doesn't conflict with other modules, it can't contain the same packages as other modules. Your module can either depend on those other modules, or such packages can be split into new modules.

To make it easier during the transition from a traditional release to a modular one, there is a set of useful scripts in the dependency-report-scripts repository, and the results are being pushed to the dependency-report repository. You are welcome to participate.

Adding your module to the dependency-report

The module definitions are in the modularity-modules organization, in a README.md format similar to the Platform and Host definitions. The scripts take these repositories as input, together with the Fedora Everything repository, to produce dependency reports and basic modulemd files.

How can I generate a modulemd file?

The dependency-report-scripts can produce a basic modulemd, stored in the dependency-report repository. The modulemd files stored there reference their components using the f27 branch for now. To produce a more reproducible and predictable result, I recommend using the generate_modulemd_with_hashes.sh script inside the dependency-report-scripts repository:

$ generate_modulemd_with_hashes.sh MODULE_NAME

The result will be stored in the same place as the previous modulemd that used branches as references.

$ ls output/modules/MODULE_NAME/MODULE_NAME.yaml

Where do I get the mbs-build tool?

Install the module-build-service package from Fedora repositories:

$ sudo dnf install module-build-service

What can I build to help?

See the F27 Content Tracking repository and help with the modules that are listed, or propose new ones. You can also ask on #fedora-modularity IRC channel.

Where can I learn more about Modularity?

Please see the Fedora Modularity documentation website.



Flock to Fedora: Fedora Contributor Conference

Posted by Jona Azizaj on September 18, 2017 01:54 AM

Flock is where you meet with other members of the Fedora community who share whatever your interest is, whether that’s the kernel or the cloud, hardware or UX design. Flock 2017 took place in Hyannis, Massachusetts, on 29 August – 1 September and was a more action-oriented event than previous years. Day #0 This was […]

The post Flock to Fedora: Fedora Contributor Conference appeared first on Jona Azizaj.

Put Everything on Fire and Make Friends with the Robots

Posted by Till Maas on September 17, 2017 02:53 PM


From 29 August to 1 September 2017 Fedora contributors from all over the world flocked to Hyannis, Massachusetts to meet and work on improving Fedora. I was lucky to be part of it.

Our leader Matthew Miller opened the conference with his State of Fedora presentation that included a perfect quote for Fedora:

Let’s put things on fire in a good way and make friends with robots!

For me this is a great motto that also fits the conference. To stay in the innovators/early-adopters space, Fedora needs to change constantly. Unavoidably, this results in fires. But we need to make sure these are good fires that create space for good, new things. We changed a lot of things to allow Modularity, and now we can build great things with it. The other part is about increasing automation. This is what hopefully will result from the fire of adopting Pagure for dist-git. Now we have the infrastructure ready to automate more packaging tasks. This matches one of the last sessions, by Stef Walter, about Continuous Integration and Delivery, where he shared his experience integrating robots into the creation of Cockpit and described best practices for making robots an integral part of a team.



There were many other interesting sessions, and luckily there will be recordings of most of them. For the Atomic Host 101 workshop, Dusty Mabe prepared an interesting lab session that he published on his blog, so you can do it at home, too. It gave a nice introduction to the special properties of Atomic Host. Being there in person, I had the advantage of being able to ask questions to find out more about the details, which Dusty and Colin Walters happily answered. This, in my opinion, is the great extra value of the live workshops Flock focused on: getting introduced to new topics and giving/receiving immediate feedback. It is great to see that there are already further use cases for the technology behind Atomic Host besides using it as a container host. Patrick Uiterwijk and Owen Taylor shared their experience running Atomic Workstation. It seems to be doable, but be prepared for a lot of extra work. In his visionary session about Fedora IoT, Peter Robinson suggested making use of atomic upgrades for IoT devices to reduce the risk of bricking devices and to allow easier recovery to a previously known-good state. The ideas for Fedora IoT sound very promising, but they still need a lot of work to be implemented.

In an evening session there was the Fedorator building workshop. After reading about it on the Ambassador mailing list, it was very interesting to learn how to build one and understand its design. I volunteered to take one back to Germany, so hopefully we can present them at one of the next conferences. Anyone going to OpenRheinRuhr? The workshop fits great into another topic our project leader promoted: improving the Fedora Ambassador work. He focused on improving what Ambassadors do. For example, targeting more events matching the Fedora objectives would be an idea. To me it would be great for Ambassadors to decide more consciously which events to visit and to properly show off Fedora's unique selling points. However, at least for me personally, the critical criterion is usually the proximity to my home town and whether I have time to be there at all. Therefore it might take a while to get enough people and presentations/workshops together to actively pursue this. I like the vision nonetheless. Speaking of improving the Ambassador work, I also see potential in improving how we represent Fedora at events. But that could be a full blog post on its own, and I want to get this one finished.

In the Bodhi hacking session, Randy Barlow and Jeremy Cline helped me set up a Bodhi 2 development instance. I lost track of Bodhi development after the switch from Bodhi 1. The new setup with Vagrant is very nice and allows testing new code very easily. We tried to improve the speed at which Fedora Easy Karma gets its update information. Initially we hoped adding indexes to the database would help, but it seems there is also a lot of serialization going on that might make Bodhi send a whole database dump in the end instead of just the necessary data.

The other sessions, the hallway track, and the social events were awesome, too. Everyone was very friendly, and it was great to meet more people in person whom I previously only knew online. The organizers did a great job as well. The main possibility for improvement would be to make the scheduling more collision-free. Some sessions did not get many attendees because sessions for the same target audience ran at the same time; my sessions suffered from this as well. Maybe there could be a poll in advance to figure out which sessions people want to attend, and then the schedule could be planned with that feedback. The Chaos Communication Congress uses halfnarp for this. Maybe this is an option for Flock, too.

All in all I am happy to have been part of Flock 2017 and hope to be there in 2018 when it is back in Europe.

Experimenting with a Personal Fedora Atomic and OpenShift Origin Server

Posted by Devan Goodwin on September 17, 2017 12:17 PM

Last weekend I decided to play around with my old workstation that's just been sitting around powered off for years, mostly replaced by a Raspberry Pi 3 which handles most of my home network and storage. The workstation is from around 2009; it isn't particularly fast, but it has 6GB of RAM and doesn't consume much power if you yank the video card. I have a small Rails 4 app running as the backend for one of my Android pet projects, and thought it might be fun to repurpose this machine to host it on OpenShift.

I really don't want to have to think about upgrades much. CentOS would be a pretty good option, but I thought it would be better to get a taste of what's been going on with Project Atomic, a minimal OS just for running containers, which lets you upgrade and roll back the whole OS filesystem as if you were using git.

Fedora Atomic

For an Atomic OS I went with Fedora Atomic. Normally Fedora would be too much upgrade maintenance for what I want, but being able to upgrade everything and reboot so easily with Atomic seems like it should negate that, and I'd like to be able to see the latest work being done in this area. Fedora installation has become incredibly refined over the years since I started using it, and installing Atomic was no exception (I just had to dd an ISO to a USB drive and boot off it).

Post install you might want to do a one time root partition extension so you can do Atomic upgrades:

$ lvresize --resizefs --size=+10G /dev/mapper/fedora--atomic-root

For a headless server you might want Cockpit for remote management:

$ atomic install cockpit/ws
$ atomic run cockpit/ws

At this point you can see it running locally with a curl to https://yourip:9090, but accessing it remotely was quite difficult. Fedora Atomic does not appear to be using firewalld, but does appear to have a default deny iptables policy in play. There’s got to be a better way but I worked past this with:

$ systemctl stop iptables
$ vi /etc/sysconfig/iptables

Right after the ACCEPT for ssh (port 22) I added:

-A INPUT -p tcp -m state --state NEW -m tcp --dport 9090 -j ACCEPT


$ systemctl start iptables

OpenShift Origin

To install OpenShift Origin I went with openshift-ansible. Using something like oc cluster would probably be a logical option here for a single system but I wanted to stick to a more production focused path.

To do this we need an Ansible inventory:



# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
openshift_master_htpasswd_users={'dgoodwin': 'HASHEDPW'}

[masters]
# Master would be unschedulable by default leaving us with nowhere for pods to run.
master.local schedulable=true


To get the hashed password you can just create a temporary htpasswd file:

$ htpasswd -c deleteme-file username

Copy the whole hashed password out of the file (including trailing ‘/’) and then delete it.
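If you'd rather not copy it by hand, everything after the first colon on the line can be extracted with cut. A small sketch, using a fabricated entry in place of real htpasswd output:

```shell
# Simulate the htpasswd output with an example entry (this hash is fake),
# then extract everything after the first ':' -- the hashed password.
printf 'username:$apr1$examplesalt$examplehash\n' > deleteme-file
hash=$(cut -d: -f2- deleteme-file)
echo "$hash"
rm deleteme-file
```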

We’ll also need something for persistent storage. With a setup like this I just went with NFS using the following small playbook:

- hosts:
  - masters
  become: yes
  tasks:
  - file: path=/var/nfsshare state=directory mode=0777
  - lineinfile: dest=/etc/exports line="/var/nfsshare *(rw,sync,no_root_squash)"
  - service: name=rpcbind state=started enabled=yes
  - service: name=nfs-server state=restarted enabled=yes
  - service: name=rpc-statd state=started enabled=yes
  - service: name=nfs-idmapd state=started enabled=yes

This should probably be modified to create a few PVs as sub-directories in /var/nfsshare.
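A minimal sketch of that modification (the pv1..pv3 names are arbitrary, and BASE is overridable so you can try the loop outside the real export):

```shell
# Create a few sub-directories under the NFS export, one per future PV.
# BASE defaults to the export path used in the playbook above.
BASE="${BASE:-/var/nfsshare}"
for i in 1 2 3; do
  mkdir -p "$BASE/pv$i"
  chmod 0777 "$BASE/pv$i"
done
```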

At this point you should be ready to run openshift-ansible. A recent addition was the ability to run openshift-ansible playbooks via a container rather than having to git clone. You can read more about this here.

After installation I needed a couple of tweaks. Because this is a single-system "cluster", the master is not schedulable by default and thus you've got nowhere to run pods.

$ oc label node master.local region=infra
$ oadm manage-node master.local --schedulable=true

We also need to define a single persistent volume:

$ cat nfspv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfspv1
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: master.local
    path: /var/nfsshare
  persistentVolumeReclaimPolicy: Recycle
$ oc create -f /root/nfspv.yml

At this point I had OpenShift Origin up and running, containerized on Fedora Atomic Host. You can operate as the cluster admin by using root on the Atomic host, but you'll naturally want to log in as the regular user we created earlier for actually running applications:

$ oc login -u dgoodwin https://yourip:8443


This of course isn't the most logical thing to do; a single-node Kubernetes or OpenShift cluster doesn't make a lot of sense. It is, however, kind of fun to play with and have running in your basement. I really like Atomic for servers: having that small footprint and not having to worry about upgrades is really satisfying. OpenShift, even at such a small scale, is handy for building from source and keeping an eye on your logs.

Not sure how long I’ll keep it running, my $5 Linode was doing this job previously and was able to run my app as a straight container (no OpenShift) just fine, so realistically it might end up back there soon, but regardless it is presently fun to tinker with and learn about.

Test Days: Internationalization (i18n) features for Fedora 27

Posted by Fedora Community Blog on September 17, 2017 11:09 AM

All this week, we will be testing i18n features in Fedora 27. They are as follows:

  • Default Chinese Serif Font – Fedora already provides default Chinese Sans fonts; Fedora 27 will now also provide default Chinese Serif fonts.
  • Libpinyin 2.1 – Enhances the experience of Fedora for Chinese Zhuyin users by speeding up Zhuyin input.
There have also been further improvements to features introduced in previous versions of Fedora. They are as follows:
  • Emoji typing – In the computing world, it's rare to find a person who doesn't know about emoji. It used to be difficult to type emoji in Fedora. Now, we have an emoji typing feature in Fedora using ibus-typing-booster, and Fedora includes a few emoji fonts.
  • Unicode 10.0.0 – With each release, Unicode introduces new characters and scripts to its encoding standard. Unicode 10.0.0 brings a good number of additions: mainly a few more emoji characters, the Bitcoin symbol (U+20BF), Typicon marks, and a few symbols in the range U+1F900 to U+1F9FF.
  • IBus typing booster multilingual support – IBus typing booster is now updated in Fedora to provide updated translations, Unicode 10.0.0 support, and emoji annotations from CLDR.

Beyond this, we also need to make sure all other languages work well, specifically input, output, storage, and printing.

How to participate

Most of the information is available on the Test Day wiki page. In case of doubts, feel free to send an email to the testing team mailing list.

Though it is a test day, we normally keep it open for the whole week. If you don't have time tomorrow, feel free to complete it in the coming days and upload your test results.

Let’s test and make sure this works well for our users!

The post Test Days: Internationalization (i18n) features for Fedora 27 appeared first on Fedora Community Blog.

Monthly report on the French-language documentation of Fedora-fr.org, issue 2

Posted by Charles-Antoine Couret on September 17, 2017 06:00 AM

As a reminder, you can check the status of the ongoing work on the documentation.

Because of the holidays and some personal matters, I missed the August report and contributed less than during the June–July period. Now that September is here, let's still take stock of the work accomplished over the past two months.

The topics covered have changed somewhat. The work is now centered around these themes:

  • External repositories;
  • Creating personal installation media and using them;
  • Server usage.

Personally, I mostly worked on the first two themes. For example, the RPM Fusion repository and the multimedia packages it provides are essential for Fedora users. It was important to tackle this because many packages have evolved, whether newcomers or major changes such as more codecs being accessible via mplayer or GStreamer.

Regarding the second point, it was necessary to revise the procedures for using kickstart, which has evolved (notably since the appearance of the Spins and of the Server, Workstation, and Cloud products). The ways of downloading Fedora have changed: the Live CD has gradually given way to the Live USB, which has been particularly highlighted with the Fedora Media Writer tool. Installing a dual boot with Windows has also changed a lot, because GRUB 2 simplified the procedure and Windows is also more accommodating about this kind of coexistence.

I also refreshed the help page for contributing to Fedora, since over time the topics have evolved and new resources or areas have developed, such as quality assurance, along with the addition of the French-language documentation, which had been absent until now.

The last point was particularly covered by Nicolas, because Fedora of course sees significant server usage, and that landscape has changed. Besides MySQL becoming MariaDB, many commands have changed as the tools evolved. While the LAMP web environment is most affected, this also concerns SELinux, the Dovecot mail server, and OpenLDAP. A huge amount of work has been accomplished, so thanks to him!

I also thank the other contributors, reviewers, and everyone else involved in this process, such as Édouard, Nicolas Chauvet, and others.

Today we stand at 42 articles handled, compared to 25 at the previous report. I am satisfied with the progress made on the documentation. There is still a lot of work to do, but it seems possible for the documentation to be in a very acceptable state by Fedora 27 or the end of 2017. After that, we will need to keep the documentation continuously up to date and add articles as needs arise.

In any case, I invite you to lend us a hand. To do so, I suggest following the procedure for contributing to the documentation and, if possible, joining our weekly workshops every Monday evening from 9 p.m. (Paris time) on the #fedora-doc-fr IRC channel on the Freenode server. Nothing prevents you from contributing outside the workshops; all help is welcome. So don't hesitate!

Switching Raspbian to Pixel desktop

Posted by Rajeesh K Nambiar on September 17, 2017 05:40 AM

Official Raspbian images based on Debian Stretch ship with the Pixel desktop environment by default and will log new users into it. But if you have an existing Raspbian installation with another DE (such as LXDE), here are the steps to install and log in to the Pixel desktop.

  1. apt-get install raspberrypi-ui-mods
  2. sed -i 's/^autologin-user=pi/#autologin-user=pi/' /etc/lightdm/lightdm.conf
  3. update-alternatives --set x-session-manager /usr/bin/startlxde-pi
  4. sed -i 's/^Session=.*/Session=lightdm-xsession/' /home/${USER}/.dmrc

Make sure the user’s ‘.dmrc’ file is updated with the new session, as that is where the lightdm login manager looks to decide which desktop should be launched.
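For the cautious, here is what the two sed edits above actually do, demonstrated on throwaway copies of the files so nothing on the real system is touched (the file contents are minimal stand-ins, not full Raspbian configs):

```shell
# Work on disposable copies; the real files are /etc/lightdm/lightdm.conf
# and the user's ~/.dmrc.
tmp=$(mktemp -d)
printf 'autologin-user=pi\n' > "$tmp/lightdm.conf"
printf 'Session=LXDE\n' > "$tmp/.dmrc"

# Step 2: comment out the autologin line so lightdm shows a greeter.
sed -i 's/^autologin-user=pi/#autologin-user=pi/' "$tmp/lightdm.conf"

# Step 4: point the saved session at the lightdm default X session.
sed -i 's/^Session=.*/Session=lightdm-xsession/' "$tmp/.dmrc"

conf_line=$(cat "$tmp/lightdm.conf")
dmrc_line=$(cat "$tmp/.dmrc")
echo "$conf_line"   # #autologin-user=pi
echo "$dmrc_line"   # Session=lightdm-xsession
```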

Tagged: linux

The state of open source accelerated graphics on ARM devices

Posted by Peter Robinson on September 16, 2017 11:00 PM

I’ve been meaning to write about the state of accelerated open source graphics options for a while now, as an update to a blog post I wrote over 5 years ago in January 2012, before the Raspberry Pi even existed! Reading back through that post, those were pretty dark times for any form of GUI on ARM devices. With the massive changes in ARM devices, and the massive change in SBCs (Single Board Computers) heralded by things like the Raspberry Pi, have things improved at all? The answer is generally yes!

The bad

Looking back at that post, the MALI situation is still just as dire, with ARM still steadfastly refusing to budge. The LIMA reverse engineering effort started with promise but went up in smoke with a fairly public community breakdown; I don’t envision that situation improving any time soon, although just recently there appears to be some forward movement at last after a long silence. This only covers the MALI-400 series; any newer GPU is a completely different architecture/IP. Even with sessions recently at Linaro Connect titled What’s happening with ARM Mali drivers, I don’t see fast change here.

The Imagination Technologies PowerVR situation is still just as dire as it was five years ago. The company’s incompetent management recently managed to avoid being bought by Apple, which in turn, because they’ve screwed the open source community while milking the Apple cash cow, essentially means they’re screwed. I suspect they’ll either open source to try to remain a relevant contender, or die in a tire fire. Only time will tell; in the meantime, any ARM SoC that has this IP on board is useless for anything graphical, so I’d tend to avoid it. Thankfully there seem to be fewer of them these days.

The good

Despite the two bad examples above there’s actually been a lot of good change in the last five years. We now have a number of options for fully accelerated 2D/3D graphics on ARM SoCs and I run GNOME Shell on Wayland, yes the full open source shiny, on a number of different devices regularly.

NVIDIA, true to the rumours, did open up all the graphics on the Tegra series of hardware. The new Tegra K/X series have GPUs similar to their x86 offerings, with Kepler/Maxwell/Pascal GPU cores, but NVIDIA supports these devices by contributing to the open nouveau driver rather than the closed x86 driver. The performance on 32-bit TK1 devices has been decent for a number of Fedora releases and improves all the time; we’ll be supporting the X series (X1/X2) with their Maxwell/Pascal GPUs in Fedora 27.

In the old post I brushed past Vivante with a mere mention of Marvell and Freescale (now NXP). The Vivante GPUs ship in NXP i.MX6 and i.MX4, some Marvell chips and some TI chips. There was a reverse engineering effort called etnaviv that must have started not long after I wrote that post; after a number of years of development, support landed upstream in the kernel in late 2015 and in mesa in the 17.1 release, allowing us to support fully accelerated Wayland in Fedora 26! Did anyone notice? I didn’t really shout about it as much as I should have! It supports fully accelerated 3D in mesa/wayland, is pretty stable, and is improving all the time. Well done to all the contributors to that effort!

Another one I brushed past in the old post was the Qualcomm Snapdragon SoC. They ship with an Adreno GPU. This was previously closed source; with the SoC primarily used by phone/tablet manufacturers, I suspect they didn’t care… until Rob Clark (and no doubt other contributors) decided to reverse engineer the driver with the open freedreno driver. This is now the default driver, with even Qualcomm contributing to it. We’ll support this in Fedora 27, initially with the 96boards Dragonboard 410c using the freedreno driver, but I doubt it’ll be the last Qualcomm-based device we support. The Snapdragon 835 SoC, the chip in all the high-end Android phones this year and the ARM Windows 10 laptops, is really nice, with decent performance; I’d love to be able to support a device with that SoC!

Raspberry Pi, as I mentioned in the introduction, wasn’t even out when I wrote the original post. When it first launched there wasn’t an open driver, but 5 years later there is, sponsored by Broadcom no less. We introduced initial support for the Raspberry Pi with the open vc4 driver by Eric Anholt in Fedora 25, and it’s improving regularly. It supports fully accelerated 3D in mesa/wayland, and 2D via glamor in mesa.

So in conclusion, we have improved by A LOT! We now have numerous GPUs with open drivers to choose from, in all price ranges, supporting fully accelerated 2D/3D desktops from four different vendors on both ARMv7 and aarch64. Media acceleration offload is also looking quite good, but that’s one for another post. The biggest holdout is MALI, and that would need two open drivers or ARM coming to the table; LIMA might work out for the 400 series, but that won’t help with the newer Midgard series. With support in a number of drivers for the shiny new Wayland, there’s an increasing number of devices people can use to enjoy the latest desktops fully accelerated!

A quick look at Rawhide, episode 5, September 2017

Posted by Charles-Antoine Couret on September 16, 2017 09:30 PM

I did not write a Rawhide report in June, July or August: at first, the approach of the final Fedora 26 release brought too few visible changes to make them worth noting, and then I lacked the time to cover the progress of Fedora 27.

With Fedora 26 available since July 11, my personal computer switched back to Fedora Rawhide right away, which a month ago became Fedora 27 in the making.

Fedora 27 Beta will arrive soon, within a few weeks. It will be the first without a preliminary Alpha release.


From the start, GNOME 3.26 brings quite visible changes. I invite you to read the release notes for more details (and the illustrations). The gnome-tweak-tool utility has been reworked: there are more options, and the selection style resembles the GNOME Builder interface. The GNOME control center has also been heavily modified, with a new layout built around a permanent sidebar and a redesign of many pages, such as those related to display and networking.

I don't know why, but the default font of GNOME Terminal (a fixed-width font) is now Monospace Regular, which of course changes the rendering.

When windows in GNOME are shrunk or maximized, there is a new visual effect. Nothing sensational, but it is rather pleasant and, as far as I can tell, comes with no performance loss. The GNOME top bar also becomes transparent when no window is maximized.

For a few weeks, Empathy also benefited from a redesigned interface, more consistent with the other GNOME applications, adopting an interface close to Polari's. But this was experimental and the usual interface has returned. Hopefully the new one will come back soon.

And no doubt there are many other changes that I did not notice or that are more minor.


Rawhide obviously has bugs. Even though procedures are to be put in place during this cycle to improve the overall quality of this Fedora branch, significant bugs will probably remain.

First of all, my /home filesystem, encrypted with LUKS, was no longer mounted automatically. That is rather annoying, as it has to be done by hand, but fortunately without other consequences, notably in terms of data consistency. Happily, it has since been fixed.

Another fairly painful bug: the GTK file chooser crashes. So as soon as you have to choose or add a file in a program, the application crashes. This was fixed before I could file a report.

Firefox 55 was left out of the package updates, initially because of problems compiling it and then because the maintainer forgot. :-)

Otherwise, GNOME is very unstable. The session closes regularly (almost every hour), or at certain moments such as when opening a virtual machine. Many bug reports revolve around this topic. It seems the gjs component is at fault and a fix appears to be underway. It is rare for GNOME to be in such difficulty, and given the importance of the issue, Fedora is blocking the progress of the F27 cycle until things calm down.

Randa Report Part 2

Posted by Daniel Vrátil on September 16, 2017 08:50 PM

Let me start by annoying you with some pictures:

<3 Randa

The misty mountains below which the coders dwell.

We totally had snow in September. Well, not in Randa, but in the valley next to it, but still. SNOW!

And now for the serious part: in my last blog post, I talked about achieving our main goal for this year’s Randa meetings – we successfully ported all of Kontact away from the obsolete KDateTime class. Since we finished this on Thursday, there was still enough time left to start working on something new and exciting.

Volker and Frederik went on to work on a KWin plugin that simulates various kinds of color blindness, which will help developers see how visually impaired users perceive their software, while I did a bit more code clean-up after the porting and some code review.

On Friday morning Volker and I discussed search in KDE PIM. Broken and unreliable search was one of the most often mentioned issues in the KMail User Survey, which we ran for the past month and a half, so I decided to tackle the problem and make our indexing and searching infrastructure fast and reliable.

The final plan consists of several phases, starting with reorganizing our current code to put related pieces (like the email indexing and querying code) into a single place, making the code easier to maintain. This phase is already progressing and I hope to finish it within the next week. The second phase will involve moving the code responsible for indexing data into the backend Resources: whenever a backend retrieves an email from an IMAP server or an event from Google Calendar, it will also index it and send the index data alongside the raw data to Akonadi. Akonadi will then take care of just writing the data into the Xapian database. This will speed up indexing, reduce the IO load, and ensure that all data are reliably indexed and stored before they are presented in Kontact. The third phase will involve changing Kontact to query the index database directly, instead of asking Akonadi to do it for us, which will speed up the search and deliver results faster. The final phase will focus on what data we actually index. As they say, less is sometimes more, so having fewer but better-defined data will allow us to provide better and more exact search results to the user.

Once this is settled, we can make applications depend on the search index – for example, KOrganizer will be able to query the index for only the events from, say, December 2017, instead of fetching all events from the calendar and then figuring out whether they should be displayed, making even the busiest people's calendars load instantaneously.

All in all, it was an extremely productive hackfest for the PIM team and I’d again like to thank Mario, Christian and the rest of the Randa team for organizing the hackfest. You guys rock!

And remember, you can still donate to the Randa fundraiser to make future productive sprints possible!

Fun with fonts

Posted by Matthias Clasen on September 16, 2017 06:58 PM

I had the opportunity to spend some time in Montreal last week to meet with some lovely font designers and typophiles around the ATypI conference.

At the conference, variable fonts celebrated their first birthday. This is a new feature in OpenType 1.8 – but really, it is a very old feature, previously known under names like multiple master fonts.

The idea is simple: a single font file can provide not just the shapes for the glyphs of a single font family, but also axes along which these shapes can be varied to generate multiple variations of the underlying design. An infinite number, really. Additionally, fonts may pick out certain variations and give them names.

A lot has to happen to realize this simple idea. If you want to get a glimpse at what is going on behind the scenes, you can look at the OpenType spec.

A while ago, Behdad and I agreed that we want to have font variations available in the Linux text rendering stack. So we used the opportunity of meeting in Montreal to work on it. It is a little involved, since there are several layers of libraries that all need to know about these features before we can show anything: freetype, harfbuzz, cairo, fontconfig, pango, gtk.

freetype and harfbuzz are more or less ready with APIs like FT_Get_MM_Var or hb_font_set_variations that let us access and control the font variations. So we concentrated on the remaining pieces.

As the conference comes to a close today, it is time to present how far we got.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1900-1" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/09/vf-axes.webm?_=1" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/09/vf-axes.webm</video>

This video is showing a font with several axes in the Font Features example in gtk-demo. As you can see, the font changes in real time as the axes get modified in the UI. It is worth pointing out that the minimum, maximum and default values for the axes, as well as their names, are all provided by the font.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1900-2" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/09/vf-named-styles.webm?_=2" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/09/vf-named-styles.webm</video>

This video is showing the named variations (called Instances here) that are provided by the font. Selecting one of them makes the font change in real time and also updates the axis sliders below.

Eventually, it would be nice if this was available in the font chooser, so users can take advantage of it without having to wait for specific support in applications.

<video class="wp-video-shortcode" controls="controls" height="267" id="video-1900-3" preload="metadata" width="474"><source src="https://blogs.gnome.org/mclasen/files/2017/09/vf-picker.webm?_=3" type="video/webm">https://blogs.gnome.org/mclasen/files/2017/09/vf-picker.webm</video>

This video shows a quick prototype of how that could look. With all these new font features coming in, now may be a good time to have a hackfest around improving the font chooser.

One frustrating aspect of working on advanced font features is that it is just hard to know if the fonts you are using on your system have any of these fancy features, beyond just being a bag of glyphs. Therefore, I also spent a bit of time on making this information available in the font viewer.

And that's it!

Our patches for cairo, fontconfig, pango, gtk and gnome-font-viewer are currently under review on various mailing lists, bugs and branches, but they should make their way into releases in due course, so that you can have more fun with fonts too!

Website and portfolio design

Posted by Suzanne Hillman (Outreachy) on September 16, 2017 06:30 PM

I’ve been slowly moving my website from the official pelican-based version to an in-progress Wix-based version. I learned interesting things about current web development while building the Pelican version, but I found it difficult to implement the kinds of design choices I wanted to make. I also found it quite difficult to get a responsive design that _stayed_ responsive when I made changes to the CSS file.

Wix is very nice for many design decisions, in large part because you can take a particular design element and put it wherever you want on the page. There is no futzing with HTML or CSS, and no need to learn Python or Javascript.

Given that I want my page to be welcoming and easy to follow, easily choosing specific design elements is vital.

Tell a story!

One of the most important aspects of a UX portfolio is demonstrating one’s UX skills. This means walking folks through your process and making it easy to follow and understand.

One of my major challenges was (and is!) deciding how to structure my portfolio to offer the greatest ease of use without losing too many of the specific details. Upon the recommendation of one of the many recruiters I’ve spoken with, I’ve been adding an overview page to each of my projects:

<figure><figcaption>In this version, I offer an overview and links to more details of some of the pieces.</figcaption></figure>

If you compare this to the currently official version of my page, it’s a clear and huge difference in usability:

<figure><figcaption>This doesn’t show the overall goal, what my role was, or offer much guidance. It’s also not physically structured for easy reading.</figcaption></figure>

How to tell my story?

One of my major struggles is with offering too much information. Too many details, and too little structure.

I want people to know what I did! Unfortunately, if there’s not enough structure, they won’t read any of it. If there’s too much information, they won’t read any of it either. So my major task is to take what I have and create overviews, not just for the main page of a project but for its subpages.

This is unfortunately not quick or easy! As a result, I’m working on bringing the overviews back to my pelican site as I make them, with the eventual goal of fully transitioning. Sadly, I have been unable to convince my pelican site to let me stack things horizontally. My impression is that this is one of the major improvements to my Wix site, so even though I’ll bring some of the ideas back to Pelican, they are simply not as well-designed there.

I’ll be asking for feedback before I move over completely, of course. In the meantime, it’s pretty clear to me that my Wix site is just _better_. I’ll also be grabbing what I have at the Pelican site before ditching it, as I worry I’d lose information otherwise.

Other changes?

I’m also ditching the references to Cambio Buddies. It was a valuable and useful project, but I had very little guidance for what I was doing. I made a lot of mistakes, used techniques poorly, and am generally not happy with that project. Maybe it’s a mistake to remove my first project, but I just don’t like it.

Some folks have suggested incorporating the ‘current’ and ‘complete’ design projects into a single area. I’m reluctant to do this, since the current projects are still in progress: I don’t want to present them as finished when they are not.

Similarly, folks have suggested getting rid of the design artifacts page. I’m not completely clear on why: they are in the Projects area, and it seems helpful to let folks get to a specific artifact quickly if they so desire.

One of my early bits of feedback for the Pelican portfolio was a lack of process. I’m still not entirely clear on what was meant by that, although I suspect that the lack of overview may have been part of it.

Internationalization Track at Flock 2017

Posted by Jens Petersen on September 16, 2017 06:20 PM
On the afternoon of Thursday, 31st August, we had the i18n/l10n/g11n track at Flock 2017 in Hyannis, MA. We were in the second-largest conference room, next to the main auditorium, competing for attendance against other sessions like the Modularity track. We had a small but interested and growing audience through the afternoon.

Fedora i18n

In this longer session, Parag Nemade and I started by reviewing i18n Changes in recent Fedora releases, from Fedora 24 up to the coming 27 release. Originally I had planned for this session to be more of an open discussion, but since the large room would have made longer, detailed discussions hard, I added more slides covering various planned and potential future features and topics for discussion.

The slides for this session are here.

Parag reviewed the F24 features for DNF installation of langpacks, including the subpackaging of glibc locales, and the F25 emoji input with ibus and Unicode 9.0. I did a quick demo of ibus-typing-booster, including emoji and Unicode symbol input. F26 features libpinyin-2.0, which supports multiple sentence candidates. F27 will bring further improvements, a new Chinese Serif font, and Unicode 10.

Then we moved on to discussing some future features:

Langpacks installation

This is an ongoing topic: we are still considering the best approach or solution. Currently, after a Live install of Fedora Workstation, langpacks for LibreOffice in the user's language are not installed (unlike after a net install). The simplest workaround for now is probably to go into Gnome Software and manually add the needed language support metapackage from Addons -> Localization, or just to install the appropriate langpacks metapackage, or even the explicit langpack if one knows the langcode... but that is not really friendly enough for general users, nor that easy to work out. We would prefer that langpacks for the user's desktop language get installed automatically, without any effort or poking around. (Also, the current glibc-langpack-* and libreoffice-langpack-* (fixed in git master) packages have versioned weak dependencies, which cause their langpacks not to be installed immediately if their latest core packages are not yet up to date, but only at the next "dnf update" transaction, which probably causes additional user confusion.)
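As a sketch of the langcode lookup mentioned above: the helper below is my own illustration (the function name is hypothetical), mapping a locale string to Fedora's langpacks-&lt;langcode&gt; metapackage naming.

```shell
# Hypothetical helper: derive the langpacks metapackage name from a
# locale string such as $LANG (e.g. fr_FR.UTF-8 -> langpacks-fr).
langpacks_for_locale() {
    locale="${1%%.*}"      # drop the ".UTF-8" encoding suffix
    code="${locale%%_*}"   # drop the "_FR" territory part
    echo "langpacks-${code}"
}

pkg=$(langpacks_for_locale "fr_FR.UTF-8")
echo "$pkg"   # langpacks-fr
# One could then install it with: sudo dnf install "$pkg"
```

Note this deliberately ignores territory-specific metapackages; a real implementation would need a fuller mapping.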

One simple workaround would be to include LibreOffice langpacks for some major languages by default, though that would further increase the size of the Live iso image and create unnecessary extra update churn for everyone, so it is probably not the best solution. Langpack metapackage installation could also be done as an extra step in the installer, perhaps through an initial-setup Anaconda plugin, but the user may not be connected to the network when doing the Live install from, say, a USB stick. Beyond that, the current ideas become more GNOME specific. Gnome-initial-setup could use PackageKit to pull in the appropriate metapackage after the user has confirmed their desktop language. Gnome-control-center should do the same when switching the desktop to another language. Shaun McCance suggested there could be a signal (maybe dbus or dconf?) to trigger such actions.

But there is still the question of how to do the actual package installation. If it is done silently in the background, there seems to be a risk that the user might log out unknowingly, or even attempt to shut down the computer while the package installation is still taking place. I don't know if systemd would pause the shutdown?
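On that last question: systemd does provide inhibitor locks for exactly this case; wrapping the transaction in systemd-inhibit blocks (or delays) shutdown for as long as the wrapped command runs. A minimal sketch, assuming the install goes through PackageKit's pkcon and using an example package name (the function name is my own):

```shell
# Hold a shutdown inhibitor lock while PackageKit installs the given
# package; the lock is released automatically when pkcon exits.
# (--who/--why are free-form strings shown to the user/admin.)
install_langpack_inhibited() {
    systemd-inhibit --what=shutdown --mode=block \
        --who="language-setup" --why="Installing langpacks" \
        pkcon install -y "$1"
}

# Example (not run here, as it needs a privileged PackageKit session):
#   install_langpack_inhibited langpacks-fr
```

An unprivileged caller would get the weaker "delay" mode rather than "block", but either way an in-progress install would no longer race a shutdown silently.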

Input methods

I briefly discussed the overall current state of IMEs (Input Method Engines) and input methods (ibus), first wishing that in the future we could treat IMEs as a logical layer above keyboard layouts (as Microsoft Windows has done, at least) and not on the same level as keyboard layouts, since an IME is inherently dependent on the keyboard layout even if it overrides it.

For Wayland there is also the question of whether we should use the native input method protocol for more secure input. This also relates to the recent support for ibus that Alex Larsson has added to Flatpak and ibus upstream, which lets flatpak applications connect to ibus through a flatpak portal. But currently the GTK immodules work okay, and no one seems to be actively working on using the Wayland input-method protocol.

Emoji input and rendering

Recently, more emoji have been added with each new Unicode release and their usage is proliferating. Unicode 9 also changed some of their character properties, which caused rendering regressions in Pango that have now largely been fixed. Matthias Clasen added emoji input support to GTK3's simple input context and, together with Behdad, recently merged the needed fixes to Cairo and Pango for color emoji rendering to work correctly. We also got some Pango patches merged upstream to improve the rendering of compound emoji, and Peng Wu continues work on fixing the Pango testsuite for emoji for Unicode 9.0. So we will now have emoji input support provided by GTK, ibus (by Takao Fujiwara), and ibus-typing-booster (Mike Fabian).

At the same time, there are now a few competing color emoji fonts: the older Emoji One (recent releases apparently have a problematic license), Google's Noto Emoji, and Emoji Two (a free fork of Emoji One). However, Noto has the best coverage, already supporting Unicode 10, so we feel it would make the better default emoji font for Fedora. It is not currently possible for a user to configure a preferred emoji font without tweaking fontconfig rules or installing/removing the font packages.
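For reference, the fontconfig tweak alluded to here can be a small per-user alias rule. This is only a sketch, assuming a fontconfig new enough to understand the generic "emoji" family; it would live in a file under ~/.config/fontconfig/conf.d/:

```xml
<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <!-- Prefer Noto's color emoji whenever an emoji font is requested -->
  <alias>
    <family>emoji</family>
    <prefer>
      <family>Noto Color Emoji</family>
    </prefer>
  </alias>
</fontconfig>
```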

Really great progress has been made anyway, so things are starting to look a lot brighter in this area in which the Linux Desktop has been lagging.

Glibc locales

Mike Fabian and Rafał Lużyński became upstream glibc locale maintainers and they have been busy reviewing the backlog of patches and merging fixes. This follows on from earlier work Mike did updating glibc to Unicode 7, 8, 9, and now 10, which got merged quickly this time due to this change. Work will also start on merging CLDR locale data to glibc which should lead to better cross-platform consistency.


Flatpak

I already mentioned, in the Input Method section above, the ibus support that has recently been added to Flatpak and ibus. Additionally, changes to fontconfig are being discussed upstream to allow flatpak apps to get corrected cache file paths, given the difference in prefix from the host system. All this should lead to a better i18n experience for Flatpak users.


Fonts

Pravin Satpute talked briefly about fonts, and I briefly demoed that, at least for Wikipedia's multilingual list of languages, we now have quite good language font coverage in a default Fedora install.

Modularity and Atomic

What the Modularity initiative means for i18n packages is still open for discussion.
Do we need modules for fonts and input methods? I don't know yet. Perhaps not so much, but after doing one of the Modularity User Test sessions at Flock, I think language modules may make sense for locales, fonts, input methods, langpacks, etc. Module profiles might allow installing different sets or sizes of packages for Workstation than for Atomic, for example.

It is also not yet clear how locales should be handled for Atomic, or langpacks for Atomic Workstation (or even Flatpak, for that matter). Should they be done using overlays?

Other ideas

Pravin talked about QA related to i18n, and then I closed with a few wilder ideas: dynamic locale switching, separation of translations from packages, and multilingual configuration (locale and font fallback).


In the following session, Sundeep Anand described his translation statistics project, Transtats. This is a web application for tracking the status of package translations across upstream (Github, Pagure, etc.), the translation system (Zanata, Transifex, Damned Lies, etc.), and downstream in Fedora or other distros. Later there are plans to track not just translation stats but even individual strings, so it should be possible to detect string breakage after a string freeze, for example. It is hoped that this tool (written in Python and Django), which we plan to deploy in Fedora infrastructure, will help the different interested parties to see and understand the current translation status of Fedora: developers will be able to check that their packages are well translated and carry the latest translations, testers and project managers can see how well core packages are translated across releases, and translators will obviously be keen to see how well and quickly their work is getting up- and downstream. We hope this service will increase the quality and quantity of translations in Fedora and in upstream projects.

To do all this, Transtats has to do a number of things, including following the schedule for Fedora releases, tracking package builds (this is still pending), pulling translation statistics from translation systems, and mapping packages in releases to translation system and upstream branches. It is also planned that Transtats will send out notifications to developers and translators when certain events happen.


In the last session, which focused more on l10n, Pravin Satpute described the status of Fedora G11n, Alex Eng presented the latest developments around the Zanata translation system, and Jean-Baptiste discussed the state of Fedora L10n with some nice graphical slides.

After some post-session discussions, some of us went for dinner together on Hyannis Main Street. Overall it was a very beneficial day, and everyone seemed to get a lot out of it.

UX Hiring Process investigation ongoing

Posted by Suzanne Hillman (Outreachy) on September 16, 2017 06:03 PM

Hi folks!

I’ve been finding, meeting, and interviewing folks at various points in their UX careers. It’s been fascinating, and reminds me that I’m _much_ better at networking when I have a reason to talk to people.

I’ve not yet had a chance to analyze my interviews in depth, but I have noticed some interesting trends.


Portfolios and online presence

  1. It’s difficult to know what to put in a UX portfolio, especially for researchers. Lots of folks talk about having a story for your reader, to make it easier to understand and follow what you’ve done. I’m collecting information on what this could mean in practice.
  2. It’s really helpful to have an online presence that shows how you think about design, whether a blog, Twitter, Behance, Dribbble, or GitHub. Some companies won’t consider someone without an online presence demonstrating their thought processes and personality. Put links to your online UX presence in your resume.

Finding your first job

  1. There aren’t a lot of companies hiring folks who are new to UX, and there seems to be a bit of a lull right now even among those who typically would be. Your chances of getting a job are much better with at least 2–3 years of experience.
  2. Most internships require that one is currently or recently in school. It’s also difficult to find mentors or apprenticeships.
  3. Folks doing the hiring may or may not understand what UX is, what each UX role involves, or what the best things to look for are. Job descriptions may or may not involve throwing everything they might want in there, so it’s often worth applying even if you don’t know all of what they are asking for.
  4. Lots of companies are playing catchup — they feel like they should have gotten into UX 10 years ago, so think they need senior UXers to get things jumpstarted. Those senior UXers are typically under-resourced and rarely have time or space to take on juniors and help get them the experience they need. Unfortunately, without higher ups understanding and believing in UX, even hiring seniors often results in failure of the UX team.
  5. Very few folks I talked to have specific tools they prefer folks to know how to use, except in cases where getting permission to use specific tools is complicated. This is especially relevant given the sheer number of tools out there, whether for wireframing, prototyping, or creating high-fidelity visual designs.

Keep learning

  1. It’s hard to figure out what online resources and books are the most useful to read or follow.
  2. It’s important to keep learning more about UX — even for folks who already have a UX job. The field is constantly evolving.

Getting experience and taking criticism

  1. It’s difficult to get experience before you have a job in UX. This may be worse for researchers, as visual designers have an easier time selling their skills (but ‘looking pretty’ may not actually translate to ‘useful’).
  2. Even if you’re not great at sketching by hand, it’s really important to be able to jot your ideas down on paper visually. This offers a way to communicate your thoughts, and is quick and easy enough that you’re less likely to get attached to the ideas you’ve come up with. In turn, the sketchiness and reduced attachment make criticism easier to take.
  3. Work with other folks on your designs. Practice giving and taking criticism, because no one gets it right on the first try. Design is a process for a reason, and there’s a lot of different pieces to it.

Possible solutions?

Getting experience

This is a significant problem. Given that few places are hiring folks without a couple of years of experience, newbies and career changers need to find ways to get that experience.

For those who can afford it and have access, in-person UX programs like Bentley’s master’s in human factors and Jared Spool’s Center Centre are an excellent choice. These offer curated and guided information, connections, and practice at design. Unfortunately, these and other programs rely on proximity and available time, and are not inexpensive (although Center Centre tries to mitigate that part).

There are also online courses which can be helpful, and bootcamps both on and offline, but these again cost money and may or may not offer built-in networking.

So how does one find work, even if unpaid? There are a few options that I am aware of:

  1. If you can make Code for Boston’s weekly meetings, that’s a good option. They tend to have ideas for what to work on, and specifically mention both developers and designers.
  2. You can find other folks looking for UX work, and see if they want to team up with you on something. This is especially useful if you each have different skills: like a researcher and a visual designer, or a UX person and a developer. This does require being able to find those folks, and is one possible option for how my project can offer help. These designs are less likely to go live, but any projects are better than no projects.
  3. You might be able to find non-profits who need help, although this does require a) that the non-profit understands the value of what you can offer them, b) that you know the right people to talk to, and c) that they have someone able to implement the suggestions you make. Attending a Give Camp may help with those problems, but the New England page appears to not be functional (its website goes to a GoDaddy page). This may be another thing I can offer help with through UXPA.
  4. Outreachy might be another option. This is a program to help women and minorities get into open source software, and is not specifically focused on UX. However, I was able to do a UX research and interaction design project with the Fedora Project through Outreachy, and it was fabulously helpful and interesting.
  5. You may be able to find an open source project to help out with, such as Red Hat’s Patternfly Design Library (also on github).
  6. Do you know any developers working on projects in their spare time? See if you can help them out.
  7. If you are currently in school, or just recently left, look for design internships. These are easier to get if you have some design experience, perhaps through your classwork.

Options 2 and 6 may be more difficult for designers just starting out, as they are much easier to do if one has some guidance for how to approach design problems.

Finding a mentor

Mentorship is really important, especially if you cannot afford to attend school and get guidance that way. Unfortunately, it can be difficult to find a mentor, and precisely what a mentor can offer or do for you varies by the mentor.

Ideally, I think that mentors should offer:

  1. Guidance around how to start or continue your UX learning process.
  2. Suggestions for how to improve the things that you’ve identified as weaknesses in your skillset. Alternately, ways to identify those weaknesses.
  3. Portfolio and resume reviews.

Beyond this, it’d be lovely if mentors could offer networking help (eg: connections to open positions and folks who may eventually have open positions), and suggestions for projects to work on.

The XX UX community offers a mentorship matching program in some cities, although Boston is not yet one of them. This may be another opportunity for my project to help folks out, whether by working with XX UX (which would mean it’s only available to women), or by building on their example and making our own program.

Curated resources

Given how much information there is out there, a possible way to help folks out would be to offer resources that experienced UX folks agree would be useful to those who are starting out.

These resources could include basic guidance for portfolios for various design specialities, design interviews (including design exercises), and job applications, as well as structure within which to learn design processes.

Also relevant might be instruction on persuasion, on communicating and working within cross-functional teams, and on presentation skills (both creating a presentation and presenting it).

We might want to include specific information such as the use of shortcut keys within design programs (Ctrl+D, Alt+Ctrl+Shift+arrow keys for movement, etc.), recommendations for tools to start out with and an introduction to their use, and suggestions for how to use those tools to more easily share and maintain one’s designs (since all good design involves many different folks across various different teams).

Finally, we could offer recommendations for good books for folks in various stages of learning.

Keep learning!

One of the most important things for someone new to a field is to keep learning. Be visibly interested in and passionate about your field: it’ll communicate itself to those you are working with, and will help keep you informed and aware of what’s going on.

At the same time, don’t believe everything you read: some folks make things look more clear-cut and simple than they actually are. Reality is messy!

Don’t be afraid to try things out. No one in UX knows everything about it, and mistakes are how you learn.

Remember to mention others who had a role in any design you talk about: design isn’t typically an individual process (collaboration is important!), and hiring managers want to know that you understand and can talk about your role in the project.

If you’re interested in research, learn both qualitative and quantitative methods. Most of your work will probably be in qualitative spaces, but it’s useful to be able to measure success (are we accomplishing our goals?). It’s also helpful to understand basic data visualization techniques.

Remember to take pictures at all stages of your process! This will be hugely helpful when it comes time to make your portfolio.

Flock 2017 event report

Posted by Eduardo Mayorga on September 16, 2017 04:22 AM

This year I had the opportunity to attend Flock to Fedora for the first time. I had previously attended two FUDCons in the LATAM region, so this was a new experience for me as this conference only takes place in the North America and EMEA regions. Flock 2017 itself had a different focus, that is, do-sessions rather than just talks.

I will focus on three sessions that I attended, those that were of particular interest to me and from which I learned the most.

How to add tests to your packages

This was a workshop on adding tests, written as Ansible playbooks, to Fedora packages. This is done by adding a tests directory to the dist-git repository for the package or module. Once the proper infrastructure is in place, anyone will be able to submit pull requests via Pagure to add tests to any package; for now, you need to create a repository under https://upstreamfirst.fedorainfracloud.org.

We were first shown a simple example for adding tests with the gzip package. All the steps are documented in the wiki at https://fedoraproject.org/wiki/CI/Tests#Testing_specific_RPMs. Since I had no previous experience with Ansible, I had to learn the very basics during the workshop itself, but that was good enough for this session’s purpose.
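The gzip example boils down to a couple of commands; this is only a sketch assuming the repository layout described on the wiki page above (a gzip repository on upstreamfirst containing a tests.yml playbook with a "classic" tag):

```shell
# Fetch the tests for the gzip example used in the workshop
git clone https://upstreamfirst.fedorainfracloud.org/gzip.git
cd gzip

# Run the test playbook locally; the --tags value selects the test
# environment ("classic" means running directly on the host)
sudo ansible-playbook --tags classic tests.yml
```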

I started working on adding tests to one of the packages I maintain: nik4, a tool to export OpenStreetMap data to raster image formats. I expect to be able to push them very soon; I still have to figure out what input file would cover the most important corner cases, and how to set up a database for the test while pulling in the minimum number of packages. Since there are no tests at all in %check for this package, there is nothing to migrate to the CI infrastructure.

Fedora kernel process review

This was an open session with no slides or prepared material, as it was more discussion-oriented. Several issues were discussed, namely arbitrary branching (something that will not affect the kernel package), non-Red Hat participation in Fedora kernel maintenance, and alternative architectures (which as of now cannot be shipped on Copr, as builds take too long and time out), but most of the time was spent on the workflow for fixing kernel bugs.

The Fedora kernel is not very different from upstream: there are only about 80 out-of-tree patches applied to the kernel shipped downstream. Users are welcome to request that new modules be enabled via Bugzilla, as long as they have a good reason for it.

Currently there are only two kernel maintainers, both full-time employees at Red Hat, and they have to deal with a whole bunch of very low-quality bug reports, especially from ABRT, which are mostly ignored. Their main priority is to have CVEs covered. A major problem is that the kernel team does not have a wide variety of hardware for testing, so having more users run the tests from the Kernel Testing Initiative will help catch more hardware-specific bugs.

An idea that came out of this discussion was to organize a Kernel Test Day as part of the schedule for every Fedora release starting from F27, so we expect to see this happen in September, after the Linux Plumbers Conference.


I also attended very interesting sessions on Modularity and Fedora Atomic. I found myself writing a Dockerfile for python-spyder to submit for review; it still needs some work. I also received a Fedorator, which I took back to Nicaragua and which is ready for events, thanks to Neville Cross.

Flock 2017: trip report

Posted by Adam Williamson on September 16, 2017 03:56 AM

Better late than never, here’s my report from Flock 2017!

Thanks to my excellent foresight in the areas of ‘being white’ and ‘being Canadian’ I had no particular trouble getting through security / immigration, which was nice. The venue was kinda interesting – the whole town had this very specific flavor that seems to be shared among slightly second-class seaside towns the world over. Blackpool, White Rock or Hyannis, there’s something about them all…but the rooms were fairly clean, the hot water worked, the power worked, and the wifi worked fairly well for a conference, so all the important stuff was OK. Hyannis seriously needs to discover the crosswalk, though – I nearly got killed four times travelling about 100 yards from the hotel to a Subway right across the street and back. Unfortunately the ‘street’ was a large rotary with exactly zero accommodations for pedestrians…

Attendance seemed a bit thinner than usual, and quite heavily Red Hat-y; I’ve heard different reasons for this, from budget issues to Trump-related visa / immigration issues. It was a shame. There were definitely still enough people to make the event worthwhile, but it felt like some groups who would normally be there just weren’t.

From the QA team we had myself, Tim Flink, Sumantro Mukherjee and Lukas Brabec. We got some in-person planning / discussion done, of course, and had a team dinner. It was particularly nice to be in the same place as Sumantro for a while, as usually our time zones are awful, he gets to the office right when I’m going to bed – so we were able to talk over a lot of stuff and agree on quite a list of future projects.

The talks, as usual, were generally very practical, focused and useful – one of the nicest things about Flock is it’s a very low-BS conference. I managed to do some catch-up on modularity plans and status by following the track of modularity talks on Thursday. Aside from that, some of the talks I saw included the Hubs status update, Stef’s dist-git tests talk, the Greenwave session, the Bodhi hackfest, Sumantro’s kernel testing session, and a few others.

I gave a talk on how packagers can work with our automated test systems. As always seems to be the case I got scheduled very early in the conference, and again as always seems to be the case, I wound up writing my talk about an hour before giving it. Which was especially fun because while I still had about ten slides to write, my laptop started suffering from a rather odd firmware bug which caused it to get stuck at the lowest possible CPU speed. Pro tip: LibreOffice does not like running at 400MHz. So I wasn’t entirely as prepared as I could have been, but I think it went OK. I had the usual thing where, once I reached the end of the talk, I realized how I should have started it, but never mind. If I ever get to give the talk again, I’ll tweak it. As a footnote, Peter Jones – being Peter Jones – naturally had all the tools and the know-how necessary to take my laptop apart and disconnect the battery, which turned out to be the only possible way to clear the CPU-throttling firmware state, so thanks very much to him for that!

As usual, though, the most productive thing about the conference was just being in the same place at the same time as lots of the folks who really make stuff happen in Fedora, and being able to work on things in real time, make plans, and pick brains. So I spent quite a lot of time bouncing around between Kevin Fenzi, Dennis Gilmore, and Peter Jones, trying to fix up Fedora 27 and Rawhide composes; we got an awful lot of bugs solved during the week. I got to talk to Ralph Bean, Pingou, Randy Barlow, Pengfei Jia, Dan Callaghan, Ryan Lerch, Jeremy Cline and various others about Bodhi, Pagure, Greenwave and various other key bits of current and future infrastructure; this was very useful in planning how we’re going to move forward with compose gating and a few other things. In the kernel testing session, Sumantro, Laura Abbott and myself came up with a plan to run regular Test Days around kernel rebases for stable releases, which should help reduce the amount of issues caused by those rebases.

We started working on a ‘rerun test’ button for automated tests in Bodhi during the Bodhi hackfest; this is still a work in progress but it’s going in interesting directions.

Discussion about the greenwave tool.

Posted by David Carlos on September 15, 2017 03:10 PM


Because of my work on Google Summer of Code this year, I was invited to attend the Fedora Contributor Conference (Flock) as a volunteer, helping the organization staff record some sessions and write about what was discussed in them. This year, the Flock conference was held in Cape Cod, Massachusetts. It was an incredible experience, allowing me to keep up with great discussions among the Fedora developers. In this post I will summarize what was discussed in the session Gating on automated tests in Fedora - Greenwave, proposed by Pengfei Jia.

The Session

Bodhi is the Fedora service that lets developers propose package updates for a Fedora release. Among Bodhi's many features, one that is important here is that it queries ResultsDB for automated test results and displays them on updates. Greenwave, the tool presented in the session, is a service that Bodhi will query to decide whether an update is ready to be pushed, based on its test results.

The main purpose of Greenwave is to improve the use of automated tests in Bodhi: at the moment, the automated tests (executed by Taskotron) serve only for visualization and have no useful integration with Bodhi. This is a problem because a developer can release a new update without checking whether the tests pass, which can break other packages in Fedora. To avoid updates with broken tests, Greenwave defines policies that enforce checking the results of certain tests.

The goal is to give developers an API through which they can define policies that Greenwave will use to check the results of specific tests, telling Bodhi (via fedmsg) what the results were. Based on Greenwave's response for a package's tests, Bodhi can decide whether a new update can be released. At this link you can find an example of the Greenwave API in use and of how its policies work. Greenwave uses ResultsDB to access test results. During the session, one participant asked whether it would not be better for packagers to check the policies manually during package development. The answer was that these policies have been running for four years and this participant was the first to propose that, so enforcing these policies during Fedora updates is necessary.
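To make the gating idea concrete, here is a tiny illustrative sketch, not the real Greenwave API or policy format: a policy lists required test cases, and an update is blocked unless every one of them has passed. The test case names and the check_policy function are hypothetical.

```shell
# Illustrative sketch only: a policy as a list of required test cases.
required="dist.rpmdeplint dist.upgradepath"

check_policy() {
  # $1 is a space-separated list of "testcase:outcome" results
  for testcase in $required; do
    case " $1 " in
      *" $testcase:PASSED "*) ;;                  # requirement satisfied
      *) echo "blocked: $testcase"; return 0 ;;   # gate the update
    esac
  done
  echo "all required tests passed"
}

check_policy "dist.rpmdeplint:PASSED dist.upgradepath:FAILED"   # prints "blocked: dist.upgradepath"
```

The real service makes essentially this decision by querying ResultsDB and announcing the outcome over fedmsg, so Bodhi never has to inspect individual results itself.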

Flock 2017

Posted by Parag Nemade on September 15, 2017 02:20 PM

This was my second Flock conference; the last one I attended was in 2014. This time it took place in Cape Cod, MA, US, from 29th August to 1st September. This Flock was also a nice experience.

This time there was no keynote session (as per the schedule, and as announced on the flock-planning list); the first talk on day 1 was “Fedora State of the Union” from Fedora Project Leader Matthew Miller.


He talked about the history of Fedora releases, their usage, and downloads per release, with graphs. One graph showed that the number of IP connections to Fedora update servers increases with each release; another, “Geologic Eras of Fedora”, showed which Fedora release series were used most by users. He then emphasized that Fedora Atomic CI and Fedora Modularity are the upcoming developments in Fedora.

I attended a number of talks and do-sessions, and will write here about a few of them.


I attended the workshop “Become a Container Maintainer“, which was very useful. I have been doing package reviews for many years now. The Fedora project added namespaces in PkgDB for containers and modules, which made adding them to Fedora easy. Over the last few months I had seen a few container specs (that is, Dockerfiles) submitted for review. I was interested in trying to review a few of them but did not find the time; after attending this workshop, I got to know the concepts Fedora has introduced for writing Dockerfiles for Fedora containers. During the workshop we formed groups of two and were asked to write a Dockerfile for whatever software we wanted to containerize. Our Dockerfile was then given to another group for review, which was a very helpful exercise. Hopefully I will look more into the container guidelines and try reviewing Dockerfiles submitted to Fedora. Thanks to Adam Miller and Josh Berkus for such a nice workshop.

Another workshop I attended was “Atomic Host 101“. Dusty had provided the needed lab files and images in advance, and I had downloaded them the day before, so it was an easy start for me. Dusty first gave an introduction to the Atomic concept. The workshop was divided into parts, starting with preparation (part 0) and ending with containerizing applications (part 5). If you want to follow this workshop step by step, start with this blog post. The workshop covered how to use rpm-ostree commands, a few things about container storage, how to upgrade or roll back an Atomic host, and some commands to try the experimental features Atomic currently provides.
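The upgrade/rollback part of the workshop can be sketched in a few commands (a sketch for an Atomic Host, shown for illustration rather than as the exact lab steps):

```shell
rpm-ostree status          # show the booted and any pending deployments
sudo rpm-ostree upgrade    # download and stage the newest tree
sudo systemctl reboot      # boot into the staged deployment
sudo rpm-ostree rollback   # switch back to the previous deployment if needed
```

Because each deployment is a complete, immutable tree, rollback is just a matter of booting the older one; nothing has to be uninstalled.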


One of the talks I attended was “New Container Technologies”, where Dan Walsh presented upcoming container work. He discussed the Open Container Initiative (OCI), which is designing open standards for containers, and the Skopeo project, which can pull container images from remote registries and inspect image information. He then discussed image-signing goals, and system containers, which can be installed with the atomic command, using skopeo to pull the image and ostree to store the image layers on disk. He also covered standalone containers and the container image development tool Buildah, then gave an introduction to the CRI-O project and showed kpod, a set of management tools for CRI-O. This was a good session for learning what container development is currently happening.

Another talk I attended was “Automate Building Custom Atomic Host with Ansible”, where Trishna Guha talked about the architecture of an Atomic host and the conventional eight-step way to compose your own tree. She then explained how those eight steps can be simplified with Ansible, which makes deployment much easier.

Conference Organization

This time Flock was organized differently, with topic-specific tracks. The tracks mixed do-sessions and talks (with a two-hour lunch break daily). I like the concept of such a schedule, but I saw both advantages and disadvantages. The advantage: the 2+ hour do-sessions proved very productive, since they ran longer and covered their topics in detail, so attendees really learned the material. But some of those sessions overlapped with good talks that I could not attend. My point is this: if only talks run in parallel, you can choose one and watch the videos of the others later. But when you must choose between a two-hour workshop and one or more talks, it becomes difficult. You could just attend the workshops and watch the talk videos later, but then you lose the chance to listen to the presenter live and resolve any questions on the spot.

Final words

Overall, Flock was a nice experience. Everywhere we went, people were talking about Atomic, Containers, and Modularity. One good thing was the “Advertise your session” slot on the first day, where I got to see every session owner and learn what they had planned for their sessions. Another was the last session on the last day, where people talked about what they achieved and what they have planned for the future. Also, the first three days had two-hour lunch breaks, which proved really good for discussing topics with other Flock attendees. Keep these things for the next Flock too.

My suggestion for a future Flock conference would be to run only talks in parallel for, say, two or more days, and on the remaining days hold talks in the morning and workshops after lunch, rather than mixing them.

I want to thank the Flock organizing committee for such a nice selection of hotel and conference venue; I liked it very much. Thanks also to Red Hat for sponsoring my air travel to this Flock.


Discussion about the future of fedmsg.

Posted by David Carlos on September 15, 2017 01:10 PM


Because of my work on Google Summer of Code this year, I was invited to attend the Fedora Contributor Conference (Flock) as a volunteer, helping the organization staff record some sessions and write about what was discussed in them. This year, the Flock conference was held in Cape Cod, Massachusetts. It was an incredible experience, allowing me to keep up with great discussions among the Fedora developers. In this post I will summarize what was discussed in the session The Future of fedmsg?, proposed by Jeremy Cline.

The Session

The Fedora infrastructure has several different services that need to talk to each other. One simple example is the AutoQA service, which listens for events triggered by the fedpkg library. If only two services interact, the problem is minimal, but when several applications exchange requests and responses with several others, it becomes huge. The FEDerated MeSsaGe bus (fedmsg) is a Python package and API defining a brokerless messaging architecture for sending and receiving messages between applications.

fedmsg does not have a broker to manage the publish/subscribe process carried out by the services that interact with it. This leads to performance and reliability problems, because every service that consumes messages from fedmsg has to subscribe to every existing topic. Another issue with the absence of a broker is that it is common to lose messages, which do not always reach their destination. Jeremy proposed using a broker (or several brokers) to fix these issues, and wrote demo code showing the benefits of a broker over fedmsg's current architecture.
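The "subscribe to everything, filter locally" pattern is easy to picture with a small sketch. The handle_topic function below is hypothetical; the topic names follow fedmsg's real naming convention, but this is an illustration of the filtering burden, not fedmsg code:

```shell
# Without a broker, every consumer sees every topic and must discard
# most messages itself.
handle_topic() {
  case "$1" in
    org.fedoraproject.prod.bodhi.*) echo "handle: $1" ;;
    *) echo "ignore: $1" ;;   # dropped by the consumer, not the bus
  esac
}

handle_topic org.fedoraproject.prod.bodhi.update.request.testing   # handled
handle_topic org.fedoraproject.prod.git.receive                    # ignored
```

With a broker, that filtering moves to the server side: a consumer subscribes only to the topics it cares about, and the broker can also queue messages for it while it is down.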

A great discussion emerged from this demo, including reflection on whether Fedora really needs fedmsg to be reliable. Other problems Jeremy pointed out were fedmsg's documentation and the existing tools for consuming and publishing messages (fedmsg-hub has a rather confusing setup). That was my review of the session; based on my Google Summer of Code work (I used fedmsg to consume the Anitya events), I agree with Jeremy. Adding a broker to manage publish/subscribe could reduce fedmsg's resource consumption, would make it easier to add new services that consume messages from the API, and would make fedmsg more reliable.

Fedora User Wiki

Posted by Paul Mellors [MooDoo] on September 15, 2017 08:41 AM

If you weren’t aware, Fedora users can create their own user information page on the wiki.  Mine for example is https://fedoraproject.org/wiki/User:Paulmellors [please note that it’s currently pretty sparse]

One of the more interesting things on the page is my list of badges.  I’ll not explain what they are, as you can read about them here

To add a list of your badges to your wiki, all you need to add to the page is this

{{ #fedorabadges: paulmellors }}      just make sure you replace my username with your own, and you’ll have all your hard-earned shiny badges on your wiki. #funtimes

Alternatively, you can add an infobox that displays all your information [including badges], this is the wiki code for mine

{{Infobox user
|REAL-NAME= Paul RJ Mellors
|location= Nottingham, UK
|birthday= April 15, 1972
|fas-name= paulmellors
|irc-nick= MooDoo
|irc-channels= #fedora-ambassadors #fedora-uk #fedora-diversity #fedora-mktg #fedora-join}}

Now you can populate your wiki page with information that people might like to read; it’s one way of getting to know you. You never know, someone might like to buy you a beer one day……[hint hint] 😉

4 cool new projects to try in COPR

Posted by Fedora Magazine on September 15, 2017 08:00 AM

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.


Exa is an alternative to the ls command. By default it uses colors to distinguish between different file types and access permissions. It can also show the status of source code managed by git, and display directories as a tree view. Exa is written in the Rust programming language.

Installation instructions

The repo currently provides exa for Fedora 25, 26, and Rawhide. To install exa, use these commands:

sudo dnf copr enable eclipseo/exa
sudo dnf install exa


Bazel is a tool that automatically builds and tests software. It can produce packages for deployment on Android and iOS. It uses the Bazel query language to declare dependencies. Bazel can also produce dependency graphs of the entire source code. Thanks to these graphs, it rebuilds only what is necessary. Bazel supports Python, Java, C++, C and Objective C natively. It also works with any other language using its extension language, Skylark.

Installation instructions

The repo currently provides Bazel for EPEL 7 and Fedora 24, 25, 26, and Rawhide. To install bazel, use these commands:

sudo dnf copr enable vbatts/bazel
sudo dnf install bazel


Riot is a client for the decentralized messaging and data-transfer protocol Matrix. It supports voice and video conferencing, and customizable, keyword-specific notifications. It allows file sharing as well, including file archiving. Riot can also connect to other communication systems like IRC, Slack and Gitter.

Installation instructions

The repo currently provides Riot for EPEL 7 and Fedora 25 and 26. To install the Riot package, use these commands:

sudo dnf copr enable ansiwen/Riot
sudo dnf install riot

Simon Tatham’s Portable Puzzle Collection

This package collects several simple one-player puzzle games. It includes favorites like minesweeper, sudoku, n15-puzzle, and many others. It uses its own framework to enable the games to run on multiple platforms. The games are written in C.

Installation instructions

The repo currently supports Fedora 24, 25, 26, and Rawhide. To install the puzzles, use these commands:

sudo dnf copr enable ribenakid/puzzles
sudo dnf install puzzles

PSA: If you had dnf-automatic enabled and updated to Fedora 26, it probably stopped working

Posted by Adam Williamson on September 15, 2017 02:34 AM

So the other day I noticed this rather unfortunate bug on one of my servers.

Fedora 26 included a jump from DNF 1.x to DNF 2.x. It seems that DNF 2.x came with a poorly-documented change to the implementation of dnf-automatic, the tool it provides for automatically notifying of, downloading and/or installing updates.

Simply put: if you had enabled dnf-automatic in Fedora 25 or earlier, using the standard mechanism it provided – edit /etc/dnf/automatic.conf to configure the behaviour you want, and run systemctl enable dnf-automatic.timer – and then upgraded to Fedora 26, it probably just stopped working entirely. If you were relying on it to install updates for you…it probably hasn’t been. You can read the full details on why this is the case in the bug report.

We’ve now fixed this by sending out an update to dnf which should restore compatibility with the DNF 1.x implementation of dnf-automatic, by restoring dnf-automatic.service and dnf-automatic.timer (which function just as they did before) while preserving the new mechanisms introduced in DNF 2.x (the function-specific timers and services). But of course, you’ll have to install this update manually on any systems which need it. So if you do have any F26 systems where you’re expecting dnf-automatic to work…you probably want to log into them and run ‘dnf update’ manually to get the fixed dnf.
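On an affected system, the recovery described above amounts to a manual update plus re-checking the timer; a sketch (unit names as restored by the fixed package):

```shell
# Pull in the fixed dnf that restores dnf-automatic.timer
sudo dnf update dnf

# Re-enable the compatibility timer and start it immediately
sudo systemctl enable --now dnf-automatic.timer

# Confirm the timer is actually scheduled
systemctl list-timers 'dnf-automatic*'
```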

PSA ends!

Flock 2017@Cape Cod

Posted by Alex Eng on September 14, 2017 10:50 PM
It was another exciting year for me and another exciting conference to attend: Flock @ Cape Cod, from 29 August to 1 September, where I gave my talk about Zanata.

This time I focused more on Globalisation as a whole in Fedora and how the Zanata platform can fit into the picture. Our talk, covering Globalisation (Pravins), Localisation (Jibecfed), and Zanata (myself: aeng), was scheduled on the second-to-last day of the event. Globalisation@Flock 2017

There were a lot of discussions around challenges, improvements, and solutions. Thanks to facilitation from Bex and participation from community members, we managed to come up with a few options to solve the group's challenges. Most of our discussion is listed out as issues in https://pagure.io/g11n/issues, thanks to Jibecfed.

The vibe at this Flock was somewhat different from what I experienced at the previous event, in a better way. I enjoyed how the event was scheduled and planned to give us more time to mingle, talk to people, join social activities, and have more discussion around problems while working together. I think this is one of the improvements in this year's event because, after all, community members spread all around the world can only meet each other at this kind of event, and for me there is nothing better than meeting them in person and having face-to-face conversations. 

Social activities were planned throughout the week as well, after full-on mornings of talks and workshops. We had a game night with a candy exchange, an arcade night, and a team dinner, which I think is a good way to bring everyone together to celebrate the effort and contributions towards the Fedora project. 

For me, even though it was tiring, with the long-haul flight and all the hard work, time, and effort put into the project, at the end of the day the recognition, making new friends, meeting new people, and knowing you're part of a community that appreciates you and your work make it all worth it :)  

Sight seeing in Cape Cod


Flock 2017

Posted by Rafał Lużyński on September 14, 2017 08:59 PM
Flock to Fedora, the annual Fedora users and contributors conference, was held this year from August 29 to September 1 in Hyannis, MA, a tourist resort on the Atlantic coast. As I was privileged to participate in this event, here is my report about it. Day #0 August 28, 2017 Usually this is

PHP version 7.0.24RC1 and 7.1.10RC1

Posted by Remi Collet on September 14, 2017 06:10 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.1.10RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 26-27 or in the remi-php71-test repository for Fedora 24-25 and Enterprise Linux.

RPMs of PHP version 7.0.24RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 25 or in the remi-php70-test repository for Fedora 24 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

PHP version 7.2 is in its development phase; version 7.2.0RC2 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.10RC1 is also available in Fedora rawhide (for QA).

EL-7 packages are built using RHEL-7.4.

The RC version is usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

Randa Report: The Fall of KDateTime

Posted by Daniel Vrátil on September 14, 2017 03:29 PM

The main goal for me and Volker at this year's Randa Meeting was to port KCalCore, our calendaring library, away from KDateTime. KDateTime and KTimeZone are classes for handling date, time and time zones that were used back in the KDE4 (and earlier) days, when Qt’s QDateTime wasn’t good enough for us, mainly due to missing time zone support.

In Qt5, QDateTime finally gained all the features we need, which made it possible for us to leave KDateTime behind. But we couldn’t just go and replace “K” with “Q” everywhere – there are many subtle (and not-so-subtle) differences in API and behavior, and some missing features of QDateTime, that required us to carefully consider each step. The fact that we started working on this over 2 years ago shows how challenging a task this was.

We did not expect to finish the port here, but once we dived into it, things went fairly well and I’m glad to say that after four days of intensive hacking, surrounded by Swiss Alps and mountains of chocolate, we succeeded. KCalCore is now free of KDateTime and KTimeZone and that in turn made (after some minor adjustments) the rest of KDE PIM free of KDELibs4Support. That’s a huge milestone for us.

Many thanks to John Layt for laying the initial ground for the porting and to Mario and Christian for the steady stream of chocolate and sweets to soothe our nerves :-)

If you want to help us to continue improving Kontact and other parts of KDE, please donate to the Randa fundraiser so that we can continue organizing similar productive sprints in the future. Thank you!

Keep DNF automatic updates rolling in Fedora 26

Posted by Fedora Magazine on September 14, 2017 12:37 PM

DNF Automatic is an optional Fedora component you can configure for automatic updates of packages. DNF Automatic is provided by the (aptly named) dnf-automatic package. This package has been available in previous releases of Fedora as well as in Fedora 26. As with all releases, though, Fedora 26 introduced numerous updates. These updates included DNF version 2, in which DNF Automatic configuration changed in an undocumented way.

As a result of the change, if you previously configured DNF Automatic and then upgraded to Fedora 26, your system may not be getting automatic updates. However, there’s now a DNF package that restores the previous configuration interface. But of course, you’ll need to install that update manually. After that, automatic updates will resume on your system.

Restoring automatic updates

Through updates-testing

If you don’t want to wait for a stable update through normal channels, you can use packages intended for advance testing. To update, temporarily enable the updates-testing repository with sudo and dnf:

sudo dnf --enablerepo=updates-testing update dnf\*

Through stable updates

To restore automatic updates using a stable update (available later this week), do one of the following:

  1. If you’re running Fedora Workstation, use the Software application to apply updates.
  2. Or, run dnf upgrade from the command line. You may want to run this command under screen or tmux to reduce the risk of interruption.

Checking results

After you complete either of the above, run the following command:

rpm -q dnf

You should receive this version (or higher):


Don’t stop there!

Using DNF Automatic alone isn’t enough to keep your system fully updated. Remember, DNF Automatic doesn’t reload or restart services, or reboot systems. These are often critical steps in applying security updates. Don’t forget to take these steps with the systems you care about.
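One way to check whether services or the whole system still run outdated code after an update is the needs-restarting plugin (a sketch; this assumes dnf-plugins-core is installed, and the -r option only exists in newer versions of the plugin):

```shell
# List running processes that still use updated (replaced) binaries or libraries
sudo dnf needs-restarting
# Newer plugin versions can also report whether a full reboot is advisable
sudo dnf needs-restarting -r
```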

Photo by mrigendra chauhan on Unsplash.

Downloading RHEL 7 ISOs for free

Posted by Debarshi Ray on September 14, 2017 12:34 PM

A year and a half ago, frighteningly close to 1st April, Red Hat announced the availability of a gratis, self-supported, developer-only subscription for Red Hat Enterprise Linux and a series of other products. Simply put, if you went to developers.redhat.com, created an account and clicked a few buttons, you could download a RHEL ISO without paying anything to anybody. For the past few months, I have been investigating whether we can leverage this to do something exciting in Fedora Workstation. Particularly for those who might be building applications on Fedora that would eventually be deployed on RHEL.

While trying to figure out how the developers.redhat.com website and its associated infrastructure works, I discovered that its source code is actually available on GitHub. Sadly, my ineptitude with server-side applications and things like JBoss, Ruby, etc. meant that it wasn’t enough for me to write code that could interface with it. Luckily, the developers were kind enough to answer my questions, and now I know enough to write a C program that can download a free RHEL ISO.

The code is here: download-rhel.c.

I’ll leave it here in the hope that some unknown Internet traveller might find it useful. As for Fedora Workstation, I’ll let you know if we manage to come up with something concrete. 😉

Installing syslog-ng on AWS Linux AMI

Posted by Peter Czanik on September 14, 2017 08:46 AM

You do not have to live without your favorite syslog implementation, even on the Amazon Web Services (AWS) Linux AMI. This Linux distribution is based on Red Hat Enterprise Linux version 6, and it takes only minimal extra work to install syslog-ng on it.

Before you begin

There are many different Linux distributions available on AWS, and many of them include syslog-ng as an easy-to-install part of the distribution. The one I am writing about is the Amazon Linux AMI, the custom Linux distribution maintained by Amazon:

  • The AWS Linux AMI is based on RHEL 6, so you can use syslog-ng built for that. This means that you can enable the EPEL repository and use syslog-ng from there. While it works, it is not recommended, as it contains an ancient version (3.2). I would rather recommend using my unofficial syslog-ng packages.
  • The latest version available for RHEL 6 is 3.9. This still needs the EPEL repository for dependencies, and you will need to enable my repository as well.

If your company policy suggests using EPEL instead of the latest version, read my blog about the new core features of syslog-ng, which include advanced message parsing and disk-based buffering, and think again.

Installing syslog-ng

Enter the commands below to install syslog-ng 3.9 on AWS Linux AMI:

  1. yum-config-manager --enable epel
    Enables the EPEL repository that contains some of the dependencies necessary to run syslog-ng. The repo file is already there, but it is not enabled.
  2. yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng39epel6/repo/epel-6/czanik-syslog-ng39epel6-epel-6.repo
    Enables my unofficial syslog-ng repository for RHEL 6. Skip this step only if you are not allowed to use external repositories other than EPEL.
  3. rpm -e --nodeps rsyslog
    Removes rsyslog – which conflicts with syslog-ng – without removing packages, like cronie, that depend on syslog functionality.
  4. yum install -y syslog-ng
    Installs syslog-ng. The “-y” option saves you from answering a few prompts.
  5. chkconfig syslog-ng on
    Makes sure that syslog-ng starts on boot.
  6. /etc/init.d/syslog-ng start
    Starts syslog-ng.

Automating syslog-ng installation

Installing applications from the command line is OK when you have a single machine. When using a private or public cloud, automation is a must if you do not want to waste a lot of time (and money). You can easily automate the above steps by adding them as a shell script while launching a new machine in AWS.

yum-config-manager --enable epel
yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng39epel6/repo/epel-6/czanik-syslog-ng39epel6-epel-6.repo
rpm -e --nodeps rsyslog
yum install -y syslog-ng
chkconfig syslog-ng on
/etc/init.d/syslog-ng start

If you use the web console to launch a new instance, you can paste the above script in step #3 (“configure instance”) in the text box under “Advanced Details”.

Of course, it is even more elegant to turn the above commands into a cloud-init script. I leave that exercise up to the reader.
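One possible shape of such a script, as a starting point for that exercise (an untested sketch, assuming the AMI's cloud-init supports the standard runcmd module):

```yaml
#cloud-config
# Hypothetical user-data sketch: runs the same commands once at first boot.
runcmd:
  - yum-config-manager --enable epel
  - yum-config-manager --add-repo=https://copr.fedorainfracloud.org/coprs/czanik/syslog-ng39epel6/repo/epel-6/czanik-syslog-ng39epel6-epel-6.repo
  - rpm -e --nodeps rsyslog
  - yum install -y syslog-ng
  - chkconfig syslog-ng on
  - /etc/init.d/syslog-ng start
```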


By now, syslog-ng is installed on your system with the default configuration. Before tailoring it to your environment, make sure that everything works as expected.

You can check that syslog-ng is up and running using the /etc/init.d/syslog-ng status command, which prints the process ID of the syslog-ng application on screen.

You can check (very) basic functionality using the logger command. Enter:

logger this is a test message

And check if it is written to /var/log/messages using the tail command:

tail /var/log/messages

Unless your system is already busy serving users, you should see a similar message as one of the last lines:

Sep 11 13:09:39 ip-172-x-y-z ec2-user[3395]: this is a test message

What is next?

Here I list a few resources worth reading if you want to learn more about syslog-ng and AWS or if you get stuck along the way:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Installing syslog-ng on AWS Linux AMI appeared first on Balabit Blog.

Improving Linux laptop battery life: Testers Wanted

Posted by Hans de Goede on September 14, 2017 07:23 AM
My next project for Red Hat is to work on improving Linux laptop battery life. Part of the (hopefully) low-hanging fruit here is using kernel tunables to
enable more runtime power management. My first target here is SATA Link Power Management (LPM) which, as Matthew Garrett blogged about 2 years ago, can lead to a significant improvement in battery life.

There is only one small problem: there have been some reports that some disks/SSDs don't play well with Linux's min_power LPM policy, and that this may lead to system crashes and even data corruption.

Let me repeat this: Enabling SATA LPM may lead to DATA CORRUPTION. So if you want to help with testing this, please make sure you have recent backups! Note this happens only in rare cases (likely only with a couple of specific SSD models with buggy firmware). But still, DATA CORRUPTION may happen, so make sure you have BACKUPS.

As part of his efforts 2 years ago, Matthew found this document, which describes the LPM policy the Windows Intel Rapid Storage Technology (IRST) drivers use by default; most laptops ship with these drivers installed.

So, based on an old patch from Matthew, I've written a patch adding support for a new LPM policy called "med_power_with_dipm" to Linux. This saves
(almost) as much power as the min_power setting, and since it matches the Windows defaults, I hope that it won't trip over any SSD/HDD firmware bugs.

So this is where my call for testers comes in: for Fedora 28 we would like to switch to this new SATA LPM policy by default (on laptops at least), but
we need to know that this is safe to do. So we are looking for people to help test this. If you have a laptop with a SATA drive (not NVMe) and would like to help, please make BACKUPS and then continue reading :)

First of all, on a clean Fedora (no powertop --auto-tune, no TLP), do "sudo dnf install powertop", then close all your apps except for 1 terminal, maximize that terminal and run "sudo powertop".

Now wait 5 minutes; on some laptops the power measurement is a moving average, so this is necessary to get a reliable reading. Now look at the
power consumption shown (e.g. 7.95W) and watch it for a couple of refreshes, as it sometimes spikes when something wakes up to do some work. Write down the lowest value you see; this is our base value for your laptop's power consumption.
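To see why the wait matters, here is a tiny illustration (my own sketch, not powertop's actual smoothing algorithm) of how an averaged reading stays inflated for a while after a busy period:

```python
# Illustrative only: an exponential moving average showing why a
# smoothed power reading needs time to settle to the true idle value.
def smoothed(readings, alpha=0.1):
    avg = readings[0]
    history = []
    for r in readings[1:]:
        avg = alpha * r + (1 - alpha) * avg  # EMA update step
        history.append(avg)
    return history

# A laptop that idles at 8 W after a busy 20 W start:
history = smoothed([20.0] + [8.0] * 60)
print(round(history[4], 1))   # early reading, still inflated: 15.1
print(round(history[-1], 1))  # after many samples, settled near: 8.0
```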

Next install the new kernel and try the new SATA LPM policy. I've done a scratch-build of the Fedora kernel with this patch added, which
you can download here.

After downloading all the .x86_64.rpm files there into a dir, do from this dir:
sudo rpm -ivh kernel*.x86_64.rpm

Next download a rc.local script applying the new settings from here, copy it to /etc/rc.d/rc.local, and make it executable: "sudo chmod +x /etc/rc.d/rc.local".
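I have not reproduced the linked script here, but conceptually it needs to do something like the following (a hypothetical sketch, not the actual rc.local from the link; med_power_with_dipm only exists with the patched kernel):

```shell
#!/bin/sh
# Apply the new SATA LPM policy to every SATA host at boot
for p in /sys/class/scsi_host/host*/link_power_management_policy; do
    echo med_power_with_dipm > "$p"
done
```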

Now reboot and do: "cat /sys/class/scsi_host/host0/link_power_management_policy" this should return med_power_with_dipm, if not something is wrong.

Then close all your apps except for 1 terminal, maximize that terminal and run "sudo powertop" again. Wait 5 minutes as last time, then get a couple of readings and write down the lowest value you see.

After this continue using your laptop as normal, please make sure that you keep running the special kernel with the patch adding the "med_power_with_dipm" policy. If after 2 weeks you've not noticed any bad side effects (or if you do notice bad side effects earlier) send me a mail at hdegoede@redhat.com with:

  • Report of success or bad side effects

  • The idle powerconsumption before and after the changes

  • The brand and model of your laptop

  • The output of the following commands:

  • cat /proc/cpuinfo | grep "model name"

  • cat /sys/class/scsi_device/*/device/model

I will gather the results in a table which will be part of the to-be-created Fedora 28 Changes page for this.

Did I mention already that, although the chance that something will go wrong is small, it is non-zero, and that you should create backups?

Thank you for your time.

New badge: CommOps Superstar !

Posted by Fedora Badges on September 13, 2017 07:17 PM
CommOps SuperstarYou're a proud member of the Community Operations (CommOps) team, helping tie Fedora together!

Network isolation using NetVMs and VPN in Qubes

Posted by Kushal Das on September 13, 2017 04:25 PM

In this post, I am going to talk about the isolation of network for different domains using VPN on Qubes. The following shows the default network configuration in Qubes.

The network hardware is attached to a special domain called sys-net. This is the only domain which directly talks to the outside network. Then a domain named sys-firewall connects to sys-net, and all other VMs use sys-firewall to access the outside network. These kinds of special domains are also known as NetVMs, as they can provide network access to other VMs.

Creating new NetVMs for VPN

The easiest way is to clone the existing sys-net domain to a new domain. In my case, I have created two different domains, mynetwork and vpn2 as new NetVMs in dom0.

$ qvm-clone sys-net mynetwork
$ qvm-clone sys-net vpn2

As the next step, I opened the settings for these VMs and marked sys-net as the NetVM for both. I have also installed the openvpn package in the TemplateVM so that both new NetVMs can find that package.
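The same can be done from the dom0 command line instead of the settings dialog (a sketch for Qubes 3.x-era tools; the exact qvm-prefs syntax may differ on your release):

```shell
# Point both new NetVMs at sys-net for their own upstream connectivity
qvm-prefs -s mynetwork netvm sys-net
qvm-prefs -s vpn2 netvm sys-net
```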

Setting up openvpn

I am not running openvpn as a proper service because I want to switch between the different VPN services I have access to. That also means a bit of manual work to set up the right /etc/resolv.conf file in the NetVMs and any corresponding VMs which access the network through them.

$ sudo /usr/sbin/openvpn --config connection_service_name.ovpn

So, the final network right now looks like the following diagram. The domains (where I am doing actual work) are connected into different VPN services.

F26-20170912 updated isos released

Posted by Ben Williams on September 13, 2017 03:42 PM

The Fedora Respins SIG is pleased to announce the latest release of Updated 26 Live ISOs, carrying the 4.12.11-300 kernel and 7 patched CVEs (https://bodhi.fedoraproject.org/upda…/FEDORA-2017-6764d16965). These can be found at https://dl.fedoraproject.org/pub/alt/live-respins/ (short link: http://tinyurl.com/live-respins), seeders are welcome and encouraged, however addition of additional trackers is strictly prohibited. These isos save about 800M of updates on new installs. We would also like to thank the following irc nicks for helping test these isos: ImBatman,brain83,dowdle,linuxmodder, Southern_Gentlem

Building container image with modular OS from scratch

Posted by Tomas Tomecek on September 13, 2017 02:27 PM

We were sitting in the “Modularity UX feedback” session at Flock 2017 when Sinny Kumari raised an interesting question: “Can I create a container image with modular OS locally myself?” Sinny wanted to try the modular OS on different CPU architectures.

The container image can be created using Image Factory, which can be really tough to set up.

I’m so glad that the platform team already solved this problem during development of Boltron, in their GitHub repo fedora-modularity/baseruntime-docker-tests.

They created a neat way of creating a docker base image from scratch using mock.

In order to do this, you should follow the instructions from the README of the repo. Before running avocado run setup.py, we need to change the configuration of mock, because by default it uses Boltron (F26) repos and targets x86_64.

I did this on my Raspberry Pi. The configuration is present in resources/base-runtime-mock.cfg:

  1. I targeted arm CPU architecture:

    -config_opts['target_arch'] = 'x86_64'
    -config_opts['legal_host_arches'] = ('x86_64',)
    +config_opts['target_arch'] = 'armhfp'
    +config_opts['legal_host_arches'] = ('armhfp', 'armv7l' )
  2. I used Modular Rawhide compose repo:


Modular dnf is still not available in mainline. Martin is providing the RPM via his COPR repo mhatina/DNF-Modules. The important note is that modular DNF is only available for the x86_64, i386, and ppc64le architectures.

Let’s build the image!

$ sudo python2 ./setup.py
Successfully built dac3c6598bef

command  'docker rmi base-runtime-smoke-scratch' succeeded with output:
Untagged: base-runtime-smoke-scratch:latest

PASS 1-./setup.py:BaseRuntimeSetupDocker.testCreateDockerImage

Test results available in /tmp/avocado_avocado.core.jobwv6R1t

Once the build is done, we can try it out:

$ sudo docker run --rm -ti base-runtime-smoke-scratch:latest bash

bash-4.4# cat /etc/system-release
Fedora modular release 27 (Twenty Seven)

bash-4.4# uname -i

Once the patches for modular DNF are in mainline, this will be a lot more interesting!

Blueborne – How to disable Bluetooth in Fedora

Posted by Luc de Louw on September 13, 2017 01:29 PM

Yesterday, 2017-09-13, Red Hat released information about the mitigation of the BlueBorne vulnerability in RHEL: https://access.redhat.com/security/vulnerabilities/blueborne. For Fedora, the new updates are probably still in the build queue and/or being QAed by the community. For a quick fix, you can disable … Continue reading

The post Blueborne – How to disable Bluetooth in Fedora appeared first on Luc de Louw's Blog.

Flock 2017 event

Posted by Petr Hracek on September 13, 2017 11:14 AM

Between 29 August and 1 September I attended, with other folks from Brno, the Flock conference, which took place in Cape Cod, Massachusetts. Flock 2017 is over, but a lot of the talks will stay in our memories. The talks were exciting, fantastic, and simply awesome. It was also my first visit to the USA.

I attended several talks and I would like to mention some of them. Especially, I would like to write about the following talks and workshops: “Factory 2.0, future and Fedora”, “Fedora Legal”, “Gating on automated tests in Fedora - Greenwave”, and “Let’s create a test for modules/containers”.

Let me briefly introduce what the talks were about.

Flock 2017 - Keynote

Flock began with a keynote presented by Matthew Miller. His presentation provided a brief summary across all Fedora versions. Matt showed several graphs about active community members and the feedback gathered from the Fedora 25 and 26 releases. He highlighted Fedora Atomic CI and Modularity as the two current objectives for Fedora. After Matt’s presentation, the presenters gave two-minute introductions to their talks. This was useful to get a picture of the presenters and helped me plan which talks and workshops I wanted to focus on.

Factory 2.0, future and Fedora talk

The Factory 2.0 team delivers several tools that automate things in the Fedora world. The tools automate the distribution of modules, and they are used for testing, building, and updating.

Let’s have a brief look at them.

Waiver DB

WaiverDB is used together with ResultsDB to record waivers against test results.

When a test fails, the maintainer can waive the result so that it does not block the other tests. The service is used together with Greenwave.


Greenwave

Greenwave is a service used to decide whether a software artifact can pass certain gating points in the software delivery workflow. The policy defined in Greenwave specifies which tests must pass in order to move to a new state.


Bodhi

Bodhi is a system that triggers and stores builds. The Factory 2.0 team enhanced Bodhi to display results from Greenwave, and they also added support for module updates, which are part of Modularity.


ODCS

ODCS is an abbreviation of “On Demand Compose Service”. This service can be used by other services, and it updates docker images with the latest updates.

Factory 2.0 announced support for chain rebuilds, which are important from the Modularity point of view. The service responsible for chain rebuilds is called Freshmaker.


Freshmaker

Freshmaker, which waits for new fedmsg messages, is a new tool handled by the Factory 2.0 team. Freshmaker is used mainly by the module-build-service (MBS), which looks after modules delivered by the Modularity team or Fedora module maintainers.

Fedora Legal - talk

Tom Callaway talked about legal aspects. The talk was fantastic, insightful, and full of information about software licensing. One of the important slides brought information about software patents that will expire soon, so that the covered techniques can then be used in Open Source. Tom also talked about why software should be safe and stressed that Fedora needs to remain free, which is awesome.

Gating on automated tests in Fedora - Greenwave - talk

Greenwave is a service for deciding whether a software artifact can pass certain gating points in the software delivery pipeline.

The results generated by Greenwave are stored in WaiverDB.

The checks that can be enforced in the pipeline are:

* dist.abicheck – checks ABI/API compatibility

* dist.rpmdeplint – checks for errors in RPM packages in the context of their dependency graph

* dist.upgradepath – checks whether the upgrade path is broken or not.
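As an illustration of how such a gating decision is queried, a request might look something like this (a hypothetical sketch based on my reading of the Greenwave docs; the hostname and build NVR are made up, and the endpoint and field names may have changed):

```shell
# Ask Greenwave whether a build satisfies the gating policy
curl -X POST https://greenwave.example.org/api/v1.0/decision \
  -H 'Content-Type: application/json' \
  -d '{"decision_context": "bodhi_update_push_stable",
       "product_version": "fedora-26",
       "subject": [{"item": "glibc-2.26-5.fc26", "type": "koji_build"}]}'
```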

How to add tests to your packages - workshop

This workshop raised more issues for CI tests as well as for the Module-Testing-Family (MTF).

* https://fedoraproject.org/wiki/CI/Tests

* https://fedoraproject.org/wiki/Changes/InvokingTests#Terminology

The workshop tried to test several packages with Ansible. I tried to do it with MTF, and it worked as expected, but the artifacts were missing.

Let’s create a test for your module/container

This was a workshop presented by Jan Scotka and me. In 20 minutes we presented a brief overview of what MTF does, how it works, and some basic tests. Later on, the attendees had a chance to write tests for their containers, and some of them managed to do it. During the workshop we received positive feedback about use cases for MTF and, of course, about our future plans, like testing containers in OpenShift. A good point raised by Jan Kaluza was to include in our documentation some examples of how to use MTF for multiple-host test scenarios.

What can I say at the end? Flock 2017 was the greatest event I have ever visited. There were several talks and sessions each day over the 3 days, which was fine, and after getting back home, I slept around 15 hours.

Flock is over

And Flock 2017 is over, but Flock 2018 will be here soon…