Fedora People

Welcome to Caleigh

Posted by Randy Barlow on June 23, 2017 08:58 PM

Last week was an exciting week for Bodhi as our summer intern Caleigh Runge-Hottman began her summer Outreachy internship. Caleigh will be adding support for batched updates over the summer, and probably some other cool stuff too. Be sure to say hi to her on Freenode in #bodhi or #fedora-apps!

Building Design Team Approved Presentations

Posted by Mary Shakshober on June 23, 2017 07:24 PM

Throughout the last month I’ve been working on creating an updated presentation template for the Fedora community to use. With Flock coming up quickly, there’s no better time to give these new templates a shot as a vehicle to present your talks!

I’ve made these templates available on three different platforms: LibreOffice Impress, reveal.js/slides.redhat, and GoogleSlides. While all three are perfectly usable ways to build your slides, I want to point out a few differences between them.

  1. LibreOffice slides have SVG graphic elements, so all of the visuals will look very clear, crisp, and professional during a presentation! The only *very minor* limitation is that in using SVGs, the subtle dark blue to light blue gradient that I designed (and that is present in the other two forms) is not present here. My option with LibreOffice was to either keep the gradients but have the visuals be PNG (which resulted in a blurry appearance when enlarged) or to sacrifice the gradient but keep the quality at the high level provided with an SVG. I opted for quality over gradient use 🙂
  2. GoogleSlides… User friendly and accessible…. and apparently not accepting of SVG graphics. So this is in fact an option, but I would probably suggest one of the other formats before this one. It’s a sad day when a graphic designer’s vector art gets bitmapped!
  3. reveal.js/slides.redhat is the template that I think will produce the best product for presentations, with clean SVG backgrounds AND gradients *happy dances*. I’ve embedded the reveal template below.


It doesn’t seem that I’m able to attach the actual documents here, so if you’re interested in using any of the mentioned templates feel free to check out the Design Team ticket that they all *should* be available on. The most updated files are in the last comment of the thread. If you have any issues accessing them through the ticket, feel free to reach out to me and I will manually email them to you too 🙂

Happy presenting!

Plex Media Player and MPV with CUDA

Posted by Simone Caronni on June 23, 2017 05:28 PM

The Plex Media Player is now part of the multimedia repository for Fedora 25+. It works as a standalone player and also as the main interface for an HTPC setup, where the “TV interface” starts as the main thing when you power up your system.

Plex Media Player uses MPV in the background, so any compilation option that was added to MPV is now also part of Plex Media Player, since it uses the same libraries that were already available in the multimedia repository.

If you are using Gnome Software, you will also find it in the software selection screens.

To install it on Fedora, just run the following command:

dnf -y install plex-media-player

You will then find it along with the other applications in your menu.

Normal desktop interface

To get to the normal desktop interface just look for the Plex Media Player icon in your menu. You will be greeted with the familiar Plex web interface, with the main difference being that the player is local through the MPV library.

Enabling Plex Media Player startup at boot

If you are planning to do an HTPC installation, and would like Plex Media Player to start instead of the login screen the moment you boot the device, execute the following commands as root:

dnf install plex-media-player-session
systemctl set-default plex-media-player
echo "allowed_users = anybody" >> /etc/X11/Xwrapper.config

The first command installs the required files (services, targets and PolicyKit overrides). The second command instructs the system to boot by default into the Plex Media Player target; that is, X immediately followed by the player itself. The third command allows the X server to be started by the Plex Media Player user; otherwise only root and users logged in through a console can start it.

You will be greeted with the TV interface just after boot:

If you want to go back to your normal installation (let’s say GNOME), revert the changes (again, type the following commands as root):

systemctl set-default graphical
sed -i -e '/allowed_users = anybody/d' /etc/X11/Xwrapper.config
rpm -e plex-media-player-session


This has already been available for a long time, but with FFmpeg 3.3, dynamic loading of CUDA support is also enabled in MPV, so the hard dependency on the CUDA library is gone and the binaries load the library dynamically:

$ strings /usr/bin/mpv | grep libcuda
$ strings /usr/lib64/libmpv.so.1.25.0 | grep libcuda

So assuming you have the Nvidia driver already installed with the appropriate CUDA part, you can then play a video with the following command line:

mpv --hwdec=cuda /path/to/video.file

Then check with nvidia-smi or with the Nvidia control panel whether the video engine is being utilized:

If you want to enable that by default, just make sure your configuration file has something like this inside:

$ cat ~/.config/mpv/mpv.conf 
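The contents of the configuration file are not reproduced in this rendering; judging from the command line above, the relevant line would presumably be the following (an assumption, not the original listing):

```
# Enable CUDA hardware decoding by default, mirroring --hwdec=cuda
hwdec=cuda
```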

Earn Fedora Badges designing Badges!

Posted by Fedora Community Blog on June 23, 2017 05:03 PM

Fedora Badges is a perfect place to start if you want to help out the Fedora Design Team. “I’m not a designer!” “I can’t draw!” “I’ve never opened Inkscape” – you might say. And that is totally fine! Everybody can help out, and none of those reasons will stop you from designing your first badge (and getting badges for designing badges).

Finding a Badges ticket

There are quite a few badges tickets. It can be difficult to find one that’s open, possible to implement, and that has a concept. So we decided to put together a list of relatively easy badge designs that are up for grabs. This post will go out about once a month and provide you with such a list of badge tickets, carefully selected by us!

First of all let’s look at the process of creating a badge. If you can, attend a badges workshop. If none are available, no problem! Here’s a step-by-step guide with tips. You can also ask questions on IRC (#fedora-design) or at our bi-weekly meeting every other Wednesday at 7-8 AM EST on #fedora-meeting-1 on freenode.

These badges are still up for grabs! Try designing one of the following badges and we will help you through the process:

  • #432: “I’ve been there“, for visiting a Fedora booth at any event
    For this badge design, reuse this artwork (https://badges.fedoraproject.org/badge/the-panda-is-in) and add a panda in front! (https://badges.fedoraproject.org/badge/lets-have-a-party-fedora-25)
  • #333: “Oh, wait!“, for canceling a Koji build
    This artwork just needs a little tweaking, and it will be ready! Download the svg and make the suggested changes in the comments.
  • #150: “Testing Day participant“, for contributing to a Fedora QA test day
    This badge needs original artwork, but it will be a breeze! Create a drawing of a piece of paper, and put the letters A+, B, C, D etc in separate files to create an entire series.

The post Earn Fedora Badges designing Badges! appeared first on Fedora Community Blog.

gThumb: View and manage your photos in Fedora

Posted by Fedora Magazine on June 23, 2017 08:00 AM

Fedora uses Eye of GNOME to display images, but it’s a very basic program. Out of the box, Fedora doesn’t have a great tool for managing photos. If you’re familiar with the Fedora Workstation’s desktop environment, GNOME, then you may be familiar with GNOME Photos. This is a young app available in GNOME Software that seeks to make managing photos a painless task. You may not know that there’s a more robust tool out there that packs more features and looks just as at home on Fedora. It’s called gThumb.

What is gThumb?

gThumb is hardly a new piece of software. The program has been around since 2001, though it looks very different now than it did back then. As GNOME has changed, so has gThumb. Today it’s the most feature-rich way of managing images using a GNOME 3 style interface.

While gThumb is an image viewer that you can use to replace Eye of GNOME, that’s only the beginning of what it can do. Thanks to the inclusion of features you would normally find in photo managers like digiKam or the now discontinued Picasa, I use it to view the pictures I capture with my DSLR camera.

How gThumb handles photos

At its core, gThumb is an image viewer. While it can organize your collection, its primary function is to display pictures in the folders they’re already in. It doesn’t move them around. I consider this a plus.

I download images from my camera using Rapid Photo Downloader, which organizes and renames files precisely as I want them. All I want from a photo manager is the ability to easily view these images without much fuss.

That’s not to say that gThumb doesn’t offer any of the extra organizational tools you may expect from a photo manager. It comes with a few.

Labeling, grouping, and organizing

Determining your photo’s physical location on your hard drive is only one of many ways to keep up with your images. Once your collection grows, you may want to use tags. These are keywords that can help you mark and recall pictures of a certain type, such as birthdays, visits to the park, and sporting events. To remember details about a specific picture, you can leave a comment.

gThumb lets you save photos to one of three collections, indicated by three flags in the bottom right corner. These groups are color coded green, red, and blue. It’s up to you to remember which collection corresponds to which color.

Alternatively, you can let gThumb organize your images into catalogs. Catalogs can be based on the date images were taken, the date they were edited, or by tags.

It’s also an image editor

gThumb provides enough editing functions to meet most of my needs. It can crop photos, rotate them, and adjust aspects such as contrast, lightness, and saturation. It can also remove red-eye. I still fire up the GIMP whenever I need to do any serious editing, but gThumb is a much faster way of handling the basics.

gThumb is maintained by the GNOME Project, just like Eye of GNOME and GNOME Photos. Each offers a different degree of functionality. Before you walk away thinking that GNOME’s integrated photo viewers are all too basic, give gThumb a try. It has become my favorite photo manager for Linux.

PHP version 7.0.21RC1 and 7.1.7RC1

Posted by Remi Collet on June 23, 2017 05:09 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections, for a parallel installation, the perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.0.21RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 25 or remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

RPM of PHP version 7.1.7RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26 or remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

PHP version 7.2 is in development phase, version 7.2.0alpha2 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.7RC1 is also available in Fedora rawhide (for QA).

An RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

Providing a trace when an Android application crashes

Posted by Jean-Baptiste Holcroft on June 22, 2017 10:00 PM

Just as with the software on your computer, filing a bug report is an important contribution to free software. Here are the few steps to follow for an Android phone. I use Fedora, but there is no reason this should work very differently on another distribution.

Sometimes I don’t mind a crash, because it’s minor or so rare that I wouldn’t know how to reproduce it; other times it’s really annoying or frequent, and in that case I have to report it. In my case, the OpenFoodFacts, OpenBeautyFacts and OpenPetFoodFacts applications were giving me trouble.

I rarely use these applications, but I like their principle, and a few times a year I go through everything that comes out of my shopping bags to make my small contribution to their projects. However, I was hitting crashes that were common to several of the applications and trivial to reproduce, so I described what I was seeing on the project’s GitHub repository.

The source of the problem is probably that my Fairphone uses a now somewhat old version of Android (4.2), and even though the developers are careful not to exclude such devices too quickly by choosing the APIs they rely on appropriately (thank you!), they may not have the hardware to reproduce the crash locally. In that case, they ask for a trace of the crash. Roughly, it’s the whole explanatory chain that lets you say: “from the interface, this button called this function, which relies on that other one, which triggered this error”. In English this is also called a backtrace.

OK, it’s very simple:

  1. Enable the “USB debugging” option in the Developer options section of your phone’s settings

    If the “Developer options” menu is missing: go to the general Settings and open “About device”. Then tap “Build number” seven times to unlock the “Developer options”.

  2. Install adb with “sudo dnf install adb”

  3. Connect your phone to your computer
  4. Running the command “adb devices” should detect it

    If it shows nothing, make sure your phone detects that it is plugged in over USB, and that it reports the connection as being in “USB debugging” mode.

To extract the logs, nothing could be simpler: just type “adb logcat” in your terminal.

You will probably discover that your phone is very chatty! Fortunately, you can restrict the output to errors with the command: “adb logcat "*:E"”.

Remember to report your problems! (and to be nice :))

Second batch of boards!

Posted by Angela Pagan (Outreachy) on June 22, 2017 07:47 PM

Second batch of boards!

1: The bull springs into action

2: Piloting

3: Zoom out

4: Space ship

5: Warp drive

6: Saturn

7: The ship flying in

8: Finding the rover

Writing a Babel codemod plugin

Posted by Sarup Banskota on June 22, 2017 07:00 PM

If you haven’t already, you first want to check out the Babel Handbook, a fantastic document that walks you through the way Babel is designed. There is even a section on Writing a Babel plugin; however, for me, a newbie, the foo === bar example was a bit too simple.

To do a quick recap, Babel goes through 3 primary stages - parse, transform and generate.

  1. Parse - Take in code to transform, identify familiar tokens, and generate an Abstract Syntax Tree
  2. Transform - Traverse the AST from the previous step, and modify it into the AST representation for the transformed code
  3. Generate - Take in the final AST and convert it into transformed code

A Babel plugin comes in as part of the transform stage. Here’s what a plugin export file looks like on the inside, just for illustration:

<script src="https://gist.github.com/9467a6bfba0cddcd469866fddeebd21b.js"> </script>

Every plugin has a visitor property that describes the traversal order and the modifications to the original AST. The visitor description leverages babel-types, a library that helps a plugin author identify what kind of Node one is dealing with, its properties, and so on, and gives Babel the information needed to process, validate, and transform it.

You can read at length about the context of the plugin we’re going to write today, but in a nutshell, we want to be able to transform the core plugins within Babel to contain a name. E.g. for the plugin export file described above, we want to modify it into:

<script src="https://gist.github.com/95e91ba30d139398a244f7eb2cec7de8.js"> </script>

So in the two examples above, we want to get into the default export function, find the object being returned, and inject the name as the first property.

At least, that’s what I concluded, and I coded out the first version. Soon enough, Henry pointed out some examples of plugin export files that don’t follow that structure.




At first glance, it appears to be a crazy feat; those files look pretty different. As he hints, however, we can indeed just look for the visitor property and plug name in as another property of the parent object.

Not so fast 😉 Here’s an example where this logic won’t apply:

<script src="https://gist.github.com/d3cf799c49995a644fd96fba2c2a2749.js"> </script>

Well, it does look like we can apply our original idea to files that follow the structure above. Let’s work with both approaches, and hopefully they cover most file structures. Skimming quickly through the plugin handbook, let’s start with this template:

<script src="https://gist.github.com/8b6d5158008ca7f302236e2398a58a4a.js"> </script>

To build a better mental model for approaching the traversal, we first want to visualise the AST we’re dealing with. To do this, we can plug our initial source code into an AST explorer.

Some tinkering around the interface reveals that AST Explorer highlights the Nodes for us.

Screenshot of an AST Explorer highlight

On the AST, click through the part that says ExportDefaultDeclaration, expanding your way until you arrive at the ObjectExpression we’re targeting. That’s exactly how I made a mental model for approaching this plugin. Next you’ll need to spend some time skimming through the Babel Types README. This will help you familiarise yourself with the various types and properties available. From here on, it’s pretty simple.

We want to traverse the AST using Babel’s APIs, such that we arrive at the ObjectExpression we found earlier. By repeatedly plugging in some console.log() statements, you can arrive at the following:

<script src="https://gist.github.com/dd51c9653983a3573400ffb6f04ceda4.js"> </script>

We’ll make those chained .get() statements compact, and finally reading through the Transformation Operations section in the handbook, we realise that inserting a sibling node is exactly what we want to do.

We’re therefore done with our first iteration!

<script src="https://gist.github.com/e7abf9fdb230f8f5dadc8e78d9f05ae4.js"> </script>

That solved one file structure case, and now we want to account for the (hopefully easier) case Henry pointed out, the one where we want to plug the name in right before the visitor.

At the outset, that should be much easier to visualise, and it’s easy to come up with a good first draft for the visitor, such as this:

<script src="https://gist.github.com/ade8d35fef5b97ec2828a5cba69d4f6d.js"> </script>

The observant will note that we want to prevent inserting name twice, in the event that a file structure contains both types of nodes. Therefore, we’ll do a little name check. With some standardisation of variable names, and sharing the name check, our final plugin looks as follows:

<script src="https://gist.github.com/96265e368f843f7f1fdd01be03139eff.js"> </script>

That is it! I’ve published the same code as an npm package if you’re keen to take a look. If you clone it and follow the instructions on the README, you should be able to see it in action!

The best part for me throughout this learning exercise was that I realised at the end I forgot to add a name property to this plugin 😄 Guess what I’m gonna run 😈

$ codemod --require babel-register -o index='{"pluginName": "babel-plugin-add-name-to-plugin"}' --plugin src/index.js src/index.js

Always be eating your own dog food!

Modular F27 Server Edition – initial design

Posted by Adam Samalik on June 22, 2017 03:41 PM

Recently, I started a discussion on the Server mailing list about building the Fedora 27 Server Edition using Modularity. Langdon White is already working on a change request. If it is accepted, there will be a lot of work in front of us. So let’s start by writing blog posts!

To build the Fedora 27 Server Edition using Modularity, I think we need to focus on two things:

First is the initial design of the basic set of modules for the F27 Server Edition – including the Host and Platform modules, as well as other ‘application/content’ modules. To make this easier, I’m proposing a tool temporarily called The Graph Thing.

Second is a great packager UX for the build pipeline. This will lead to more content built by the community. It will include The Graph Thing and BPO, and I will be talking about it in a different post.

This post covers the first part – the initial design.

Modularizing the F27 Server Edition – introduction

To build Fedora 27 Server Edition using Modularity, we need to split the monolithic distro into smaller modules. Modules can come in multiple streams and on independent lifecycles. However, in Fedora 27, all modules will be on the same lifecycle of 13 months as the rest of Fedora 27.

Splitting Fedora 27 into modules

The Platform team defines the Host and Platform modules by deciding which packages are needed in which modules and including their dependencies into these modules. They already started defining the Host and Platform modules in the Host and Platform GitHub repository.

Other modules will be defined based on the Fedora 27 Server Edition use-cases. We need to create modules for all server roles and other components that will be part of the server. Defining these modules will also influence the Platform module as it will include some of the packages shared between other modules.

Splitting the distribution into individual modules is not easy. We need to work with hundreds of packages with complex dependencies and carefully decide what package goes into which module. To do the initial distribution design, I am proposing a tool that will help us. I temporarily call it The Graph Thing and I hope the name will change soon.

After the initial split, we expect other people to add modules to the distribution as well. Making the packager UX great is crucial if we want to get a lot of content from the community. Packagers will also be able to use The Graph Thing for the initial design of their modules, and the Build Pipeline Overview (BPO) to monitor their builds.

The Graph Thing – a tool for the initial design

The Graph Thing is a tool that will help people with the initial design of individual modules when modularizing a distribution. It will also help packagers with adding other modules later on.

It will work with resources from the Fedora infrastructure, inputs from the user, and will produce a dependency graph as an output. An example with pictures is better than a thousand words:

Example – defining a nodejs module

The Dependency Thing - first run

In this example, there are three things available as resources:

  • Host module definition
  • Platform module definition
  • Fedora 27 repository – including all the Fedora 27 packages

A packager wants to add a new module: nodejs. The packager has specified in the input that they want to visualize the host and platform modules, as well as the new nodejs module, which doesn’t exist in the infrastructure yet. The packager added a nodejs-6.1 package into that module and wants to see if that’s enough for the module, or if they need to add something else.

The output shows that the nodejs module will directly require two other packages, library-foo-3.6 and nodejs-doc-6.1, and that nodejs-doc-6.1 in turn needs crazy-thing-1.2.

Seeing this, the packager makes the following decisions:

  1. The library-foo-3.6 package is specific to nodejs, so it should be part of the nodejs module.
  2. The nodejs module should not include documentation, as it will be shipped separately. They will modify the nodejs-6.1 package so it no longer requires nodejs-doc-6.1.

With the decisions made, the packager wants to see how things will look if they apply the changes. So they modify the input of The Graph Thing and run it again:

Great! It looks like the module now includes everything that is needed. The packager can make the change in the nodejs-6.1 package and submit this module for a build in the Fedora infrastructure.

Value of The Graph Thing tool

As we saw in the example above, the tool can help with designing new modules by looking at the Fedora 27 packages and giving an instant view of how a certain change would look, without rebuilding anything.

It will also help us identify which packages need to be shared between individual modules, so we can decide whether to include them in the Platform or to make a shared module. An obvious example of a shared module is a language runtime, like Python or Perl. Other, less obvious ones can be identified with this tool.

Apart from the initial design of the Fedora 27 Server Edition, this tool could be also a part of the packager UX which I will be talking about soon.

Early implementation

I have already created an early partial implementation of this tool. You can find it in my asamalik/modularity-dep-viz GitHub repo.

Next steps

The tool needs to be rewritten to use libsolv instead of multiple repoquery queries, so that it produces correct output and takes seconds rather than minutes to do so. It could also be merged with another similar project called depchase, which the Platform team has used to define the Base Runtime and Bootstrap modules for the F26 Boltron release.

Problems with EPEL and Fedora mirroring: Many Root Cause Analysis

Posted by Stephen Smoogen on June 22, 2017 01:26 PM
There was a problem with EPEL and Fedora mirrors for the last 24 hours, where people getting updates would see various errors like:

Updateinfo file is not valid XML: <open></open>

The problem was caused by an issue in the compose which output the updateinfo file not as XML but as SQLite. The problem was fixed within a couple of hours on the Fedora side, but it has taken a lot longer to fix further downstream.

  • Some of the Fedora mirror containers were not updating correctly. We use a docker container on each proxy to keep the data fresh. 4 (?) of the 14 proxies said they were updating but seem not to have done so. These servers were our main IPv6 servers, so people getting updates from them were more affected than other users.
  • Some mirrors only update 1 or 2 times a day (or even slower). This means that your favourite mirror may keep the data for 12 to 48 hours. 
  • Some client plugins like to pin to the quickest mirror to try and keep downloads fast. While we may tell you that there are 20 mirrors up to date, the plugin will use the one it got content from fastest in the past. This means you can end up going to a 'broken' mirror for a lot longer.
  • Some yum/dnf systems seem to have other options set that keep the bad XML file until it 'ages out'. This means that while an updated XML file is available, some systems still complain because their box already has the bad one cached.
The fixes on the Fedora side are to put in better tests to try to ensure this does not happen again. The client-side fix is currently to run either one of the following:

  • yum clean all
  • yum clean metadata
Thank you all for your patience on this problem.

Here we have the first batch of storyboard sketches.

Posted by Angela Pagan (Outreachy) on June 22, 2017 01:22 PM

Here we have the first batch of storyboard sketches.

1: the bull checks on the rovers

2: the ice rover

3: the ice cracks

4: the crack grows

5: it falls

6: the bull gets a notification/ alert

7: the bull checks on the situation

8: the rover is sad

All systems go

Posted by Fedora Infrastructure Status on June 22, 2017 12:59 AM
New status good: Everything seems to be working. for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Bodhi 2.8.1 released

Posted by Bodhi on June 21, 2017 09:17 PM


  • Restore defaults for three settings back to the values they had in Bodhi 2.7.0 (#1633, #1640, and #1641).

Release contributors

The following contributors submitted patches for Bodhi 2.8.1:

  • Patrick Uiterwijk (the true 2.8.1 hero)
  • Randy Barlow

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 21, 2017 09:02 PM
New status scheduled: server reboots for services: Ask Fedora, Ipsilon, Badges, Blockerbugs, Package Updates Manager, Fedora Infrastructure Cloud, COPR Build System, Darkserver, Documentation website, Fedora elections, Account System, Fedora Messaging Bus, Fedora Calendar, FedoraHosted.org Services, Fedora pastebin service, FreeMedia, The Koji Buildsystem, Koschei Continuous Integration, Kerberos, Mailing Lists, Mirror List, Mirror Manager, Fedora Packages App, Fedora People, Package Database, Package maintainers git repositories, Tagger, Fedora websites, Fedora Wiki, Zodbot IRC bot

Facebook Events in KOrganizer

Posted by Daniel Vrátil on June 21, 2017 05:44 PM

Sounds like déjà vu? You are right! We used to have Facebook event sync in KOrganizer back in the KDE 4 days, thanks to Martin Klapetek. The Facebook Akonadi resource, unfortunately, did not survive the Facebook API changes and our switch to KF5/Qt5.

I’m using a Facebook event sync app on my Android phone, which is very convenient as I get to see all the events I am attending, interested in, or just invited to directly in my phone’s calendar, and I can schedule my other events with those in mind. Now I have finally grown tired of having to check my phone or Facebook whenever I want to schedule an event through KOrganizer, and I spent a few evenings writing a brand new Facebook Event resource.

Inspired by the Android app, the new resource creates several calendars: for events you are attending, events you are interested in, events you have declined, and invitations you have not responded to yet. You can configure whether you want to receive reminders for each of those.

Additionally, the resource fetches a list of all your friends’ birthdays (at least of those who have their birthday visible to their friends) and puts them into a Birthday calendar. You can also configure reminders for those separately.

The Facebook Sync resource will be available in the next KDE Applications feature release in August.

Eight years since first release and still no usable theme?

Posted by Martin Sourada on June 21, 2017 03:43 PM
Well, let me be frank. Ever since gtk-3.0 I've been skeptical of it, especially of the theming aspect. In gtk-2 we had (and still have) many themes ranging from trash to excellent; almost every kind of taste could be satisfied. Not so in gtk-3. The first issue is constant changes to the theming API, meaning that despite there being hundreds of themes, only a handful of them actually work right :( And among them, I have yet to find one that works on my fairly usual 15,6″ laptop screen with 1366×768 px resolution. Basically I have two issues.

  1. Almost every possible gtk-3 theme has huge (no, not just big, I really mean the word huge) paddings and there are no working compact variants. Yes, there are minwaita and adwaita-slim, but they kinda break Thunar's and Whisker menu's entry boxes, and they also aren't very slick (yes, I don't like proper adwaita either, it does not look very professional to me).

    So far it meant I had to expand the side pane in libreoffice and had less usable screen estate left for editing (the first two pics; notice how in the gtk-3 version the side pane is not only wider, but also needs a scrollbar!), but now that inkscape is getting ported as well, I'd need twice the screen resolution I have now for it to be usable (the second two pics; notice how in the gtk-3 version many more buttons and entries are not directly accessible, and the sidebar is sooo huge and cannot be made smaller) :(
  2. Scrollbars. They're small, they're ugly, they're hiding and hiding badly, meaning sometimes it's almost impossible to select either last column or last row in a list or both.
*Sigh* You at gnome/gtk – why do you try to fix something that ain't broken? Why isn't there a single professional-looking compact theme that I could use with inkscape on my laptop, much like I could with its gtk-2 releases? Are low-end machine users nothing to you? Why did gtk-2 themes actually work better at gtk-2.8+ (and probably in any version sans the very first or second) than gtk-3 themes do in any version you choose?

Do you still wonder why people are complaining?

Oh, and to end on a positive note: if someone points me to a really compact theme that actually works, does not break xfce-gtk3/libreoffice-gtk3/inkscape-gtk3, and is good looking (no, not adwaita, but something along the lines of numix, greybird, menta, zuki*, etc.) I'd be very happy to change my mind about gtk3 ;-)

Updates on my Python community work: 16-17

Posted by Kushal Das on June 21, 2017 10:56 AM

Thank you, everyone, for re-electing me to the Python Software Foundation board in 2017. The results of the vote came out on June 12th. This is my third term on the board; 2014 and 2016 were the last two terms. In 2015 I was out, as the random module decided to choose someone else :)

Things I worked on last year

I was planning to write this in April, but somehow my flow of writing blog posts was broken, and I never managed to do so. But better late than never.

As I had written in the wiki page for candidates, one of my major goals last year was building communities outside of the USA region. Given the warm welcome I have received in every upstream online community (and also at physical conferences), we should make sure that others are able to have the same experience.

As part of this work, I worked on three things:

  • Started PyCon Pune, goal of the conference being upstream first
  • Led the Python track at FOSSASIA in Singapore
  • Helping in the local PyLadies group (they are in the early stage)

You can read about our experience at PyCon Pune here. I think we were successful in spreading awareness about the bigger community which stands out there on the Internet throughout the world. All of the speakers pointed out how welcoming the community is, and how Python, the programming language, binds us all, be it scientific computing or small embedded devices. We also managed to have a proper dev sprint for all the attendees, where people made their first ever upstream contributions.

At FOSSASIA, we had many professionals attending the talks, and the kids were having their own workshops. There were various other Python talks in different tracks as well.

Our local PyLadies Pune group still has more beginner Python programmers than experienced community members. Though many members work with Python in their jobs, they had never worked with the community before. So, my primary work there was not only providing technical guidance but also trying to make sure that the group itself gets better visibility in the local companies. Anwesha writes about the group in much more detail than I do, so you should go to her blog to learn about the group.

I am also the co-chair of the grants working group. As part of this group, we review the grant proposals the PSF receives. As the group members are distributed, we generally manage to get good input about these proposals. The number of grant proposals from every region has increased over the years, and I am sure we will see more events happening in the future.

Along with Lorena Mesa, I also helped as the communication officer for the board. She took charge of the board blog posts, and I was working on the emails. I was finding it difficult to calculate the amounts, so I wrote a small Python 3 script which helps me get total numbers for every month’s update. This also reminds me that I managed to attend all the board meetings (they are generally between 9:30 PM and 6:30 AM for me in India) except the last one, just a week before PyCon. Even though I was in Portland during that time, I was confused about the actual time of the event, and jet lag did not help either.
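
The script itself isn’t published in this post, so here is only a sketch of the kind of helper it describes – the function name and data layout are my own illustration, not the actual script:

```python
from collections import defaultdict

def summarize_grants(grants):
    """Sum approved grant amounts per month, for a monthly board update."""
    totals = defaultdict(int)
    for month, amount in grants:
        totals[month] += amount
    return dict(totals)

# Hypothetical sample data: (month, amount in USD)
grants = [("2017-03", 1200), ("2017-03", 800), ("2017-04", 1500)]
print(summarize_grants(grants))  # → {'2017-03': 2000, '2017-04': 1500}
```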

I also helped our amazing GSoC org-admin team; Terri is putting in countless hours to make sure that the Python community gets a great experience in this program. I am hoping to find good candidates in Outreachy too. Last year, the PSF had funds for the same but did not manage to find a good candidate.

There were other conferences where I participated in different ways. Among them, the Science Hack Day India was very special; working with so many kids, learning Python together in the MicroPython environment, was a special moment. Waiting eagerly for this year’s event.

I will write about my goals in the 2017-18 term in a future blog post.

Reading multiple files: wildcard file source in syslog-ng

Posted by Peter Czanik on June 21, 2017 09:23 AM

Starting with version 3.10, syslog-ng can collect messages from multiple text files. You do not have to specify file names one by one, just use a wildcard to select which files to read. This is especially useful when you do not know the file names by the time syslog-ng is started. This is often the case with web servers with multiple virtual hosts. From this blog you can learn how to get started using the wildcard-file() source.

Before you begin

Before configuring syslog-ng, you should have a web server – or any other software writing multiple log files – already up and running. In my example I use Apache HTTPD access logs, but you should be able to adapt the basic example to any software easily just by changing file names.

Configuring syslog-ng

The following configuration reads any file ending with “log” in its name from the /var/log/apache2 directory and writes all messages in JSON format into a single file. You should append these configuration snippets to your syslog-ng.conf or put them in a separate .conf file under /etc/syslog-ng/conf.d/ if supported by your Linux distribution.

First, define a wildcard-file() source. There are two mandatory parameters:

  • base-dir() configures the directory where syslog-ng looks for log files to read. In this case, it is the /var/log/apache2 directory.
  • filename-pattern() accepts a simple glob pattern which defines files to search for. A “*” represents zero or more characters, while a “?” a single character. In this case, it is any file name that ends with “log”.
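
The same glob semantics can be sanity-checked with Python’s fnmatch module (the file names below are made up for illustration):

```python
from fnmatch import fnmatch

# "*" matches zero or more characters, "?" exactly one character.
pattern = "*log"

candidates = ["access_log", "error_log", "access.log.1", "notes.txt"]
matches = [name for name in candidates if fnmatch(name, pattern)]
print(matches)  # → ['access_log', 'error_log']
```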

The no-parse flag is necessary in this example, because by default syslog-ng parses messages using the syslog parser, but Apache HTTPD uses its own format for logging. For a complete list of wildcard-file() options check the documentation at https://www.balabit.com/sites/default/files/documents/syslog-ng-ose-3.10-guides/en/syslog-ng-ose-v3.10-guide-admin/html-single/index.html#configuring-sources-wildcard-file.

source s_apache2 {
    wildcard-file(
        base-dir("/var/log/apache2")
        filename-pattern("*log")
        flags(no-parse)
    );
};

Next, define a destination. Here, I’m using a JSON template, so that the different fields of the message are easy to distinguish. The “FILE_NAME” macro contains the file name with the full path name. The “MESSAGE” macro contains the whole message as it is read from the log files.

destination d_apache2 {
    file("/var/log/web"
        template("$(format_json --key FILE_NAME --key MESSAGE)\n\n")
    );
};

Finally, define a log statement that connects the source and destination together:

log { source(s_apache2); destination(d_apache2); };

Save the configuration and reload it using “syslog-ng-ctl reload”.

Verifying your configuration

If you have configured file names correctly, you should see similar entries in /var/log/web (or the destination you have configured):

{"MESSAGE":" - - [14/Jun/2017:17:17:16 +0200] \"GET / HTTP/1.0\" 200 4 \"-\" \"w3m/0.5.3+git20170102\"","FILE_NAME":"/var/log/apache2/test_log"}

In the MESSAGE field you see a message in the Apache combined log format. If you use VirtualHost names in your file names, you can use this information to identify which log message belongs to which website.
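
As a sketch of that idea, a downstream consumer could parse the JSON line and recover the site name from the FILE_NAME field – assuming file names like test_log, where everything before _log identifies the VirtualHost:

```python
import json
import os.path

line = ('{"MESSAGE":" - - [14/Jun/2017:17:17:16 +0200] '
        '\\"GET / HTTP/1.0\\" 200 4",'
        '"FILE_NAME":"/var/log/apache2/test_log"}')

entry = json.loads(line)
# Strip the directory and the "_log" suffix to get the site name.
site = os.path.basename(entry["FILE_NAME"]).rsplit("_log", 1)[0]
print(site)  # → test
```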

What is next

Logging as a service (LaaS) providers often recommend installing their agents next to syslog(-ng) just to cover this situation. Installing additional software is no longer necessary to forward messages from a directory of log files. Also, using LaaS providers from syslog-ng was never easier, thanks to the syslog-ng configuration library (SCL), which hides away the complexity of setting up these destinations.

Try it yourself or check my blog next week where I will add a parser and a LaaS provider to the configuration.

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Reading multiple files: wildcard file source in syslog-ng appeared first on Balabit Blog.

Controlling Windows via Ansible

Posted by Fedora Magazine on June 21, 2017 08:00 AM

For many Linux systems engineers, Ansible has become a way of life. They use Ansible to orchestrate complex deployment processes, to define multiple systems with a quick and simple configuration management tool, or somewhere in between.

However, Microsoft Windows users have generally required a different set of tools to manage systems. They also often needed a different mindset on how to handle them.

Recently Ansible has improved this situation quite a bit. The Ansible 2.3 release included a bunch of new modules for this purpose. Ansible 2.3.1 is already available for Fedora 26.

At AnsibleFest London 2017, Matt Davis, Senior Principal Software Engineer at Ansible, will lead a session covering this topic in some detail. In this article we look at how to prepare Windows systems to enable this functionality along with a few things we can do with it.

Preparing the target systems

There are a couple of prerequisites required to prepare a Windows system so Ansible can connect. The connection type used for this is “winrm”, the Windows Remote Management protocol.

When using this connection type, Ansible executes PowerShell on the target system. This requires a minimum of PowerShell 3.0, although it’s recommended to install the most recent version of the Windows Management Framework, which at the time of writing is 5.1 and includes PowerShell 5.1.

With that in place, the WinRM service needs to be configured on the Windows system. The easiest way to do this is with the ConfigureRemotingForAnsible.ps1 PowerShell script that ships with Ansible.

By default, the ExecutionPolicy allows commands, but not scripts, to be executed. However, when running the script via the powershell executable, this can be bypassed.

Run powershell as an administrative user and then:

powershell.exe  -ExecutionPolicy Bypass -File ConfigureRemotingForAnsible.ps1 -CertValidityDays 3650 -Verbose

After this the WinRM service will be listening and any user with administrative privileges will be able to authenticate and connect.

Although it’s possible to use CredSSP or Kerberos for delegated (single sign-on) authentication, the simplest method just makes use of a username and password via NTLM authentication.

To configure the winrm connector itself there are a few different variables, but the bare minimum to make this work for any Windows system is:

ansible_user: 'localAdminUser'
ansible_password: 'P455w0rd'
ansible_connection: 'winrm'
ansible_winrm_server_cert_validation: 'ignore'

The last line is important with the default self-signed certificates that Windows uses for WinRM, but can be removed if using verified certificates from a central CA for the systems.

So with that in place how flexible is it? How much can really be remotely controlled and configured?

Well, step one on the controlling computer is to install Ansible and the winrm libraries:

dnf -y install ansible python2-winrm

With that ready, a fair number of the core modules are available, but the majority of tasks use Windows-specific modules.

Remote windows updates

Using Ansible to define your Windows systems’ updates allows them to be remotely checked and deployed, whether they come directly from Microsoft or from an internal Windows Server Update Service:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_updates

mywindowssystem | SUCCESS => {
    "changed": true,
    "failed_update_count": 0,
    "found_update_count": 3,
    "installed_update_count": 3,
    "reboot_required": true,
    "updates": {
        "488ad51b-afca-46b9-b0de-bdbb4f56672f": {
            "id": "488ad51b-afca-46b9-b0de-bdbb4f56672f",
            "installed": true,
            "kb": [
                "4022726"
            ],
            "title": "2017-06 Security Monthly Quality Rollup for Windows 8.1 for x64-based Systems (KB4022726)"
        },
        "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08": {
            "id": "94e2e9ab-e2f7-4f8c-9ade-602a0511cc08",
            "installed": true,
            "kb": [
                "4022730"
            ],
            "title": "2017-06 Security Update for Adobe Flash Player for Windows 8.1 for x64-based Systems (KB4022730)"
        },
        "ade56166-6d55-45a5-9e31-0fac924e4bbe": {
            "id": "ade56166-6d55-45a5-9e31-0fac924e4bbe",
            "installed": true,
            "kb": [
                "890830"
            ],
            "title": "Windows Malicious Software Removal Tool for Windows 8, 8.1, 10 and Windows Server 2012, 2012 R2, 2016 x64 Edition - June 2017 (KB890830)"
        }
    }
}

Rebooting automatically is also possible with a small playbook:

- hosts: windows
  tasks:
    - name: apply critical and security windows updates
      win_updates:
        category_names:
          - SecurityUpdates
          - CriticalUpdates
      register: wuout
    - name: reboot if required
      win_reboot:
      when: wuout.reboot_required

Package management

There are two ways to handle package installs on Windows using Ansible.

The first is to use win_package, which can install any MSI or run an executable installer from a network share or URI. This is useful for more locked-down internal networks with no internet connectivity, or for applications not on Chocolatey. In order to avoid re-running an installer and keep any plays safe to run, it’s important to look up the product ID from the registry so that win_package can detect if it’s already installed.

The second is to use the briefly referenced Chocolatey. There is no setup required for this on the target system, as the win_chocolatey module will automatically install the Chocolatey package manager if it’s not already present. Installing the Java 8 Runtime Environment via Chocolatey is as simple as:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_chocolatey -a "name=jre8"
mywindowssystem | SUCCESS => {
    "changed": true,
    "rc": 0
}

And the rest…

The list is growing as Ansible development continues, so always check the documentation for the up-to-date set of supported Windows modules. Of course it’s always possible to just execute raw PowerShell as well:

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_shell -a "Get-Process"
mywindowssystem | SUCCESS | rc=0 >>

Handles  NPM(K)    PM(K)      WS(K)     CPU(s)     Id  SI ProcessName          
-------  ------    -----      -----     ------     --  -- -----------          
     28       4     2136       2740       0.00   2452   0 cmd                  
     40       5     1024       3032       0.00   2172   0 conhost              
    522      13     2264       5204       0.77    356   0 csrss                
     83       8     1724       3788       0.20    392   1 csrss                
    106       8     1936       5928       0.02   2516   0 dllhost              
     84       9     1412        528       0.02   1804   0 GoogleCrashHandler   
     77       7     1448        324       0.03   1968   0 GoogleCrashHandler64 
      0       0        0         24                 0   0 Idle                 

With the collection of modules already available and the help of utilities like Chocolatey, it’s already possible to manage the vast majority of a Windows estate with Ansible. This allows many of the same techniques and best practices already embedded in the Linux culture to make the transition over the fence, even for more complex actions such as joining or creating an Active Directory domain.

ansible -i mywindowssystem, -c winrm -e ansible_winrm_server_cert_validation=ignore -u administrator -k -m win_say -a "msg='I love my ansible, and it loves me'"

All systems go

Posted by Fedora Infrastructure Status on June 21, 2017 01:14 AM
New status good: Everything seems to be working. for services: Package Updates Manager, The Koji Buildsystem, Koschei Continuous Integration, Package maintainers git repositories

Creating screencasts on Linux

Posted by Maxim Burgerhout on June 21, 2017 12:00 AM


As this is a new blog on Fedora Planet, let me start off by introducing myself briefly. My name is Maxim Burgerhout, and I have been a Fedora contributor for quite some time. Truth be told though, I haven’t been able to spend much time maintaining my packages over the past couple of years. As a different way of giving back, I want to start sharing some experiences with open source software in a specific niche: screencast creation, and video editing.


A couple of months back, I started recording screencasts about Red Hat products [1]. For now, it’s mostly about management products, like Satellite, Ansible Tower and things like that, but I’ll potentially also cover other products and projects as they pop up in my daily work. As said above, I intend to start sharing some experiences [2] about creating screencasts on Fedora.

Assuming more people are trying to figure out the same things I am, I’m starting off with a short write-up of my experiences so far, trying to work with open source software to create screencasts. Spoiler: it’s not as easy as I hoped it would be.

The below article is based on my experience using Fedora 25.

Recording video

Ever since I started doing this, I’ve been using Screencastify as my screen recorder of choice. I have tried using the EasyScreencast Gnome Shell extension in the beginning, but it had (temporarily) died, so that didn’t seem viable. It seems to have revived though, so I’ll probably try it again when my Screencastify subscription expires near the end of the calendar year.

I also tried the CTRL-ALT-SHIFT-R option to start a screencast recording in Gnome, but that records both my monitors, which makes editing the whole thing into a Youtube video quite a pain [3].

Finally, gtk-recordmydesktop gives me all kinds of strange artifacts in my recording when I move my mouse. It also seemed to crash quite frequently, and seems to be dead upstream.

All options available from the community (the built-in one, gtk-recordmydesktop and EasyScreencast) were disqualified for various reasons: lack of maintenance, quirks, or instability.

However, apart from the occasional crash (which happens very seldom), Screencastify works beautifully. I can record a window, a Chrome tab, or my whole desktop. Recording my voice over the videos also works pretty well, using a USB microphone I bought for the purpose of creating screencasts.

The downside of Screencastify is that it’s a proprietary piece of software. For now, it’s the clear winner, but in the future I’ll give EasyScreencast another chance to give it a run for its money.

Recording audio on its own

Recording audio on Fedora can be done through various options, of which the two most obvious are Sound Recorder and Audacity.

Sound Recorder is the default sound recorder app in Gnome 3. It’s OK for very simple usage, but the moment you want to start editing audio or improving audio quality using filters, Sound Recorder doesn’t offer anything.

Audacity, on the contrary, is very complete. It’s even a bit intimidating in the amount of options it offers, but in terms of recording quality, editing the recordings and improving their quality, Audacity is the de-facto standard in open source, on Linux as well as on various other platforms. Simply said, it’s brilliant.

Audacity is the clear winner here, without any real competition.

Editing video

So this is where the real pain starts. Safe to say, video editing on Linux was a bit of a disappointment for me.

I have tried all of the major open source video editing projects that are available natively on Fedora: Kdenlive, Pitivi, Avidemux and OpenShot, as well as the commercially available Lightworks.

To start with Avidemux: it seems to lack the full broad spectrum of features one would need to edit and merge large amounts of clips into a new video, and insert transitions and background audio. I assume it would work nicely to just crop two videos and slam them together, but it doesn’t feel right for more complex things. Granted, I haven’t spent a huge amount of time with this program, so let me know if you think I’m dismissing Avidemux too easily. It just wasn’t enough for me.

Next up are OpenShot and Kdenlive. Both are great programs, both with extensive feature sets that would suffice for me, and both with the same continuous problem that disqualified them: they crash. Over and over. I’ll be filing bugs for both, but no matter how that turns out, right NOW, they are not very useful for me. Both seem to have somewhat lively upstreams though, so who knows what the future might bring.

Sadly, I’ve spent too much time trying to get OpenShot and Kdenlive to work, and that kept me from thoroughly evaluating my next contender: Pitivi.

Pitivi used to fall into the same category as OpenShot and Kdenlive (crashing), but I haven’t experienced any crashes recently. It comes with a nice set of effects, just like OpenShot and Kdenlive, and is fairly easy to use. It exports to all of the right formats, but sadly, rendering of video happens in the foreground. This blocks you from using the program during that process. Not a big deal for a video of a couple of minutes, but annoying for anything longer than that.

The final program I just had to take a look at is Lightworks. It’s not open source, but it really is bloody good. It’s by far the most complete of the lot, but it comes at a hefty price. Also, some of the options that are really interesting for making screencasts, like a built-in voice-over recorder, aren’t available on Linux :(

I would say for video editing, Pitivi and Lightworks are tied, with Lightworks being the more complete option, and Pitivi being the open source one.


Audio editing we have under control in open source. Audacity is great. It’s just really, really great. (There, enough Trumpianisms for today.)

Screencast recording has come a long way, but hasn’t quite reached the level of functionality I needed a couple of months back. It might have grown to that level in the meantime, though. I’ll take the time to re-evaluate EasyScreencast and post an update sometime in the Fall.

Video editing is still a bit of a problem. The commercial option is good, but pretty expensive and obviously not open source. Two of the three main contenders (OpenShot and Kdenlive) have serious stability issues, up to the point that I just gave up on them. Bugs will be filed, but that’s not helping me today. Pitivi is a little less complete than both OpenShot and Kdenlive, I think, but does show promise (and doesn’t crash that often).

As with EasyScreencast, I’ll give Pitivi a second try and hopefully find an open source solution for my video editing problem.

TL;DR if you are looking for a set of tools to record and edit screencasts on Fedora, you probably want to check out EasyScreencast, and use Screencastify as a fall-back option. For audio, there’s no way around Audacity. If you can shell out some dough and don’t mind a bit of proprietary software, go for Lightworks, otherwise Pitivi will help you overcome most video editing problems.


1. I work as a solution architect for Red Hat in the Benelux region
2. How-to’s, why this and not that, and who knows: screencasts ;)
3. If you know how to limit this to a single monitor, or even better, a single window, I’m all ears!

Giving Qutebrowser a go - a fantastic keyboard-focused browser

Posted by Ankur Sinha "FranciscoD" on June 20, 2017 11:09 PM
A screenshot showing hints in Qutebrowser on the Qutebrowser website

Years ago, I was introduced to touch typing. I knew immediately that it was a skill I had to learn. I remember spending hours playing with gtypist trying to improve my typing efficiency. I'm not too bad nowadays: I can mostly type without looking at the keyboard at all, and with few errors.

I've always loved using the command line. In fact, I maintain that new programmers should start at the command line and only move to IDEs once they've learned exactly what's being done under the hood. I use the terminal as much as conveniently possible - music via ncmpcpp, IRC on irssi (there are Gitter and Slack gateways to IRC too), taskwarrior to organise my TODOs, all my writing in VIM (programming and otherwise), for example. Byobu makes it really easy.

The one effect sticking to the command line so much has had on me is that I've developed a slight aversion to the mouse/touchpad. I now feel mildly annoyed if I must move my fingers off the home-row to do something. I must use the touchpad to check my mail/calendar on Evolution, for example, but this doesn't annoy me too much because I usually check these when I've taken a break from programming (or my code is compiling). It's really on Firefox that the constant switching between keyboard and mouse used to be a real downer.

Being a VIM user, I did the expected - went looking to see if there was a way to use VIM style key-mappings in Firefox. There are multiple add-ons that permit this with different feature sets - vimperator, pentadactyl, vimium, and VimFx are a few examples. The different features these provide cater to different people's requirements. I went for pentadactyl. Not only does it permit VIM style key mappings and navigation, it also provides a vim style command line that is incredibly handy. I've used it for years now. The issue that has troubled pentadactyl for some time now is constant breakage - it tends to break each time the Firefox add-on API is updated. Recently, I read that some major changes in the API will make pentadactyl pretty much unusable in the near future. This made me go looking for a more stable alternative. I tried one or two others - vimium for example - but somehow, I find vimium too simple.

So, I dug further and ran into Vimb and Qutebrowser. They're both "vim like browsers" i.e., they're designed for more advanced users and they provide VIM like key-mappings and modes. I gave vimb a short try, but Qutebrowser really impressed me a lot more.


The best thing about Qutebrowser is that it's actively maintained. I even hopped on to the IRC channel earlier today to get some help. The latest version is in Fedora already, so you can simply go sudo dnf install qutebrowser to give it a whirl. I wanted to test out the latest codebase, so I quickly set up a copr repository that you can use too. I'm tinkering with FlatPak to try and build one too, so that it becomes even easier to install, but I'm still figuring out how FlatPaks are built.

A screenshot showing hints in Qutebrowser

The screenshot shows "hinting" which is how one opens links. You press "f" and the various links in the page get labelled. Simply type the label of the link you want to visit. There's also "advanced hinting" which lets you do things like open links in a background tab, or in a new tab, or save (yank) a link URL.
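
The idea behind those labels can be sketched in a few lines of Python - this is an illustration of the general technique, not qutebrowser's actual implementation:

```python
from itertools import product

def hint_labels(n, chars="asdf"):
    """Generate n distinct hint labels from a small character set,
    using the shortest label length that gives enough combinations."""
    if n <= 0:
        return []
    length = 1
    while len(chars) ** length < n:
        length += 1
    combos = ("".join(c) for c in product(chars, repeat=length))
    return [label for label, _ in zip(combos, range(n))]

print(hint_labels(6))  # → ['aa', 'as', 'ad', 'af', 'sa', 'ss']
```

Because the labels share a prefix-free length, typing a full label uniquely selects one link without needing Enter.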

A screenshot showing the command mode in Qutebrowser

This one shows the command mode - everything can be done here, including configuration of the browser or browsing related tasks.

A few tips

I did a few things to get started. First, I wanted to use the new "webengine" backend. This requires the installation of two packages: sudo dnf install python3-pyopengl python3-qt5-webengine, and then creating a new file in ~/.local/share/applications/qutebrowser.desktop with the following contents:

[Desktop Entry]
GenericName=Web Browser
Exec=qutebrowser --backend webengine %u

This new file simply ensures that picking Qutebrowser from the activities menu will run the new backend. Without this, one would have to launch it from the terminal each time.

Next, I configured it a bit to my liking - still very limited, but it's a start. The configuration file for Qutebrowser is at ~/.config/qutebrowser/qutebrowser.conf. There's so much one can modify here. I've only set up a few search engines and updated the default to Google. To do this, one needs to modify the [searchengines] section in the file:

DEFAULT = https://google.com/search?hl=en-GB&q={}
duckduckgo = https://duckduckgo.com/?q={}
github = https://github.com/search?q={}
google-scholar = https://scholar.google.co.uk/scholar?hl=en&q={}
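
Conceptually, the {} placeholder works like Python's str.format with the search terms URL-encoded - a sketch of the substitution, not qutebrowser's actual code:

```python
from urllib.parse import quote_plus

template = "https://duckduckgo.com/?q={}"
url = template.format(quote_plus("fedora qutebrowser"))
print(url)  # → https://duckduckgo.com/?q=fedora+qutebrowser
```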

I also enable save-session - just set it to true. There are a few other tweaks, such as updating the startpage to http://start.fedoraproject.org. There's even a built in ad-blocker that one can configure.

To get Flash working, one needs to also install the ppapi bits. Assuming one already has the Flash plugin repository installed, sudo dnf install flash-player-ppapi does this. I haven't gotten Netflix to work yet - it requires some Silverlight thingy. I can always run Chrome or FF for that one rare purpose anyway.

There are, obviously, a few limitations in the current Qutebrowser version. The most noticeable one is probably the lack of a sync service similar to the ones Firefox and Chrome provide. Google does tell me something about using syncthing, but I haven't gotten down to this yet. While it would be nice to have, it isn't quite that necessary. There isn't a password manager either. There are plans for a plug-in system to implement such features, though. (userscripts seem to provide some additional functionality too.)

Anyway, it's a great, quick, and lean browser if you're a VIM addict like me, so give it a go? If you have some cycles and are intersted in some hacking, get in touch with the devs over Github too. If not, please do at least file bugs if you see them.

Here's a quickstart to quickly get up and running with. Oh, and yeah, the mouse/touchpad works in the browser too!

There are scheduled downtimes in progress

Posted by Fedora Infrastructure Status on June 20, 2017 09:00 PM
New status scheduled: planned buildsystem outage for services: The Koji Buildsystem, Koschei Continuous Integration, Package maintainers git repositories, Package Updates Manager


Posted by Bodhi on June 20, 2017 08:08 PM

Special instructions

  • There is a new setting, ci.required that defaults to False. If you wish to use CI, you must
    add a cron task to call the new bodhi-babysit-ci CLI periodically.
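For example, a crontab entry along these lines would do it (the interval, user, and path below are assumptions, not part of the release notes):

```
# hypothetical /etc/cron.d/bodhi-babysit-ci entry: run every 5 minutes as the bodhi user
*/5 * * * * bodhi /usr/bin/bodhi-babysit-ci
```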


The /search/packages API call has been deprecated.

New Dependencies

  • Bodhi now uses Bleach to sanitize markdown input from the user.
    python-bleach 1.x is a new dependency in this release of Bodhi.


  • The API, fedmsg messages, bindings, and CLI now support non-RPM content (#1325, #1326, #1327, #1328). Bodhi now knows about Fedora's new module format, and is able to handle everything they need except publishing (which will appear in a later release). This release is also the first Bodhi release that is able to handle multiple content types.
  • Improved OpenQA support in the web UI (#1471).
  • The type icons are now aligned in the web UI (4b6b759 and d094032).
  • There is now a man page for bodhi-approve-testing (cf8d897).
  • Bodhi can now automatically detect whether to use DDL table locks if BDR is present during migrations (059b5ab).
  • Locked updates now grey out the edit buttons with a tooltip to make the lock more obvious to the user (#1492).
  • Users can now do multi-line literal code blocks in comments (#1509).
  • The web UI now has more descriptive placeholder text (1a7122c).
  • All icons now have consistent width in the web UI (6dfe6ff).
  • The front page has a new layout (6afb6b0).
  • Bodhi is now able to use Pagure and PDC as sources for ACL and package information (5955186).
  • Bodhi's configuration loader now validates all values and centralizes defaults. Thus, it is now possible to comment out most of Bodhi's settings file and achieve sane defaults. Some settings are still required; see the default production.ini file for documentation of all settings and their defaults. A few unused settings were removed (#1488, #1489, and 263b7b7).
  • The web UI now displays the content type of the update (#1329).
  • Bodhi now has a new ci.required setting that defaults to False. If enabled, updates will gate based on Continuous Integration test results and will not proceed to updates-testing unless the tests pass (0fcb73f).
  • Update builds are now sorted by NVR (#1441).
  • The backend code is reworked to allow gating on resultsdb data and requirement validation performance is improved (#1550).
  • Bodhi is now able to map distgit commits to Builds, which helps map CI results to Builds. There is
    a new bodhi-babysit-ci CLI that must be run periodically in cron if ci.required is
    True (ae01e5d).


  • A half-hidden button is now fully visible on mobile devices (#1467).
  • The signing status is again visible on the update page (#1469).
  • The edit update form will not be presented to users who are not auth'd (#1521).
  • The CLI --autokarma flag now works correctly (#1378).
  • E-mail subjects are now shortened like the web UI titles (#882).
  • The override editing form is no longer displayed unless the user is logged in (#1541).

Development improvements

  • Several more modules now pass pydocstyle PEP-257 tests.
  • The development environment has a new bshell alias that sets up a usable Python shell, initialized for Bodhi.
  • Lots of warnings from the unit tests have been fixed.
  • The dev environment cds to the source folder upon vagrant ssh.
  • There is now a bfedmsg development alias to see fedmsgs.
  • A new bresetdb development alias will reset the database to the same state as when vagrant up completed.
  • Some unused code was removed (afe5bd8).
  • Test coverage was raised significantly, from 85% to 88%.
  • The development environment now has httpie by default.
  • The default Vagrant memory was raised (#1588).
  • Bodhi now has a Jenkins Job Builder template for use with CentOS CI.
  • A new bdiff-cover development alias helps compare test coverage in the current branch to the develop branch, and will alert the developer if there are any lines missing coverage.

Release contributors

The following developers contributed to Bodhi 2.8.0:

  • Ryan Lerch
  • Ralph Bean
  • Pierre-Yves Chibon
  • Matt Prahl
  • Martin Curlej
  • Adam Williamson
  • Kamil Páral
  • Clement Verna
  • Jeremy Cline
  • Matthew Miller
  • Randy Barlow

What capabilities do I really need in my container?

Posted by Dan Walsh on June 20, 2017 07:40 PM

I have written previous blogs discussing using linux capabilities in containers.

Recently I gave a talk in New York, and someone in the audience asked me how to figure out what capabilities their containers require.

This person was dealing with a company that was shipping their software as a container image, but they had instructed the buyer that the container would have to run "fully privileged". He wanted to know what privileges the container actually needed. I told him about a project we worked on a few years ago, which we called Friendly EPERM.

Permission Denied!  WHY?

A few years ago the SELinux team realized that more and more applications were getting EPERM returns when a syscall requested some access. Most operators understood EPERM (Permission Denied) in a log file to mean something was wrong with the ownership of the process or of the content it was trying to access, or that the permission flags on the object were wrong. This type of access control is called DAC (Discretionary Access Control), but under certain conditions SELinux also caused the kernel to return EPERM. This confused operators and is one of the reasons operators did not like SELinux. They would ask: why didn't httpd report that permission was denied because of SELinux? We realized that there was a growing list of other mechanisms besides regular DAC and SELinux which could cause EPERM: things like seccomp, dropped capabilities, other LSMs... The problem was that the processes getting the EPERM had no way to know why they got it. The only one that knew was the kernel, and in a lot of cases the kernel was not even logging the fact that it denied access. At least SELinux denials usually show up in the audit log (AVCs). The goal of Friendly EPERM was to allow processes to figure out why they got EPERM and make it easier for admins to diagnose.

Here is the request that talks about the proposal.


The basic idea was to have something in the /proc file system which would identify why the previous EPERM happened. You are running a process, say httpd, and it gets permission denied. Now somehow the process can get information on why it got permission denied. One suggestion was that we enhance libc/the kernel to provide this information. The logical place for the kernel to reveal it would be in /proc/self. But the act of httpd attempting to read the information out of /proc/self could itself give you a permission denied. Basically we did not succeed because there would be a race condition: the information could be wrong by the time the process read it.

Here is a link to the discussion https://groups.google.com/forum/#!msg/fa.linux.kernel/WQyHPUdvodE/ZGTnxBQw4ioJ

Bottom line, no one has figured a way to get this information out of the kernel.


Later I received an email discussing the Friendly EPERM project and asking if there was a way to at least figure out what capabilities the application needed.

I wondered if the audit subsystem would give us anything here, so I contacted the audit guys at Red Hat, Steve Grubb and Paul Moore, and they informed me that no audit messages are generated when DAC capabilities are blocked.

An interesting discussion occurred in the email chain:

DWALSH: Well I would argue most developers have no idea what capabilities their application requires.

SGRUBB: I don't think people are that naive. If you are writing a program that runs as root and then you get the idea to run it as a normal user, you will immediately see your program crash. You would immediately look at where it's having problems. It's pretty normal to look up the errno on the syscall man page to see what it says about it. They almost always list the necessary capabilities for that syscall. If you are an admin restricting software you didn't write, then it's kind of a puzzle. But the reason there's no infrastructure is that historically it's never been a problem, because the software developer had to choose to use capabilities, and it's incumbent on the developer to know what they are doing. With new management tools offering to do this for you, I guess it's new territory.

But here we had a vendor telling a customer that it needed full root, ALL capabilities, to run its application.

DWALSH:  This is exactly what containers are doing.  Which is why the emailer is asking.  A vendor comes to him telling him it needs all Capabilities.  The emailer does not believe them and wants to diagnose what they actually need.

DWALSH: With containers and SELinux there is a great big "TURN OFF SECURITY" button, which is too easy for software packagers to press, and then they don't have to figure out exactly what their app needs.

Paul Moore, Red Hat SELinux kernel engineer, suggested that while audit cannot record the DAC failures, SELinux also enforces the capability checks. If we could put the process into an SELinux type that has no capabilities by default, then run the process with full capabilities and SELinux in permissive mode, we could gather the SELinux AVC messages indicating which capabilities the application required to run.

“ (Ab)using security to learn through denial messages. What could possibly go wrong?! :)

After investigating further, it turns out that the basic type used to run containers, `container_t`, can be set up to have no capabilities by turning off an SELinux boolean.

Turn off the capabilities via the boolean and put the machine into permissive mode:

setsebool virt_sandbox_use_all_caps=0

setenforce 0

Now execute the application via docker with all capabilities allowed.

docker run --cap-add all IMAGE ...

Run and test the application. This should cause SELinux to generate AVC messages about capabilities used.

grep capability /var/log/audit/audit.log

type=AVC msg=audit(1495655327.756:44343): avc:  denied  { syslog } for  pid=5246 comm="rsyslogd" capability=34  scontext=system_u:system_r:container_t:s0:c795,c887 tcontext=system_u:system_r:container_t:s0:c795,c887 tclass=capability2   


Now you know your list.
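The `capability=34` field in such AVC records is the numeric constant from the kernel's linux/capability.h (34 is CAP_SYSLOG). Here's a small hedged sketch of decoding those numbers from audit lines; the helper name and the (deliberately partial) lookup table are mine, not from any existing tool:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Partial map of capability numbers to names, per linux/capability.h.
var capNames = map[int]string{
	0:  "CAP_CHOWN",
	1:  "CAP_DAC_OVERRIDE",
	21: "CAP_SYS_ADMIN",
	34: "CAP_SYSLOG",
}

var capRe = regexp.MustCompile(`capability=(\d+)`)

// capabilityFromAVC extracts the capability number from an AVC record
// and maps it to its symbolic name.
func capabilityFromAVC(line string) (string, bool) {
	m := capRe.FindStringSubmatch(line)
	if m == nil {
		return "", false
	}
	n, _ := strconv.Atoi(m[1])
	name, ok := capNames[n]
	return name, ok
}

func main() {
	avc := `type=AVC msg=audit(1495655327.756:44343): avc:  denied  { syslog } for  pid=5246 comm="rsyslogd" capability=34 tclass=capability2`
	if name, ok := capabilityFromAVC(avc); ok {
		fmt.Println(name) // CAP_SYSLOG
	}
}
```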

Turns out the application the emailer was trying to containerize was a tool which was allowed to manipulate the syslog system, and the only capability it needed was CAP_SYSLOG.  The emailer should be able to run the container by simply adding the CAP_SYSLOG capability and everything else about the container should be locked down.

docker run --cap-add syslog IMAGE ...


After writing this blog, I was pointed to

Find what capabilities an application requires to successfully run in a container

Which is similar in that it finds out the capabilities needed for a container/process by using SystemTap.

Constructor Dependency Injection in Go

Posted by Adam Young on June 20, 2017 05:17 PM

Dependency Injection

Organization is essential to scale. Compare the two images of cabling a data center:

A well organized wiring approach to the data center.

One of the less egregious cabling systems.

Obviously, the top image appears much more organized. I don’t think it is accidental that the better organized approach is visible in the larger data center. In order to scale, you need organization. If you have a small number of servers, a haphazard cabling scheme is less likely to impact your ability to trace and fix network problems. Such an approach would not work for a million-node data center.

The same is true of code. Without many of the visual cues we use to navigate the real world, tracking code can be very difficult. Thus, code can degenerate into chaos as fast as or faster than physical devices. Indeed, the long-standing name for poorly organized code is "Spaghetti Code", an analogy to the same kind of tangled mess we can visualize with the network cables.

Dependency injection provides a tool to help minimize the chaos. Instead of wires run across the data center directly from one machine to another, the well-organized scheme routes them to intermediate switches and routers in a standardized way. Just so, dependency injection provides a mediator between components, removing the need for one component to know the approach used to create the specific instance.

The guiding rule is that dependency injection separates object use from object construction.

Constructor Dependency Injection

Of the three forms of Dependency Injection that Martin Fowler enumerates, only the constructor form enforces that an object always meets its invariants.  The idea is that, once the constructor returns, the object should be valid.  Whenever I start working with a new language, or return to an old one, I try to figure out how best to do dependency injection using constructors.

I have a second design criterion, which is that I should continue to program exclusively in that language.  Using a marshaling language like XML or YAML to describe how objects interact breaks a lot of the development flow, especially when working with a debugger.  Thus, I want to be able to describe my object relationships inside the programming language.

With these two goals in mind, I started looking in to dependency injection in Go.


There is a common underlying form to the way I approach dependency injection.  The distinct stages are:

  1. For a given type, use the language's type management system to register a factory method that describes how to construct it.
  2. For a given type, use the language's type management system to request an instance that implements that type, via a lazy-load proxy that calls the factory method.
  3. When a factory method requires additional objects to fulfill dependencies, it uses the same lazy-load proxies to fulfill those dependencies.

This approach works well with a language that provides the ability to program using the type system.  C++ supports this via template meta-programming.  A comparable version can be done in Java using generics.

Go provides minimal reflection capabilities.  The above design goals push them to their limits, and perhaps a bit beyond.

Golang Reflection

The API to request the type of an object in Go is


In order to avoid creating an object just to get its type information, Go allows the following workaround:


This will return an object of reflect.Type.


Proof of Concept

Here is a very minimal Dependency Injection framework. A factory is defined with a function like this:

func createRestClient(cc dependencies.ComponentCache, _ string) (interface{}, error) {
	return kubecli.GetRESTClient() //returns two values: *rest.RESTClient, error
}
And registered with the ComponentCache via a call that references the type:

	CC = dependencies.NewComponentCache()
	CC.Register(reflect.TypeOf((*rest.RESTClient)(nil)), createRestClient)

Code that needs to Get a rest client out of the component cache uses the same form of reflection as the registration function:

func GetRestClient(cc dependencies.ComponentCache) *rest.RESTClient {
	t, ok := cc.Fetch(reflect.TypeOf((*rest.RESTClient)(nil))).(*rest.RESTClient)
	if !ok {
		panic("fetched component is not a *rest.RESTClient")
	}
	return t
}
Here is a rough sketch of how the classes work together:


The rest of the code for implementing this framework is included below.


package dependencies

import "reflect"

type ComponentFactory func(CC ComponentCache, which string) (interface{}, error)

type ComponentKey struct {
	Type  reflect.Type
	which string
}

type ComponentCache struct {
	components map[ComponentKey]interface{}
	factories  map[ComponentKey]ComponentFactory
}

func NewComponentCache() ComponentCache {
	cc := ComponentCache{
		components: make(map[ComponentKey]interface{}),
		factories:  make(map[ComponentKey]ComponentFactory),
	}
	return cc
}

func (cc ComponentCache) Register(Type reflect.Type, factory ComponentFactory) {
	key := ComponentKey{Type, ""}
	cc.factories[key] = factory
}

func (cc ComponentCache) RegisterFactory(Type reflect.Type, which string, factory ComponentFactory) {
	key := ComponentKey{Type, which}
	cc.factories[key] = factory
}

func (cc ComponentCache) FetchComponent(Type reflect.Type, which string) interface{} {
	key := ComponentKey{Type, which}
	var err error
	if component, ok := cc.components[key]; ok {
		return component
	} else if factory, ok := cc.factories[key]; ok {
		//IDEALLY locked on a per key basis.
		component, err = factory(cc, which)
		if err != nil {
			panic(err) // construction failed; richer error handling elided
		}
		cc.components[key] = component
		return component
	} else {
		panic("no factory registered for " + Type.String())
	}
}

func (cc ComponentCache) Fetch(Type reflect.Type) interface{} {
	return cc.FetchComponent(Type, "")
}

func (cc ComponentCache) Clear() {
	//Note.  I originally tried to create a new map using
	// cc.components = make(map[ComponentKey]interface{})
	// but it left the old values in place.  Thus, the brute force method below.
	for k := range cc.components {
		delete(cc.components, k)
	}
}

This is a bit simplistic, as it does not support many of the use cases that we want for Dependency Injection, but implementing those does not require further investigation into the language.


Unlike structures, Go does not expose the type information of interfaces. Thus, the technique of

reflect.TypeOf((*SomeInterface)(nil))

will return nil, not the type of the interface. While I think this is a bug in the implementation of the language, it is a reality today and requires a workaround. Thus far, I have been wrapping interface types with a structure. An example from my current work:

type TemplateServiceStruct struct {
	services.TemplateService
}

func createTemplateService(cc dependencies.ComponentCache, _ string) (interface{}, error) {
	ts, err := services.NewTemplateService(launcherImage, migratorImage)
	return &TemplateServiceStruct{
		ts,
	}, err
}

And the corresponding accessor:

func GetTemplateService(cc dependencies.ComponentCache) *TemplateServiceStruct {
	return cc.Fetch(reflect.TypeOf((*TemplateServiceStruct)(nil))).(*TemplateServiceStruct)
}

Which is then further unwrapped in the calling code:

var templateService services.TemplateService
templateService = GetTemplateService(CC).TemplateService

I hope to find a better way to handle interfaces in the future.

Follow on work

Code generation

This approach requires a lot of boilerplate code, which could easily be generated in a go generate step. A template version would look something like this:

func Get{{ Tname }}(cc dependencies.ComponentCache) *{{ T }} {
	t, ok := cc.Fetch(reflect.TypeOf((*{{ T }})(nil))).(*{{ T }})
	if !ok {
		panic("fetched component is not a *{{ T }}")
	}
	return t
}

func create{{ Tname }}(cc dependencies.ComponentCache, _ string) (interface{}, error) {
	{{ Tbody }}
}

Separate repository

I’ve started working on this code in the context of Kubevirt. It should be pulled out into its own repository.

Split cache from factory

The factories should not be directly linked to the cache. One set of factories should be capable of composing multiple sets of components. The clear method should be replaced by dropping the cache completely and creating a whole new set of components.

In this implementation, a factory can be registered over a previous registration of that factory. This is usually an error, but it makes replacing factories for unit tests possible. A better solution is to split the factory registration into stages, so that factories required for unit tests are mutually exclusive with factories required for live deployment. In this scheme, re-registering a component would raise a panic.

Pre-activate components

A cache should allow for activating all components up front, in order to ensure that none of them panics upon construction. This is essential to avoid panics that happen long after an application starts, triggered via uncommon code paths.

Multiple level caches

Caches and factories should be able to work at multiple levels. For example, a web framework might specify request, session, and global components. If a factory is defined at the global level, the user should still be able to access it from the request level. The resolution and creation logic is roughly:

func (cc ComponentCache) FetchComponent(Type reflect.Type, which string) interface{} {
	// Assumes ComponentCache gains a parent *ComponentCache field.
	key := ComponentKey{Type, which}
	if component, ok := cc.components[key]; ok {
		return component
	} else if factory, ok := cc.factories[key]; ok {
		//IDEALLY locked on a per key basis.
		component, err := factory(cc, which)
		if err != nil {
			panic(err)
		}
		cc.components[key] = component
		return component
	} else if cc.parent != nil {
		return cc.parent.FetchComponent(Type, which)
	} else {
		panic("no factory registered for " + Type.String())
	}
}

This allows caches to exist in a DAG structure. Object lifetimes are sorted from shortest to longest: an object can point at another object either within the same cache or at one with a longer lifetime in a parent cache, chained up the ancestry.

New badge: LinuxCon Beijing 2017 !

Posted by Fedora Badges on June 20, 2017 03:02 PM
LinuxCon Beijing 2017: You joined Fedora-related events at LinuxCon Beijing 2017

Fedora Workstation 26 and beyond

Posted by Christian F.K. Schaller on June 20, 2017 12:41 PM

It feels like it has been too long since I did another Fedora Workstation update. We spend a lot of time trying to figure out how we can best spend our resources to produce the best desktop possible for our users, because even though Red Hat invests more into the Linux desktop than any other company by quite a margin, our resources are still far from limitless. So we have a continuous effort of asking ourselves whether each of the areas we are investing in is among the ones that give our users the things they need most. Below is a sampling of the things we are working on.

Improving integration of the NVidia binary driver
This has been ongoing for quite a while, but things have started to land now. Hans de Goede and Simone Caronni have been collaborating, building on the work by NVidia and Adam Jackson around glvnd. If you set up Simone's NVidia repository hosted on negativo17, you will be able to install the NVidia driver without any conflicts with the Mesa stack, and thanks to Hans' work you should be fairly sure that even if the NVidia driver stops working with a given kernel update, you will smoothly transition back to the open source Nouveau driver. I have been testing it on my own Lenovo P70 system for the last week and it seems to work well under X. That said, once you install the binary NVidia driver, that is what you are running on, which is of course not the behaviour you want from a hybrid graphics system. Fixing that last issue requires further collaboration between us and NVidia.
Related to this, Adam Jackson is currently working on a project he calls glxmux. glxmux will allow you to have more than one GLX implementation on the system, so that you can switch between Mesa GLX for the Intel integrated graphics card and NVidia GLX for the binary driver. While we can make no promises, we hope to have the framework in place for Fedora Workstation 27. Having that in place should allow us to create a solution where you only use the NVidia driver when you want the extra graphics power. That will of course require significant work from NVidia to enable on their side, so I can't give a definite timeline for when all the puzzle pieces are in place. Just be assured we are working on it and talking regularly to NVidia about it. I will let you know here as soon as things come together.

On the Wayland side, Jonas Ådahl is putting the final touches on hybrid graphics support.

Fleet Commander ready for take-off
Another major project we have been working on for a long time is Fleet Commander. Fleet Commander is a tool that allows you to manage Fedora and RHEL desktops centrally. It is targeted at, for instance, universities or companies with tens, hundreds, or thousands of workstation installations. It gives you a graphical, browser-based UI (accessible through Cockpit) to create configuration profiles and deploy them across your organization. Currently it allows you to control anything that has a GSettings key associated with it, like enabling/disabling extensions and setting configuration options in GTK+ and GNOME applications. It allows you to configure NetworkManager settings, so if you are updating the company VPN or proxy settings you can easily push those changes out to all users in the organization, or quickly migrate Evolution email settings to a new email server. The tool also allows you to control recommended applications in the Software Center and set bookmarks in Firefox. There is also support for controlling settings inside LibreOffice.

All these features can be set and controlled at either the user level, the group level, or organization-wide, thanks to the close integration we have with the FreeIPA suite of tools. The data is stored in your organization's LDAP server alongside other user information, so the clients don't need to connect to a new service for this. While it is not there in this initial release, we will also support Active Directory in the future.

The initial release and Fleet Commander website will be out alongside Fedora Workstation 26.

PipeWire
I talked about PipeWire before, when it was still called Pinos, but the scope and ambition of the project have changed significantly since then. Last time I spoke about it, the goal was just to create something that could be considered a video equivalent of PulseAudio. Wim Taymans, who you might know as a co-creator of GStreamer and a major PulseAudio contributor, has since expanded the scope, and PipeWire now aims at unifying Linux audio and video. The long-term goal is for PipeWire to not only provide handling of video streams, but also handle all kinds of audio. Due to this, Wim has been spending a lot of time making sure PipeWire can handle audio in a way that not only addresses the PulseAudio use cases, but also the ones handled by Jack today. A big part of the motivation for this is that we want to make Fedora Workstation the best place to create content, and we want the pro-audio crowd to be first-class citizens of our desktop.

At the same time we don't want to make this another painful subsystem transition, so we will need to ensure that PulseAudio applications can still run without modification.

We expect to start shipping PipeWire with Fedora Workstation 27, but at that point it will only handle video. We need this both to enable good video handling for Flatpak applications through a video portal, and to provide an API for applications that want to do screen capture under Wayland, like web browsers offering screen sharing. We will then bring the audio features on board in subsequent releases, as we try to work with the Jack and PulseAudio communities to make this a joint effort. We are also working on a proper website for PipeWire.

Red Hat developer integration
A feature we are quite excited about is the integration of support for the Red Hat developer account system into Fedora. This means that you should be able to create a Red Hat developer account through GNOME Online Accounts, and once you have that account set up you should be able to easily create Red Hat Enterprise Linux virtual machines or containers on your Fedora system. This is a crucial piece of the developer focus that we want the workstation to have, and one that we think will make a lot of developers' lives easier. We were originally hoping to have this ready for Fedora Workstation 26, but at the moment it looks more likely to hit Fedora Workstation 27. We will keep you up to date as this progresses.

Fractional scaling for HiDPI systems
Fedora Workstation has been leading the charge in supporting HiDPI on Linux, and we hope to build on that with the current work to enable fractional scaling support. Since we introduced HiDPI support we have been improving it step by step; for instance, last year we introduced support for different DPI levels per monitor for Wayland applications. The fractional scaling work takes this a step further. The biggest problem it will resolve is that for certain monitor sizes the current scaling options leave things either too small or too big. With fractional scaling support we will introduce intermediate steps, so that you can scale your interface 1.5x instead of having to go all the way to 2x. The set of technologies we are developing for fractional scaling should also allow us to provide better scaling for XWayland applications, as it provides methods for scaling that do not need direct support from the windowing system or toolkit.

GNOME Shell performance
Carlos Garnacho has been doing some great work recently improving the general performance of GNOME Shell. This comes on top of his earlier performance work, which was very well received. How fast or slow GNOME Shell feels is often a subjective thing, but reducing overhead where we can is never a bad thing.

Flatpak building
Owen Taylor has been working hard on putting the pieces in place to start large-scale Flatpak building in Fedora. You might see a couple of test Flatpaks appear in the Fedora Workstation 26 timeframe, but the goal is to have a huge Flatpak catalog ready in time for Fedora Workstation 27. Essentially, we are making it very simple for a Fedora maintainer to build a Flatpak of the application they maintain through the Fedora package building infrastructure and push that Flatpak into a central Flatpak registry. And while this is mainly meant to benefit Fedora users, there is of course nothing stopping other distributions from offering these Flatpak-packaged applications to their users as well.

Atomic Workstation
Another effort that is marching forward is what we call Atomic Workstation. The idea here is to have an immutable OS image, kind of like what you see on, for instance, Android devices. The advantage of this is that the core of the operating system gets tested and deployed as a unit, and the chance of users ending up with broken systems decreases significantly, as we don't need to rely on packages getting applied in the correct order or scripts executing as expected on each individual workstation out there. This effort is largely based on the Project Atomic effort, and the end goal is to have an image-based OS install with Flatpak-based applications on top of it. If you are very adventurous and/or want to help out with this effort, you can get the ISO image installer for Atomic Workstation here.

Firmware handling
Our Linux Firmware project is still going strong, with new features being added and new vendors signing on. As Richard Hughes recently blogged, the latest vendor joining the effort is Logitech, who will now upload their firmware into the service so that you can keep your Logitech peripherals updated through it. It is worth pointing out how we worked with Logitech to make this happen: Richard worked on the special tooling needed, reducing the threshold for Logitech to start offering their firmware through the service. We are having similar discussions and collaborations with other vendors, so expect to see more. At this point I tend to recommend people get a Dell to run Linux, due to their strong support for efforts such as the Linux Firmware Service, but other major vendors are in the final stages of testing, so expect more major vendors to start pushing firmware updates soon.

High Dynamic Range
The next big thing in the display technology field is HDR (High Dynamic Range). HDR allows for deeper, more vibrant colours and is a feature seen on a lot of new TVs these days; game consoles like the PlayStation 4 support it. Computer monitors with this feature are now appearing on the market too, for instance the Dell UP2718Q. We want to ensure Fedora and Linux are leaders here, for the benefit of video and graphics artists using Fedora and Red Hat Enterprise Linux. We are thus kicking off an effort to make sure this technology matures as quickly as possible and is fully supported. We are not the only ones interested in this, so we will hopefully be collaborating with our friends at Intel, AMD and NVidia. We hope to have the first monitors delivered to our office within a few weeks.

Codecs
While playback these days has moved to streaming, where locally installed codecs matter less for the consumption use case, having a wide selection of codecs available is still important for media editing and creation, so we want you to be able to load a variety of old media files into your video editor, for instance. Luckily we are at a crossroads now where a lot of widely used codecs have had their essential patents expire (mp3, AC3 and more), while at the same time the industry focus has moved to royalty-free codec development (Opus, VP9, the Alliance for Open Media). We have been spending a lot of time with the Red Hat legal team trying to clear these codecs, which resulted in mp3 and AC3 now shipping in Fedora Workstation. We have more codecs on the way, though, so this effort is in no way over. My goal is that over the course of this year, software patents will come to be considered a thing of the past as an issue for audio and video codecs on Linux. I would like to thank the Red Hat legal team for their support on this issue; they have had to spend significant time on it, as a big company like Red Hat needs to do its own due diligence when it comes to these things, and we can’t just trust statements from random people on the internet that these codecs are now free to ship.

Battery life
We have been looking at this for a while now and hope to be able to start sharing information with users on which laptops will have good battery life under Fedora. Christian Kellner is now our point man on battery life, and he has taken up improving the Battery Bench tool that Owen Taylor wrote some time ago.

QtGNOME platform
We will have a new version of the QtGNOME platform in Fedora 26. For those of you who have not yet heard of this effort, it is a set of themes and tools to ensure that Qt applications run without any major issues under GNOME 3. With the new version, the theming expands to include the accessibility and dark themes in Adwaita, meaning that if you switch to one of these themes under GNOME Shell, it will also switch your Qt applications over. We are also making sure things like cut-and-paste and drag-and-drop work well. The version in Fedora Workstation 26 is a big step forward for this effort and should make Qt applications first-class citizens on your Fedora Workstation desktop.

Wayland polish
Ever since we switched the default to Wayland, we have kept the pressure up, fixing bugs and finding solutions for corner cases. The result should be an improved Wayland experience in Fedora Workstation 26. A big thanks to Olivier Fourdan, Jonas Ådahl and the whole Wayland community for their continued efforts here. One major item Jonas is working on is improved fractional scaling, to ensure that your desktop scales to an optimal size on HiDPI displays of various sizes. What we currently have is limited to 1x or 2x, which is either too small or too big for some screens, but with this work you can also do 1.5x scaling. He is also preparing an API that will allow screen sharing under Wayland, so that, for instance, sharing your slides over video conferencing can work under Wayland.

Running CDK 3.0 on Fedora 25

Posted by RHEL Developer on June 20, 2017 11:00 AM

Red Hat Container Development Kit (CDK) provides a Container Development Environment (CDE) that allows users to build a virtualized environment for OpenShift. This environment is similar to the user’s production environment and does not need other hardware or a physical cluster. CDK is designed to work on a single user’s desktop computer.

The following instructions install and use CDK on Fedora 25, but can also be used for earlier versions of Fedora.

A significant difference between CDK version 2.x and 3.x is that 3.x uses Minishift as the front-end for virtualized environments while CDK 2.x used Vagrant for this purpose. As a result, the CDK 3.0 installation process is significantly simpler.

To install CDK 3.0 on Fedora:

1. Set up your Virtualization Environment

2. Install and Configure the CDK Software Components

3. Start CDK

Set up your virtualization environment
You need to first install the virtualization software, KVM/libvirt in this case, and then install additional Docker plugins to communicate with the virtualization software.

Install the software that supports virtualization on Fedora as follows:

1. Install KVM and libvirt:

~]$ sudo dnf group install with-optional virtualization

2. Download the driver plugin required for kvm support:

~]$ sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm
~]$ sudo chmod +x /usr/local/bin/docker-machine-driver-kvm

3. Launch the libvirt daemon and configure it to start at boot:

~]$ sudo systemctl start libvirtd
~]$ sudo systemctl enable libvirtd

4. Enable permissions required to use libvirt. Create a new group and activate it like this (logging out and back in should also have the same effect, for our purposes):

~]$ sudo gpasswd -a ${USER} libvirt
~]$ newgrp libvirt

5. Restart the libvirt and PolicyKit services for the changes to take effect:

~]$ sudo systemctl restart libvirtd
~]$ sudo systemctl restart polkit

Note: If you get an error “PolicyKit daemon disconnected from the bus”, the workstation is no longer a registered authentication agent. Run the `systemctl status polkit` command to troubleshoot this problem. If the status is active (running), you can continue the installation.

Install and configure CDK software components
Download Red Hat CDK from the Red Hat Developers website, and prepare it for installation:

1. Download the CDK software.

The following steps assume that you have downloaded CDK in the ~/Downloads directory. The filename should be ~/Downloads/cdk-3.0-minishift-linux-amd64.

2. Create a directory to store the download permanently, and copy it there:

~]$ mkdir -p ~/bin
~]$ cp ~/Downloads/cdk-3.0-minishift-linux-amd64 ~/bin/minishift
~]$ chmod +x ~/bin/minishift
~]$ export PATH=$PATH:$HOME/bin
~]$ echo 'export PATH=$PATH:$HOME/bin' >> ~/.bashrc

Note: `minishift` needs to be in your $PATH. If that’s not easily possible, you can run it as `./minishift` from the installation directory containing minishift.

3. Configure minishift. This will create the directory "$HOME/.minishift", which includes the virtual machine image and configuration files.

~]$ minishift setup-cdk

Starting CDK

You must start the box using minishift. CDK includes a virtualized Red Hat Enterprise Linux environment running in a kvm / qemu virtual machine. This Red Hat Enterprise Linux environment provides you with a single-user version of OpenShift and Kubernetes.

1. Add the following lines to ~/.bashrc to register the virtual machine running Red Hat Enterprise Linux. Set MINISHIFT_USERNAME and MINISHIFT_PASSWORD to the credentials for redhat.com that you also use to install other Red Hat Enterprise Linux systems:

~]$ echo "export MINISHIFT_USERNAME=\"$MINISHIFT_USERNAME\"" >> ~/.bashrc
~]$ echo "export MINISHIFT_PASSWORD=\"$MINISHIFT_PASSWORD\"" >> ~/.bashrc

2. Start CDK with OpenShift Container Platform setup as follows:

~]$ minishift start

Note: This returns a detailed message with tips to access the OpenShift console and CLI.

3. Verify that your kvm VM is running, to ensure you are ready to use CDK, as follows:

~]$ minishift status

Congratulations, CDK is now running on your Fedora 25 desktop!

Whether you are new to containers or experienced with them, downloading this cheat sheet can assist you with tasks you haven’t done lately.


Run OpenShift Locally with Minishift

Posted by Fedora Magazine on June 20, 2017 08:00 AM

OpenShift Origin is a distribution of Kubernetes optimized for continuous application development and multi-tenant deployment. OpenShift adds developer- and operations-centric tools on top of Kubernetes. This helps small and large teams rapidly develop applications, scale and deploy easily, and maintain an app throughout a long-term lifecycle. Minishift helps you run OpenShift locally by running a single-node OpenShift cluster inside a VM. With Minishift, you can try out OpenShift or develop with it daily on your local host. Under the hood it uses libmachine for provisioning VMs, and OpenShift Origin for running the cluster.

Installing and Using Minishift


Minishift requires a hypervisor to start the virtual machine on which the OpenShift cluster is provisioned. Make sure KVM is installed and enabled on your system before you start Minishift on Fedora.

First, install libvirt and qemu-kvm on your system.

sudo dnf install libvirt qemu-kvm

Then, add yourself to the libvirt group to avoid sudo.

sudo usermod -a -G libvirt <username>

Update your current session for the group change to take effect.

newgrp libvirt

Next, start and enable the virtlogd and libvirtd services.

sudo systemctl start virtlogd
sudo systemctl enable virtlogd
sudo systemctl start libvirtd
sudo systemctl enable libvirtd

Finally, install the docker-machine-kvm driver binary to provision a VM, and make it executable. The instructions below use version 0.7.0.

sudo curl -L https://github.com/dhiltgen/docker-machine-kvm/releases/download/v0.7.0/docker-machine-driver-kvm -o /usr/local/bin/docker-machine-driver-kvm
sudo chmod +x /usr/local/bin/docker-machine-driver-kvm


Download the archive for your operating system from the releases page and unpack it. At the time of this writing, the latest version is 1.1.0:

wget https://github.com/minishift/minishift/releases/download/v1.1.0/minishift-1.1.0-linux-amd64.tgz
tar -xvf minishift-1.1.0-linux-amd64.tgz

Copy the contents of the directory to your preferred location.

cp minishift ~/bin/minishift

If your personal ~/bin folder is not in your PATH environment variable already (use echo $PATH to check), add it:

export PATH=~/bin:$PATH

Get started

Run the following command. The output will look similar to below:

$ minishift start
Starting local OpenShift cluster using 'kvm' hypervisor...
OpenShift server started.
The server is accessible via web console at:

You are logged in as:
User:     developer
Password: developer

To login to your Minishift installation as administrator:
oc login -u system:admin

This process performs the following steps:

  • Downloads the latest ISO image based on boot2docker (~40 MB)
  • Starts a VM using libmachine
  • Downloads OpenShift client binary (oc)
  • Caches both oc and the ISO image into your $HOME/.minishift/cache folder
  • Finally, provisions a single-node OpenShift cluster on your workstation

Now, use minishift oc-env to display the command to add the oc binary to your PATH. The output of oc-env differs depending on the operating system and shell.

$ minishift oc-env
export PATH="/home/john/.minishift/cache/oc/v1.5.0:$PATH"
# Run this command to configure your shell:
# eval $(minishift oc-env)

Deploying an application

OpenShift provides various sample applications, such as templates, builder applications, and quickstarts. The following steps deploy a sample Node.js application from the command line.

First, create a Node.js example app.

oc new-app https://github.com/openshift/nodejs-ex -l name=myapp

Then, track the build log until the app is built and deployed.

oc logs -f bc/nodejs-ex

Next, expose a route to the service.

oc expose svc/nodejs-ex

Now access the application.

minishift openshift service nodejs-ex -n myproject

To stop the service, use the following command:

minishift stop

Refer to the official documentation for getting started with a single node OpenShift cluster.


We’d love to get your feedback. If you hit a problem, please raise an issue in the issue tracker. Please search through the listed issues, though, before creating a new one. It’s possible a similar issue is already open.


The community hangs out on the IRC channel #minishift on Freenode (https://freenode.net). You’re welcome to join, participate in the discussions, and contribute.


First Public Presentation of the Fedora + GNOME group

Posted by Julita Inca Chiroque on June 20, 2017 07:25 AM

A group of students from different universities has gathered to learn Linux in depth. We have started with the GNOME Peru Challenge on Fedora 25, which basically consists in fixing a bug. To achieve that, we have followed an empirical schedule that includes installing Fedora 25, using GNOME apps such as Pomodoro, Clock and Maps, as well as others like GIMP, building some modules, and working with Python, to finally get to GTK+.

The effort of more than five weeks is going to be presented for the very first time as a group at UPN, at a conference called "I Forum of Entrepreneurship and Technologies". A special thanks to Fabio Duran from GNOME Chile, who is constantly and kindly helping this Peruvian group. Online documentation is usually in English or outdated, and Fabio gives us a solid grasp of the GNOME technologies. Thanks GNOME! We also thank Fedora, which has supported two major events in Lima that helped this group get stronger: Linux Playa and the Install Fest. Thanks Fedora! So far, we have scheduled the lightning talks in this order:

Julita Inca will present the projects and the work of spreading the word through 7 years
Solanch Ccasas will share her experiences installing Fedora 25 and using the communication channels
Leyla Marcelo will cover the user experience of Fedora + GNOME, GIMP, LibreOffice, gedit and Nautilus
Alex and Toto will cover basic Linux commands, installing packages, and network configuration, including WiFi
Felipe Moreno will explain what GTK+ is and his work with Python
Mario Antizana will present Mechatronics Engineering projects on Fedora

Can’t wait for our first presentation as a local group! 🙂

Filed under: FEDORA, GNOME Tagged: community, conference Linux, fedora, Fedora + GNOME, Fedora + GNOME community, Fedora + GNOME group, FEDORA 25, GNOME, I Foro de Tecnologias y Emprendimiento 2017, Julita Inca, Julita Inca Chiroque, UPN

For storyboarding my process starts with making ALOT of...

Posted by Angela Pagan (Outreachy) on June 20, 2017 03:31 AM

For storyboarding, my process starts with making a lot of thumbnails, which are pretty much just pages and pages of scribbles that help me figure out the composition of the final sketches. Over the course of thumbnailing they get progressively more legible with each iteration as I make composition decisions. This is the final iteration of thumbnails before I start to make truly legible, clean, and pretty sketches.

Episode 52 - You could have done it right, but you didn't

Posted by Open Source Security Podcast on June 20, 2017 02:01 AM
Josh and Kurt talk about the new Stack Clash flaw, Grenfell Tower, risk management, and backwards compatibility.

Download Episode

Show Notes

My first fedora package review

Posted by Jakub Kadlčík on June 20, 2017 12:00 AM

Recently I’ve done my first Fedora package review, and this very short post is about the usage of the fedora-review tool.


Standard usage is very straightforward. You just need to know the bug ID of the review request

fedora-review -b 123456

It will complain that you should build the package in rawhide, so this is basically the way to go

fedora-review -b 123456 --mock-config fedora-rawhide-x86_64

My usage

Since this was my first review and I certainly lack a lot of experience in the field of packaging, I didn’t want to just blindly give tips on what should be changed and how. I wanted to test it first.

I took the spec file and sources from what fedora-review generated and put them into ~/rpmbuild/SPECS and ~/rpmbuild/SOURCES. I suppose it would also be possible to obtain the spec directly from Bugzilla, open it, and see how the sources should be obtained.

Then I could edit it as I wanted, build it, and see whether my suggestions were helpful or not

rpmbuild -bs ~/rpmbuild/SPECS/foo.spec
fedora-review -rn ~/rpmbuild/SRPMS/foo-0.2-1.fc25.src.rpm --mock-config fedora-rawhide-x86_64

Take part in the test day dedicated to the Atomic / Cloud edition

Posted by Charles-Antoine Couret on June 19, 2017 11:17 PM

Today, Tuesday June 20, is a day dedicated to a specific set of tests: the Atomic / Cloud image of Fedora. During the development cycle, the Quality Assurance team dedicates a few days to particular components or new features, in order to surface as many issues as possible on the subject.

The team also provides a list of specific tests to run. You just need to follow them, compare your result with the expected one, and report it.

What does this test day cover?

The Atomic / Cloud edition of Fedora is one of the project's three official products, alongside Workstation and Server. Its goal is to be a very minimal image that can be instantiated many times within a Cloud infrastructure to fulfill the role of a service. However, unlike the two other products, the Cloud edition is updated very frequently (new images are available every few weeks, compared to 6-7 months on average for the others).

Today's tests cover:

  • Whether the image boots correctly, allows logging in, and starts its base services properly;
  • Whether Docker and Atomic management (installation, update, rollback) works correctly;
  • Launching applications;
  • Compatibility with the Amazon and OpenStack clouds.

If you are interested in the Cloud side of this image, I invite you to test it, as it gets relatively little feedback. Any help is appreciated, thank you.

How can you take part?

You can go to the test page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC to get a hand in the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, you should report it on Bugzilla. If you are not sure how, don't hesitate to consult the corresponding documentation.

Also, even though a specific day is dedicated to these tests, you can still run them a few days later without any problem! The results will remain broadly relevant.

casync — A tool for distributing file system images

Posted by Lennart Poettering on June 19, 2017 10:00 PM

Introducing casync

In the past months I have been working on a new project: casync. casync takes inspiration from the popular rsync file synchronization tool as well as the probably even more popular git revision control system. It combines the idea of the rsync algorithm with the idea of git-style content-addressable file systems, and creates a new system for efficiently storing and delivering file system images, optimized for high-frequency update cycles over the Internet. Its current focus is on delivering IoT, container, VM, application, portable service or OS images, but I hope to extend it later in a generic fashion to become useful for backups and home directory synchronization as well (but more about that later).

The basic technological building blocks casync is built from are neither new nor particularly innovative (at least not anymore), however the way casync combines them is different from existing tools, and that's what makes it useful for a variety of use-cases that other tools can't cover that well.


I created casync after studying how today's popular tools store and deliver file system images. To briefly name a few: Docker has a layered tarball approach, OSTree serves the individual files directly via HTTP and maintains packed deltas to speed up updates, while other systems operate on the block layer and place raw squashfs images (or other archival file systems, such as ISO 9660) for download on HTTP shares (in the better cases combined with zsync data).

Neither of these approaches appeared fully convincing to me when used in high-frequency update cycle systems. In such systems, it is important to optimize towards a couple of goals:

  1. Most importantly, make updates cheap traffic-wise (for this most tools use image deltas of some form)
  2. Put boundaries on disk space usage on servers (keeping deltas between all version combinations clients might want to update between would suggest keeping a quadratically growing number of deltas on servers)
  3. Put boundaries on disk space usage on clients
  4. Be friendly to Content Delivery Networks (CDNs), i.e. serve neither too many small nor too many overly large files, and only require the most basic form of HTTP. Provide the repository administrator with high-level knobs to tune the average file size delivered.
  5. Simplicity to use for users, repository administrators and developers
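Point 2 above can be made concrete with a back-of-the-envelope count: pre-computing a delta between every pair of published versions requires a number of deltas that grows quadratically with the version count, whereas a shared chunk store only grows by the chunks each new release actually introduces. The helper name below is mine, purely for illustration:

```python
def pairwise_deltas(n_versions):
    """Deltas needed so a client on any version can jump directly to any
    other version (undirected pairs; directed deltas would double this)."""
    return n_versions * (n_versions - 1) // 2

# 10 published releases already need dozens of pre-computed deltas,
# 100 releases need thousands:
for n in (2, 10, 100):
    print(n, "versions ->", pairwise_deltas(n), "deltas")
```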

I don't think any of the tools mentioned above are really good on more than a small subset of these points.

Specifically: Docker's layered tarball approach dumps the "delta" question onto the feet of the image creators: the best way to make your image downloads minimal is basing your work on an existing image clients might already have, and inherit its resources, maintaining full history. Here, revision control (a tool for the developer) is intermingled with update management (a concept for optimizing production delivery). As container histories grow individual deltas are likely to stay small, but on the other hand a brand-new deployment usually requires downloading the full history onto the deployment system, even though there's no use for it there, and likely requires substantially more disk space and download sizes.

OSTree's serving of individual files is unfriendly to CDNs (as many small files in file trees cause an explosion of HTTP GET requests). To counter that OSTree supports placing pre-calculated delta images between selected revisions on the delivery servers, which means a certain amount of revision management, that leaks into the clients.

Delivering direct squashfs (or other file system) images is almost beautifully simple, but of course means every update requires a full download of the newest image, which is both bad for disk usage and generated traffic. Enhancing it with zsync makes this a much better option, as it can reduce generated traffic substantially at very little cost of history/meta-data (no explicit deltas between a large number of versions need to be prepared server side). On the other hand server requirements in disk space and functionality (HTTP Range requests) are minus points for the use-case I am interested in.

(Note: all the mentioned systems have great properties, and it's not my intention to badmouth them. The only point I am trying to make is that for the use case I care about, file system image delivery with high-frequency update cycles, each system comes with certain drawbacks.)

Security & Reproducibility

Besides the issues pointed out above I wasn't happy with the security and reproducibility properties of these systems. In today's world where security breaches involving hacking and breaking into connected systems happen every day, an image delivery system that cannot make strong guarantees regarding data integrity is out of date. Specifically, the tarball format is famously nondeterministic: the very same file tree can result in any number of different valid serializations depending on the tool used, its version and the underlying OS and file system. Some tar implementations attempt to correct that by guaranteeing that each file tree maps to exactly one valid serialization, but such a property is always only specific to the tool used. I strongly believe that any good update system must guarantee on every single link of the chain that there's only one valid representation of the data to deliver, that can easily be verified.
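To illustrate the determinism problem, here is a sketch of the kind of normalization a reproducible serialization format has to pin down: entry order, timestamps and ownership must all be fixed, otherwise the very same tree yields different archives. This uses Python's standard tarfile module purely for illustration; it is not casync's actual .catar serializer:

```python
import hashlib
import io
import tarfile

def deterministic_tar(files):
    """Serialize an in-memory file tree (name -> bytes) reproducibly:
    sorted entry order, fixed mtime, neutral ownership. An illustrative
    sketch of the normalization a format like .catar pins down."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w", format=tarfile.USTAR_FORMAT) as tar:
        for name in sorted(files):            # fixed entry order
            info = tarfile.TarInfo(name=name)
            info.size = len(files[name])
            info.mtime = 0                    # fixed timestamp
            info.uid = info.gid = 0           # neutral ownership
            info.uname = info.gname = ""
            tar.addfile(info, io.BytesIO(files[name]))
    return buf.getvalue()

tree = {"etc/hostname": b"example\n", "etc/motd": b"hello\n"}
reordered = dict(reversed(list(tree.items())))  # same tree, different order
a = hashlib.sha256(deterministic_tar(tree)).hexdigest()
b = hashlib.sha256(deterministic_tar(reordered)).hexdigest()
assert a == b   # one tree, one serialization, one digest
```

Drop any of these normalizations (say, use real mtimes or raw directory order) and the same tree maps to several valid archives, which is exactly the ambiguity described above.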

What casync Is

So much about the background why I created casync. Now, let's have a look what casync actually is like, and what it does. Here's the brief technical overview:

Encoding: Let's take a large linear data stream, split it into variable-sized chunks (the size of each being a function of the chunk's contents), and store these chunks in individual, compressed files in some directory, each file named after a strong hash value of its contents, so that the hash value may be used as a key for retrieving the full chunk data. Let's call this directory a "chunk store". At the same time, generate a "chunk index" file that lists these chunk hash values plus their respective chunk sizes in a simple linear array. The chunking algorithm is supposed to create variable, but similarly sized chunks from the data stream, and do so in a way that the same data results in the same chunks even if placed at varying offsets. For more information see this blog story.

Decoding: Let's take the chunk index file, and reassemble the large linear data stream by concatenating the uncompressed chunks retrieved from the chunk store, keyed by the listed chunk hash values.
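The encode/decode cycle just described can be sketched in a few lines of Python. To be clear, every concrete choice here is a stand-in I picked for illustration, not casync's actual format: a windowed CRC-32 instead of buzhash, zlib instead of xz, and a plain dict instead of an on-disk chunk store directory:

```python
import hashlib
import random
import zlib

def chunk_stream(data, window=48, mask=(1 << 12) - 1):
    """Content-defined chunking sketch: cut a boundary wherever a checksum
    of the trailing window has its low bits all zero. A windowed CRC-32
    stands in for a real rolling hash (which would update in O(1))."""
    start = 0
    for i in range(window, len(data)):
        if zlib.crc32(data[i - window:i]) & mask == 0:
            yield data[start:i]
            start = i
    yield data[start:]

def encode(data, store):
    """Compress each chunk into the store, keyed by its SHA-256 digest,
    and return the chunk index: a list of (hash, size) pairs."""
    index = []
    for chunk in chunk_stream(data):
        digest = hashlib.sha256(chunk).hexdigest()
        store[digest] = zlib.compress(chunk)  # casync itself uses xz
        index.append((digest, len(chunk)))
    return index

def decode(index, store):
    """Reassemble the stream by concatenating the chunks the index names."""
    return b"".join(zlib.decompress(store[digest]) for digest, _ in index)

random.seed(0)
data = bytes(random.getrandbits(8) for _ in range(200_000))
store = {}
index = encode(data, store)
assert decode(index, store) == data
```

Re-encoding a second, similar stream into the same store adds only the chunks that actually differ; identical chunks land under identical keys and are stored once.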

As an extra twist, we introduce a well-defined, reproducible, random-access serialization format for file trees (think: a more modern tar), to permit efficient, stable storage of complete file trees in the system, simply by serializing them and then passing them into the encoding step explained above.

Finally, let's put all this on the network: for each image you want to deliver, generate a chunk index file and place it on an HTTP server. Do the same with the chunk store, and share it between the various index files you intend to deliver.

Why bother with all of this? Streams with similar contents will result in mostly the same chunk files in the chunk store. This means it is very efficient to store many related versions of a data stream in the same chunk store, thus minimizing disk usage. Moreover, when transferring linear data streams chunks already known on the receiving side can be made use of, thus minimizing network traffic.

Why is this different from rsync or OSTree, or similar tools? Well, one major difference between casync and those tools is that we remove file boundaries before chunking things up. This means that small files are lumped together with their siblings and large files are chopped into pieces, which permits us to recognize similarities in files and directories beyond file boundaries, and makes sure our chunk sizes are pretty evenly distributed, without the file boundaries affecting them.
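The payoff of content-defined boundaries is easy to demonstrate: after a small insertion near the front of a stream, boundaries re-synchronize with the content, while fixed-size blocks all shift and nothing matches anymore. In the sketch below, a windowed CRC-32 chunker is a simplified stand-in for casync's buzhash approach, not the real implementation:

```python
import hashlib
import random
import zlib

def cdc_chunks(data, window=48, mask=(1 << 10) - 1):
    """Simplified content-defined chunker: boundaries depend only on the
    local window contents, so they survive upstream insertions."""
    start, out = 0, []
    for i in range(window, len(data)):
        if zlib.crc32(data[i - window:i]) & mask == 0:
            out.append(data[start:i])
            start = i
    out.append(data[start:])
    return out

def fixed_chunks(data, size=1024):
    """Naive fixed-size blocking, for comparison."""
    return [data[i:i + size] for i in range(0, len(data), size)]

def shared_fraction(chunks_a, chunks_b):
    a = {hashlib.sha256(c).digest() for c in chunks_a}
    b = {hashlib.sha256(c).digest() for c in chunks_b}
    return len(a & b) / len(a)

random.seed(0)
v1 = bytes(random.getrandbits(8) for _ in range(100_000))
v2 = v1[:1000] + b"inserted bytes" + v1[1000:]   # small edit near the front

cdc_shared = shared_fraction(cdc_chunks(v1), cdc_chunks(v2))
fixed_shared = shared_fraction(fixed_chunks(v1), fixed_chunks(v2))
print(f"content-defined: {cdc_shared:.0%} shared, fixed-size: {fixed_shared:.0%} shared")
```

With content-defined chunking only the chunks around the edit change, so almost everything is deduplicated between the two versions; with fixed-size blocks the 14-byte insertion shifts every subsequent block and nothing is shared.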

The "chunking" algorithm is based on the buzhash rolling hash function. SHA256 is used as the strong hash function to generate digests of the chunks. xz is used to compress the individual chunks.

Here's a diagram, hopefully explaining a bit how the encoding process works, despite my crappy drawing skills:


The diagram shows the encoding process from top to bottom. It starts with a block device or a file tree, which is then serialized and chunked up into variable sized blocks. The compressed chunks are then placed in the chunk store, while a chunk index file is written listing the chunk hashes in order. (The original SVG of this graphic may be found here.)


Note that casync operates on two different layers, depending on the use-case of the user:

  1. You may use it on the block layer. In this case the raw block data on disk is taken as-is, read directly from the block device, split into chunks as described above, compressed, stored and delivered.

  2. You may use it on the file system layer. In this case, the file tree serialization format mentioned above comes into play: the file tree is serialized depth-first (much like tar would do it) and then split into chunks, compressed, stored and delivered.

The fact that it may be used on both the block and file system layer opens it up for a variety of different use-cases. In the VM and IoT ecosystems shipping images as block-level serializations is more common, while in the container and application world file-system-level serializations are more typically used.

Chunk index files referring to block-layer serializations carry the .caibx suffix, while chunk index files referring to file system serializations carry the .caidx suffix. Note that you may also use casync as direct tar replacement, i.e. without the chunking, just generating the plain linear file tree serialization. Such files carry the .catar suffix. Internally .caibx are identical to .caidx files, the only difference is semantical: .caidx files describe a .catar file, while .caibx files may describe any other blob. Finally, chunk stores are directories carrying the .castr suffix.


Here are a couple of other features casync has:

  1. When downloading a new image you may use casync's --seed= feature: each block device, file, or directory specified is processed using the same chunking logic described above, and is used as preferred source when putting together the downloaded image locally, avoiding network transfer of it. This of course is useful whenever updating an image: simply specify one or more old versions as seed and only download the chunks that truly changed since then. Note that using seeds requires no history relationship between seed and the new image to download. This has major benefits: you can even use it to speed up downloads of relatively foreign and unrelated data. For example, when downloading a container image built using Ubuntu you can use your Fedora host OS tree in /usr as seed, and casync will automatically use whatever it can from that tree, for example timezone and locale data that tends to be identical between distributions. Example: casync extract http://example.com/myimage.caibx --seed=/dev/sda1 /dev/sda2. This will place the block-layer image described by the indicated URL in the /dev/sda2 partition, using the existing /dev/sda1 data as seeding source. An invocation like this could typically be used by IoT systems with an A/B partition setup. Example 2: casync extract http://example.com/mycontainer-v3.caidx --seed=/srv/container-v1 --seed=/srv/container-v2 /srv/container-v3 is very similar but operates on the file system layer, and uses two old container versions to seed the new version.

  2. When operating on the file system level, the user has fine-grained control on the meta-data included in the serialization. This is relevant since different use-cases tend to require a different set of saved/restored meta-data. For example, when shipping OS images, file access bits/ACLs and ownership matter, while file modification times hurt. When doing personal backups OTOH file ownership matters little but file modification times are important. Moreover different backing file systems support different feature sets, and storing more information than necessary might make it impossible to validate a tree against an image if the meta-data cannot be replayed in full. Due to this, casync provides a set of --with= and --without= parameters that allow fine-grained control of the data stored in the file tree serialization, including the granularity of modification times and more. The precise set of selected meta-data features is also always part of the serialization, so that seeding can work correctly and automatically.

  3. casync tries to be as accurate as possible when storing file system meta-data. This means that besides the usual baseline of file meta-data (file ownership and access bits), and more advanced features (extended attributes, ACLs, file capabilities), a number of more exotic items are stored as well, including Linux chattr(1) file attributes, as well as FAT file attributes (you may wonder why the latter: EFI is FAT, and /efi is part of the comprehensive serialization of any host). In the future I intend to extend this further, for example storing btrfs sub-volume information where available. Note that as described above every single type of meta-data may be turned off and on individually, hence if you don't need FAT file bits (and I figure it's pretty likely you don't), then they won't be stored.

  4. The user creating .caidx or .caibx files may control the desired average chunk length (before compression) freely, using the --chunk-size= parameter. Smaller chunks increase the number of generated files in the chunk store and increase HTTP GET load on the server, but also ensure that sharing between similar images is improved, as identical patterns in the images stored are more likely to be recognized. By default casync will use a 64K average chunk size. Tweaking this can be particularly useful when adapting the system to specific CDNs, or when delivering compressed disk images such as squashfs (see below).

  5. Emphasis is placed on making all invocations reproducible, well-defined and strictly deterministic. As mentioned above this is a requirement for reaching the intended security guarantees, but is also useful for many other use-cases. For example, the casync digest command may be used to calculate a hash value identifying a specific directory in all desired detail (use --with= and --without= to pick the desired detail). Moreover, the casync mtree command may be used to generate a BSD mtree(5) compatible manifest of a directory tree, .caidx or .catar file.

  6. The file system serialization format is nicely composable. By this I mean that the serialization of a file tree is the concatenation of the serializations of all files and file sub-trees located at the top of the tree, with zero meta-data references from any of these serializations into the others. This property is essential to ensure maximum reuse of chunks when similar trees are serialized.

  7. When extracting file trees or disk image files, casync will automatically create reflinks from any specified seeds if the underlying file system supports it (such as btrfs, ocfs, and future xfs). After all, instead of copying the desired data from the seed, we can just tell the file system to link up the relevant blocks. This works both when extracting .caidx and .caibx files — the latter of course only when the extracted disk image is placed in a regular raw image file on disk, rather than directly on a plain block device, as plain block devices have no concept of reflinks.

  8. Optionally, when extracting file trees, casync can create traditional UNIX hard-links for identical files in specified seeds (--hardlink=yes). This works on all UNIX file systems, and can save substantial amounts of disk space. However, it only works for the specific use-cases where the extracted trees are considered read-only afterwards, since any changes made to one tree will propagate to all other trees sharing the same hard-linked files; that's the nature of hard-links. In this mode, casync exposes OSTree-like behavior, which is built heavily around read-only hard-link trees.

  9. casync tries to be smart when choosing what to include in file system images. Implicitly, file systems such as procfs and sysfs are excluded from serialization, as they expose API objects, not real files. Moreover, the "nodump" (+d) chattr(1) flag is honored by default, permitting users to mark files to exclude from serialization.

  10. When creating and extracting file trees casync may apply an automatic or explicit UID/GID shift. This is particularly useful when transferring container images for use with Linux user namespacing.

  11. In addition to local operation, casync currently supports HTTP, HTTPS, FTP and ssh natively for downloading chunk index files and chunks (the ssh mode requires installing casync on the remote host, though; an sftp mode not requiring that should be easy to add). When creating index files or chunks, only ssh is supported as a remote back-end.

  12. When operating on block-layer images, you may expose locally or remotely stored images as local block devices. Example: casync mkdev http://example.com/myimage.caibx exposes the disk image described by the indicated URL as a local block device in /dev, on which you may then use the usual block device tools, such as mount or fdisk (read-only, though). Chunks are downloaded on access with high priority, and at low priority when idle in the background. Note that in this mode, casync also plays a role similar to "dm-verity", as all blocks are validated against the strong digests in the chunk index file before passing them on to the kernel's block layer. This feature is implemented through Linux's NBD kernel facility.

  13. Similarly, when operating on file-system-layer images, you may mount locally or remotely stored images as regular file systems. Example: casync mount http://example.com/mytree.caidx /srv/mytree mounts the file tree image described by the indicated URL as a local directory /srv/mytree. This feature is implemented through Linux's FUSE kernel facility. Note that special care is taken that the images exposed this way can be packed up again with casync make and are guaranteed to return the bit-by-bit identical serialization they were mounted from. No data is lost or changed while passing things through FUSE (OK, strictly speaking this is a lie, we do lose ACLs, but that's hopefully just a temporary gap to be fixed soon).

  14. In IoT A/B fixed size partition setups the file systems placed in the two partitions are usually much smaller than the partition size, in order to keep some room for later, larger updates. casync is able to analyze the super-block of a number of common file systems in order to determine the actual size of a file system stored on a block device, so that writing a file system to such a partition and reading it back again will result in reproducible data. Moreover this speeds up the seeding process, as there's little point in seeding the empty space after the file system within the partition.
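The seeding and deduplication behavior described in items 1 and 4 rests on content-defined chunking: cut points are derived from the data itself, so identical content produces identical chunks regardless of its offset. Here is a minimal sketch of the idea in Python — the hash, window size and chunk size are toy values, not casync's actual algorithm (which uses a buzhash and defaults to a 64K average chunk size):

```python
import random
import zlib

WINDOW = 16   # rolling window size (toy value)
MASK = 0x3F   # cut where the window hash has 6 low zero bits -> ~64-byte chunks

def chunk(data):
    """Cut `data` wherever a hash of the trailing WINDOW bytes matches a
    pattern. Cut decisions depend only on local content, so identical runs
    of data produce identical chunks regardless of their offset."""
    chunks, start = [], 0
    for i in range(len(data)):
        if i + 1 - start >= WINDOW and \
                zlib.crc32(data[i + 1 - WINDOW:i + 1]) & MASK == 0:
            chunks.append(data[start:i + 1])
            start = i + 1
    if start < len(data):
        chunks.append(data[start:])
    return chunks

random.seed(0)
blob = bytes(random.randrange(256) for _ in range(4096))
old = chunk(blob)
# Prepend a "new header": chunk boundaries re-synchronize right after it,
# so most chunks of the old image can still seed the new one.
new = chunk(b"XX-new-header-XX" + blob)
reused = set(old) & set(new)
print(f"{len(reused)} of {len(old)} old chunks reusable as seed")
```

Because cut decisions depend only on a small sliding window, chunk boundaries re-synchronize shortly after any insertion or change, which is exactly why an unrelated old image can still serve as a useful seed for a new one.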

Example Command Lines

Here's how to use casync, explained with a few examples:

$ casync make foobar.caidx /some/directory

This will create a chunk index file foobar.caidx in the local directory, and populate the chunk store directory default.castr located next to it with the chunks of the serialization (you can change the name for the store directory with --store= if you like). This command operates on the file-system level. A similar command operating on the block level:

$ casync make foobar.caibx /dev/sda1

This command creates a chunk index file foobar.caibx in the local directory describing the current contents of the /dev/sda1 block device, and populates default.castr in the same way as above. Note that you may as well read a raw disk image from a file instead of a block device:

$ casync make foobar.caibx myimage.raw

To reconstruct the original file tree from the .caidx file and the chunk store of the first command, use:

$ casync extract foobar.caidx /some/other/directory

And similar for the block-layer version:

$ casync extract foobar.caibx /dev/sdb1

or, to extract the block-layer version into a raw disk image:

$ casync extract foobar.caibx myotherimage.raw

The above are the most basic commands, operating on local data only. Now let's make this more interesting, and reference remote resources:

$ casync extract http://example.com/images/foobar.caidx /some/other/directory

This extracts the specified .caidx onto a local directory. This of course assumes that foobar.caidx was uploaded to the HTTP server in the first place, along with the chunk store. You can use any command you like to accomplish that, for example scp or rsync. Alternatively, you can let casync do this directly when generating the chunk index:

$ casync make ssh.example.com:images/foobar.caidx /some/directory

This will use ssh to connect to the ssh.example.com server, and then place the .caidx file and the chunks on it. Note that this mode of operation is "smart": the scheme will only upload chunks currently missing on the server side, and not re-transmit what is already available.

Note that you can always configure the precise path or URL of the chunk store via the --store= option. If you do not do that, then the store path is automatically derived from the path or URL: the last component of the path or URL is replaced by default.castr.
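That derivation rule is simple enough to sketch (illustration only; casync implements this internally, and --store= overrides it):

```python
# Default chunk-store derivation: replace the last component of the
# index file's path or URL with "default.castr".
import posixpath

def default_store(index: str) -> str:
    head, _last = posixpath.split(index)
    return posixpath.join(head, "default.castr")

print(default_store("http://example.com/images/foobar.caidx"))
# -> http://example.com/images/default.castr
print(default_store("foobar.caidx"))
# -> default.castr
```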

Of course, when extracting .caidx or .caibx files from remote sources, using a local seed is advisable:

$ casync extract http://example.com/images/foobar.caidx --seed=/some/existing/directory /some/other/directory

Or on the block layer:

$ casync extract http://example.com/images/foobar.caibx --seed=/dev/sda1 /dev/sdb2

When creating chunk indexes on the file system layer casync will by default store meta-data as accurately as possible. Let's create a chunk index with reduced meta-data:

$ casync make foobar.caidx --with=sec-time --with=symlinks --with=read-only /some/dir

This command will create a chunk index for a file tree serialization that supports three features above the absolute baseline: 1s granularity time-stamps, symbolic links and a single read-only bit. In this mode, none of the other meta-data bits are stored, including nanosecond time-stamps, full UNIX permission bits, file ownership, ACLs or extended attributes.
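One way to picture the --with=/--without= selection logic is as set operations over named feature flags. The sketch below is a toy model: feature names beyond those mentioned in the text (for example "nsec-time") and the exact baseline semantics are assumptions here, not casync's real feature list:

```python
# Toy model of --with=/--without= meta-data feature selection.
ALL_FEATURES = {
    "16bit-uids", "permissions", "sec-time", "nsec-time", "symlinks",
    "device-nodes", "fifos", "sockets", "read-only", "acl", "xattrs",
}

def select_features(with_=(), without=()):
    """--with= builds up from a minimal baseline (assumed empty here);
    --without= strips features from the most precise set."""
    if with_:
        return set(with_)
    return ALL_FEATURES - set(without)

reduced = select_features(with_=["sec-time", "symlinks", "read-only"])
precise = select_features(without=["acl"])
print(sorted(reduced))
```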

Now let's make a .caidx file available locally as a mounted file system, without extracting it:

$ casync mount http://example.com/images/foobar.caidx /mnt/foobar

And similar, let's make a .caibx file available locally as a block device:

$ casync mkdev http://example.com/images/foobar.caibx

This will create a block device in /dev and print the used device node path to STDOUT.

As mentioned, casync is big on reproducibility. Let's make use of that to calculate a digest identifying a very specific version of a file tree:

$ casync digest .

This digest will include all meta-data bits casync and the underlying file system know about. Usually, to make this useful you want to configure exactly what meta-data to include:

$ casync digest --with=unix .

This makes use of the --with=unix shortcut for selecting meta-data fields. Specifying --with=unix selects all meta-data that traditional UNIX file systems support. It is a shortcut for writing out: --with=16bit-uids --with=permissions --with=sec-time --with=symlinks --with=device-nodes --with=fifos --with=sockets.

Note that when calculating digests or creating chunk indexes you may also use the negative --without= option to remove specific features, starting from the most precise set:

$ casync digest --without=flag-immutable

This generates a digest with the most accurate meta-data, but leaves one feature out: chattr(1)'s immutable (+i) file flag.

To list the contents of a .caidx file use a command like the following:

$ casync list http://example.com/images/foobar.caidx

or

$ casync mtree http://example.com/images/foobar.caidx

The former command will generate a brief list of files and directories, not too different from tar t or ls -al in its output. The latter command will generate a BSD mtree(5) compatible manifest. Note that casync actually stores substantially more file meta-data than mtree files can express, though.

What casync isn't

  1. casync is not an attempt to minimize serialization and downloaded deltas to the extreme. Instead, the tool is supposed to find a good middle ground, that is good on traffic and disk space, but not at the price of convenience or requiring explicit revision control. If you care about updates that are absolutely minimal, there are binary delta systems around that might be an option for you, such as Google's Courgette.

  2. casync is not a replacement for rsync, git, zsync or anything like that. They have very different use-cases and semantics. For example, rsync permits you to directly synchronize two file trees remotely. casync just cannot do that, and it is unlikely it ever will.

Where next?

casync is supposed to be a generic synchronization tool. Its primary focus for now is delivery of OS images, but I'd like to make it useful for a couple other use-cases, too. Specifically:

  1. To make the tool useful for backups, encryption is missing. I have pretty concrete plans how to add that. When implemented, the tool might become an alternative to restic, Borg or tarsnap.

  2. Right now, if you want to deploy casync in real-life, you still need to validate the downloaded .caidx or .caibx file yourself, for example with some gpg signature. It is my intention to integrate with gpg in a minimal way so that signing and verifying chunk index files is done automatically.

  3. In the longer run, I'd like to build an automatic synchronizer for $HOME between systems from this. Each $HOME instance would be stored automatically in regular intervals in the cloud using casync, and conflicts would be resolved locally.

  4. casync is written in a shared library style, but it is not yet built as one. Specifically this means that almost all of casync's functionality is supposed to be available as C API soon, and applications can process casync files on every level. It is my intention to make this library useful enough so that it will be easy to write a module for GNOME's gvfs subsystem in order to make remote or local .caidx files directly available to applications (as an alternative to casync mount). In fact the idea is to make this all flexible enough that even the remoting back-ends can be replaced easily, for example to replace casync's default HTTP/HTTPS back-ends built on CURL with GNOME's own HTTP implementation, in order to share cookies, certificates, … There's also an alternative method to integrate with casync in place already: simply invoke casync as a sub-process. casync will inform you about a certain set of state changes using a mechanism compatible with sd_notify(3). In future it will also propagate progress data this way and more.

  5. I intend to add a new seeding back-end that sources chunks from the local network. After downloading the new .caidx file off the Internet, casync would then search for the listed chunks on the local network first before retrieving them from the Internet. This should speed things up on all installations that have multiple similar systems deployed in the same network.

Further plans are listed tersely in the TODO file.


FAQ

  1. Is this a systemd project? — casync is hosted under the github systemd umbrella, and the projects share the same coding style. However, the code-bases are distinct and without interdependencies, and casync works fine both on systemd systems and systems without it.

  2. Is casync portable? — At the moment: no. I only run Linux and that's what I code for. That said, I am open to accepting portability patches (unlike for systemd, which doesn't really make sense on non-Linux systems), as long as they don't interfere too much with the way casync works. Specifically this means that I am not too enthusiastic about merging portability patches for OSes lacking the openat(2) family of APIs.

  3. Does casync require reflink-capable file systems to work, such as btrfs? No it doesn't. The reflink magic in casync is employed when the file system permits it, and it's good to have it, but it's not a requirement, and casync will implicitly fall back to copying when it isn't available. Note that casync supports a number of file system features on a variety of file systems that aren't available everywhere, for example FAT's system/hidden file flags or xfs's projinherit file flag.

  4. Is casync stable? — I just tagged the first, initial release. While I have been working on it for quite some time and it is quite featureful, this is the first time I have advertised it publicly, and it has hence received very little testing outside of its own test suite. I am also not fully ready to commit to the stability of the current serialization or chunk index format. I don't see any breakages coming for it though. casync is pretty light on documentation right now, and does not even have a man page. I also intend to correct that soon.

  5. Are the .caidx/.caibx and .catar file formats open and documented? — casync is Open Source, so if you want to know the precise format, have a look at the sources for now. It's definitely my intention to add comprehensive docs for both formats, however. Don't forget this is just the initial version right now.

  6. casync is just like $SOMEOTHERTOOL! Why are you reinventing the wheel (again)? — Well, because casync isn't "just like" some other tool. I am pretty sure I did my homework, and that there is no tool just like casync right now. The tools coming closest are probably rsync, zsync, tarsnap, restic, but they are quite different beasts each.

  7. Why did you invent your own serialization format for file trees? Why don't you just use tar? That's a good question, and other systems — most prominently tarsnap — do use tar. However, as mentioned above, tar doesn't enforce reproducibility. It also doesn't really do random access: if you want to access some specific file you need to read every single byte stored before it in the tar archive to find it, which is of course very expensive. The serialization casync implements places a focus on reproducibility, random access, and meta-data control. Much like traditional tar it can still be generated and extracted in a streaming fashion, though.

  8. Does casync save/restore SELinux/SMACK file labels? Not at the moment. That's not because I wouldn't want it to, but simply because I am not a guru of either of these systems, and didn't want to implement something I do not fully grok nor can test. If you look at the sources you'll find that there are already some definitions in place that keep room for them though. I'd be delighted to accept a patch implementing this fully.

  9. What about delivering squashfs images? How well does chunking work on compressed serializations? – That's a very good point! Usually, if you apply a chunking algorithm to a compressed data stream (let's say a tar.gz file), then changing a single bit at the front will propagate into the entire remainder of the file, so that minimal changes will explode into major changes. Thankfully this doesn't apply that strictly to squashfs images, as squashfs provides random access to files and directories and thus breaks up its compression streams at regular intervals to make seeking easy. This is beneficial for systems employing chunking, such as casync, as it means single bit changes might affect their vicinity but will not explode in an unbounded fashion. In order to achieve the best results when delivering squashfs images through casync, the block sizes of squashfs and the chunk sizes of casync should be matched up (using casync's --chunk-size= option). How precisely to choose both values is left as a research subject for the user, for now.

  10. What does the name casync mean? – It's a synchronizing tool, hence the -sync suffix, following rsync's naming. It makes use of the content-addressable concept of git, hence the ca- prefix.

  11. Where can I get this stuff? Is it already packaged? – Check out the sources on GitHub. I just tagged the first version. Martin Pitt has packaged casync for Ubuntu. There is also an ArchLinux package. Zbigniew Jędrzejewski-Szmek has prepared a Fedora RPM that hopefully will soon be included in the distribution.
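The squashfs answer above claims that a single changed bit near the front of a plain compressed stream propagates through the remainder. This is easy to demonstrate, here with bzip2 (gzip's format differs in detail but suffers the same basic problem; squashfs limits the damage by compressing in independent blocks):

```python
import bz2

data = b"the quick brown fox jumps over the lazy dog. " * 1000
flipped = bytes([data[0] ^ 1]) + data[1:]   # flip a single bit up front

a, b = bz2.compress(data), bz2.compress(flipped)
# Length of the common prefix shared by the two compressed streams:
common = next((i for i, (x, y) in enumerate(zip(a, b)) if x != y),
              min(len(a), len(b)))
print(f"compressed streams share only a {common}-byte prefix of {len(a)} bytes")
```

A chunker working on the compressed streams therefore finds almost nothing in common, even though the uncompressed inputs differ by one bit.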

Should you care? Is this a tool for you?

Well, that's up to you really. If you are involved with projects that need to deliver IoT, VM, container, application or OS images, then maybe this is a great tool for you — but other options exist, some of which are linked above.

Note that casync is an Open Source project: if it doesn't do exactly what you need, prepare a patch that adds what you need, and we'll consider it.

If you are interested in the project and would like to talk about this in person, I'll be presenting casync soon at Kinvolk's Linux Technologies Meetup in Berlin, Germany. You are invited. I also intend to talk about it at All Systems Go!, also in Berlin.

All Systems Go! 2017 CfP Open

Posted by Lennart Poettering on June 19, 2017 10:00 PM

The All Systems Go! 2017 Call for Participation is Now Open!

We’d like to invite presentation proposals for All Systems Go! 2017!

All Systems Go! is an Open Source community conference focused on the projects and technologies at the foundation of modern Linux systems — specifically low-level user-space technologies. Its goal is to provide a friendly and collaborative gathering place for individuals and communities working to push these technologies forward.

All Systems Go! 2017 takes place in Berlin, Germany on October 21st+22nd.

All Systems Go! is a 2-day event with 2-3 talks happening in parallel. Full presentation slots are 30-45 minutes in length and lightning talk slots are 5-10 minutes.

We are now accepting submissions for presentation proposals. In particular, we are looking for sessions including, but not limited to, the following topics:

  • Low-level container executors and infrastructure
  • IoT and embedded OS infrastructure
  • OS, container, IoT image delivery and updating
  • Building Linux devices and applications
  • Low-level desktop technologies
  • Networking
  • System and service management
  • Tracing and performance measuring
  • IPC and RPC systems
  • Security and Sandboxing

While our focus is definitely more on the user-space side of things, talks about kernel projects are welcome too, as long as they have a clear and direct relevance for user-space.

Please submit your proposals by September 3rd. Notification of acceptance will be sent out 1-2 weeks later.

To submit your proposal now please visit our CFP submission web site.

For further information about All Systems Go! visit our conference web site.

systemd.conf will not take place this year; All Systems Go! takes its place. All Systems Go! welcomes all projects that contribute to Linux user space, which, of course, includes systemd. Thus, anything you think was appropriate for submission to systemd.conf is also fitting for All Systems Go!

Cockpit 143

Posted by Cockpit Project on June 19, 2017 02:00 PM

Cockpit with new Software Updates page

dgplug summer training 2017 is on

Posted by Kushal Das on June 19, 2017 06:38 AM

Yesterday evening we started the 10th edition of the dgplug summer training program. We had around 70 active participants in the session; a few people had informed us beforehand that they would not be available during the first session. We also knew that it coincided with an India-vs-Pakistan cricket match, which meant many Indian participants would miss day one (though it seems the Indian cricket team tried their level best to make sure that participants stopped watching the match :D ).

We started with the usual process: Sayan and /me explained the different rules related to the sessions, and also talked about IRC. The IRC channel #dgplug is not only a place to discuss technical things, but also a place where many dgplug members discuss everyday things. We ask the participants to stay online as long as possible in the initial days and to ask as many questions as they want. Asking questions is a very important part of these sessions, as many are scared to do so in public.

We also had our regular members in the channel during the session, and after the session ended, we got into other discussions as usual.

One thing I noticed was the high number of students participating from the Zakir Hussain College Of Engineering, Aligarh Muslim University, Aligarh, India. When I asked how so many of them came to be there, they said the credit goes to cran-cg (Chiranjeev Gupta), who motivated the first year students to take part in the session. Thank you cran-cg for not only taking part but also building a local group of Free Software users/developers. We also have Nisha, a fresh economics graduate, taking part in this year's program.

As usual, day one was on a Sunday, but from now on all the sessions will be on weekdays only, unless it is a special guest session where a weekend works better for our guest. Our next session is at 13:30 UTC today, in the #dgplug channel on the Freenode server. If you want to help, just be there :)

libinput 1.8 switching to git-like helper tool naming "libinput sometool"

Posted by Peter Hutterer on June 19, 2017 01:47 AM

I just released the first release candidate for libinput 1.8. Aside from the build system switch to meson, one of the more visible changes is that the helper tools have switched from a "libinput-some-tool" to a "libinput some-tool" naming approach (note the space). This is similar to what git does, so it won't take a lot of adjustment for most developers. The actual tools are now hiding in /usr/libexec/libinput. This gives us a lot more flexibility in writing testing and debugging tools and shipping them to users without cluttering up the default PATH.
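The git-style dispatch described above can be pictured as a small wrapper that maps the subcommand onto a helper executable kept outside $PATH. This is only an illustration: libinput's actual tool is written in C, and the exact helper naming inside /usr/libexec/libinput is an assumption here:

```python
import os

LIBEXEC = "/usr/libexec/libinput"   # where the helpers hide (per the article)

def resolve(argv):
    """Map ["libinput", "debug-events", ...] to the helper executable's
    path plus the remaining arguments (hypothetical naming scheme)."""
    if len(argv) < 2:
        raise SystemExit("usage: libinput <tool> [args...]")
    helper = os.path.join(LIBEXEC, "libinput-" + argv[1])
    return helper, argv[2:]

print(resolve(["libinput", "debug-events", "--verbose"]))
```

A real wrapper would then exec the helper; keeping the helpers out of $PATH is what lets new debugging tools ship without polluting tab completion.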

There are two potential breakages here, one is that the two existing tools libinput-debug-events and libinput-list-devices have been renamed too. We currently ship compatibility wrappers for those but expect those wrappers to go away with future releases. The second breakage is of lesser impact: typing "man libinput" used to bring up the man page for the xf86-input-libinput driver. Now it brings up the man page for the libinput tool which then points to the man pages of the various features. That's probably a good thing, it puts the documentation a bit closer to the user. For the driver, you now have to type "man 4 libinput" though.

Valgrind 3.13.0 for Fedora and CentOS

Posted by Mark J. Wielaard on June 18, 2017 07:33 PM

Valgrind 3.13.0 adds support for larger processes and programs, solidifies and improves support on existing platforms, and provides new heap-use reporting facilities. There are, as ever, many smaller refinements and bug fixes. See the release notes for details.

There are binaries for Fedora 26 (beta) for aarch64, armv7hl, i686, ppc64, ppc64le, x86_64. And Copr builds for Fedora 25 (i386, ppc64le, x86_64), CentOS 6 (i386, x86_64) and CentOS 7 (ppc64le, x86_64). I’ll keep the Copr builds up to date with any updates going into Fedora 26.

The cleanup of the French-language documentation has begun!

Posted by Charles-Antoine Couret on June 18, 2017 05:26 PM

Three months ago, I announced the plan to clean up Fedora's French-language documentation to take into account the distribution's evolution since 2011-2012.

The work has begun: we have inventoried the tasks to be done in a dedicated wiki page in order to get things moving again, and the first contributions are arriving, which is good. :-)

To encourage contributions, we have set up a weekly meeting every Monday at 9 p.m. (Paris time), right after the weekly Fedora-fr meeting. It takes place on the #fedora-doc-fr IRC channel on the FreeNode server.

During the evening we try to coordinate what needs to be done: proofreading, writing, correcting, etc. We started last week and it went well. The work should take a few months to reach an acceptable state in terms of updates.

You can follow the effort on the mailing list dedicated to documentation, whose archives are available. I post a report there every week.

If you would like to give us a hand, we invite you to read the page on how to contribute. We are also available if you have questions or need specific help.

[Fedora 26] Light-Locker: unlocking sessions does not work

Posted by Fedora-Blog.de on June 18, 2017 11:55 AM

At least on Fedora 26, unlocking an automatically locked session with Light-Locker does not appear to work and ends in a black screen (bug report).

To work around the problem, it apparently suffices to disable automatic session locking in Xfce's power manager, under the "Security" tab, and to lock the session manually when needed.

Thought leaders aren't leaders

Posted by Josh Bressers on June 18, 2017 02:15 AM
For the last few weeks I've seen news stories and much lamenting on Twitter about the security skills shortage. Some say there is no shortage; some say it's horrible beyond belief. Basically there's someone arguing every possible side of this. I'm not going to debate whether there is a worker shortage; that's not really the point. A lot of the complaining was done by people who would call themselves leaders in the security universe. I then read the article below and changed my thinking a bit.

Our problem isn't a staff shortage. Our problem is we don't have any actual leaders. I mean people who aren't just "in charge". Real leaders aren't just in charge; they help their people grow in a way that accomplishes their vision. Virtually everyone in the security space has spent their entire career working alone to learn new things. We are not an industry known for working together, and the thing I'd never really thought about before is that if we never work together, we never really care about anyone or anything (except ourselves). The security people who are in charge of other security people aren't motivating anyone, which by definition means they're not accomplishing any sort of vision. This holds true for most organizations, since barely keeping the train on the track is pretty much the best case scenario.

If I had to guess, the existing HR people look at most security groups and see the same dumpster fire we see when we look at IoT.

In the industry today virtually everyone who is seen as some sort of security leader is what a marketing person would call a "thought leader". Thought leaders aren't leaders. Some do have talent. Some had talent. And some just own a really nice suit. It doesn't matter though. What we end up with is a situation where the only thing anyone worries about is how many Twitter followers they have, instead of making a real difference. You make a real difference when you coach and motivate someone else to do great things.

Being a leader with loyal employees would be a monumental step for most organizations. We have no idea who to hire or how to teach them because the leaders don't know how to do those things. Those are skills real leaders have, and that real leaders develop in their people. I suspect the HR department knows what's wrong with the security groups. They also know we won't listen to them.

There is a security talent shortage, but it's a shortage of leadership talent.

This is a test

Posted by Joe Pesco on June 17, 2017 06:24 AM

Kindly forgive this intrusion.



Measuring success - l10n/language

Posted by Jean-Baptiste Holcroft on June 16, 2017 10:00 PM

As I invite each of us to use our native language when blogging, here is my first English post, a very late answer to Brian's Fedora Magazine article: Measuring Success.

There are many aspects of a distribution we can measure; we can measure achievement of objectives for particular kinds of targets (main Fedora products, spins and specific builds), but here I would like to look at something else: language support. Like packaging, it impacts every aspect of Fedora, but unlike packaging, it is something we can't easily handle on our own (packaging is part of the "distribution world", while language support is part of the "upstream world"). Maybe, as a consequence, we don't have any tools to monitor or manage it.

Please note I use "language support" as a whole, including both i18n and l10n, as they are bound together. You can't translate software that isn't internationalized, and there isn't much interest in internationalizing it if you don't translate it. Also, Fedora sometimes uses g11n, which is a meta group for i18n, l10n, language testing and tooling.

Here are my assumptions:

  • For easier adoption and consumption, software needs to be translated and to have quality resources in local languages.
  • However, translators are undervalued, ill-equipped and insufficiently structured, which tends to make their efforts less effective.
  • How can we help translator communities be efficient?
  • By setting up management tools suited to language support, by facilitating contribution, and by equipping communities to give them the autonomy they need.

For easier adoption and consumption, software needs to be translated and to have quality resources in local languages.

Why? In its Internet Health Report, the Mozilla Foundation and a number of researchers estimate that 52% of online resources are in English, while only 25% of the planet’s inhabitants understand it. An even smaller proportion can use it as a working language and be effective with it.

How? By using open standards, clear tools and processes, and local communities who translate the software and evangelize it.

Where? Emblematic free and open-source software projects are translated, from the Firefox browser to the LibreOffice suite, the GNOME desktop, and the VLC player; all of these tools use the same techniques and practices to reach an advanced level of localization and of exclusive local content.

However, translators are undervalued, ill-equipped, and insufficiently organized, which tends to make their efforts less effective than they could be.

Even when they care about it, FLOSS projects do not know how to interact with language communities.

Project managers are often fine experts, with a high level of education, and fluent in English.

Comfortable with English speaking communities, they tend to shy away from localization issues.

Focused on product delivery, they are often unaware of the impact their technical choices have on localization.

Open-source tools are structured by project, from development to inclusion in distributions, in containers, and now in Flatpak. Nevertheless, users such as translators work across contexts, consuming translations coming from dozens of projects.

Improving the quality of language support therefore requires improving various pieces of software and strengthening practices.

At most 15% of (GNOME) software descriptions are translated into French, and perhaps only 10% of the software itself, even though French has a significant community (the 2nd-largest non-English Wikipedia community by number of active contributors).

How can we help translator communities to be effective?

  • By setting up management tools suited to language support, by facilitating contribution, and by giving communities the tooling they need to be autonomous.
  • By measuring the actual status of language support in Fedora and leading/supporting improvement language by language (translation rate, trend over time, team activity).
  • By quantifying the actual need for language support in Fedora, language by language: statistics on the audiences of public websites and of local communities.
  • By bolstering existing efforts with quality tools for language support (Transvision, Pology, Dennis, LanguageTool).

What sources can we get this information from?

  • management tools suited to language support:
    • websites can give an idea of language demand (from browser settings)
    • our translation platform should give us per-month activity levels
    • the documentation toolchain should allow translation and be reusable by any project, so projects can ask for l10n support
  • actual status of language support:
    • taskotron could be used to extract current translations and run tests on them:
      • content in packages vs. content in the translation platform (see the Transdiff change proposal)
      • feeding a global translation-memory data warehouse
    • appstream has tools to extract the translation level of software (gnome-software uses this data)
  • quality tools for language support:
    • provide easy-to-use quality tools:
      • using per-project tools like pology, LanguageTool, dennis...
      • using distribution-wide consistency checks (same word, different translations; using WordNet and equivalents)
    • package delivery:
      • we could try to fill the gaps with automatic machine translation (for some parts of Fedora)
      • localized documentation should be included in packages
      • we should be able to translate downstream and push content back to projects (translation patches), at least for AppData files.
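To illustrate the browser-settings signal mentioned above, here is a minimal sketch (a hypothetical helper, not an existing Fedora tool) that parses an HTTP Accept-Language header into ranked language preferences, the kind of data a website could aggregate to estimate language demand:

```python
def parse_accept_language(header):
    """Parse an Accept-Language header such as 'fr-FR,fr;q=0.8,en;q=0.5'
    into (language, quality) pairs sorted by descending quality."""
    prefs = []
    for part in header.split(","):
        part = part.strip()
        if not part:
            continue
        if ";q=" in part:
            lang, q = part.split(";q=", 1)
            try:
                quality = float(q)
            except ValueError:
                quality = 0.0  # malformed quality value: rank last
        else:
            lang, quality = part, 1.0  # no q-value means quality 1.0
        prefs.append((lang.strip(), quality))
    return sorted(prefs, key=lambda p: -p[1])

print(parse_accept_language("fr-FR,fr;q=0.8,en;q=0.5"))
# [('fr-FR', 1.0), ('fr', 0.8), ('en', 0.5)]
```

Counting the top-ranked language per request over a month of traffic would give a rough, privacy-friendly estimate of which l10n teams have the largest potential audience.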


That is quite a lot of subjects to discuss at Flock! If you feel like helping, you are very welcome!
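The distribution-wide consistency check (same word, different translation) could be prototyped roughly as follows; this is a toy sketch that assumes translations have already been extracted into per-project msgid → msgstr mappings for one language (in practice they would come from gettext .po catalogs):

```python
from collections import defaultdict

def find_inconsistencies(catalogs):
    """Given {project: {msgid: msgstr}} mappings for one language,
    report msgids that different projects translate differently."""
    seen = defaultdict(dict)  # msgid -> {msgstr: [projects using it]}
    for project, catalog in catalogs.items():
        for msgid, msgstr in catalog.items():
            seen[msgid].setdefault(msgstr, []).append(project)
    # keep only msgids with more than one distinct translation
    return {msgid: variants
            for msgid, variants in seen.items()
            if len(variants) > 1}

catalogs = {
    "app-a": {"Save": "Enregistrer", "Cancel": "Annuler"},
    "app-b": {"Save": "Sauvegarder", "Cancel": "Annuler"},
}
print(find_inconsistencies(catalogs))
# {'Save': {'Enregistrer': ['app-a'], 'Sauvegarder': ['app-b']}}
```

A real implementation would also need to fold in synonym data (the WordNet idea above) so that legitimate variation is not flagged as an error.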

Switching xflock4 to light-locker

Posted by Fedora-Blog.de on June 16, 2017 09:03 PM
Please also note the remarks about the HowTos!

As we wrote some time ago, xflock4 can also be used with a screen locker other than xscreensaver.

To use light-locker, run the following command in a terminal:

xfconf-query -c xfce4-session -p /general/LockCommand -s "dm-tool lock" --create -t string

From then on, the screen will be locked and unlocked using light-locker.

Report for Day 0 of LinuxCon Beijing 2017

Posted by Zamir SUN on June 16, 2017 04:13 PM
Actually it is not the day before the event – it is the Friday before LinuxCon. Today Red Hat Beijing held a half-day public event named ‘Educational Day’. At this event, communities sponsored by Red Hat introduced themselves, and Fedora was of course one of them. Among the speakers presenting were Brian Exelbierd [...]