Fedora People

Flock 2017 – I’m waiting for you, Cape Cod!

Posted by Robert Mayr on August 19, 2017 07:55 PM

I am very happy I was able to organize my family and holidays so I can attend Flock again. This will be my third edition, after 2013 and 2015, where I had a great experience and made a lot of friends, so I am sure this year will be even better ;)
The flight itself will already be very nice, because this year I will travel with Gabriele Trombini (mailga) and a Flock newcomer, Andrea Masala (veon). Cape Cod is a really nice venue, and although I will be very busy during the conference, I hope we will have a couple of hours for some sightseeing.
I will be co-speaker in a session I have normally given over the last few years, but I am happy Andrea will handle it for me this year. He helped out a lot during the last two releases, and I hope he will do even more in the near future. Our workshop will be rather interesting, because we will get our hands on real tickets, look at how to fix them, and also answer questions about how we handle, develop, and debug the websites we manage.
My talk, given with Gabriele, is about the Mindshare initiative, a Council objective for 2017, which aims to retool outreach teams. As you can probably already tell, this will affect not only ambassadors, but all outreach teams in the Fedora world. If you are interested in knowing more, or want to give feedback on the plans we have, then come to my talk; I will be happy to continue the discussion even after the talk, maybe over a cold beer :D
I will also be directly involved in other sessions, for example the Council session, and I will attend the Ambassadors workshop session as well, not only because it is directly related to the Mindshare talk, but because as the current FAmSCo chair I am very interested in it.

See you all there, and thanks to Fedora for making this possible.

KDE PIM in Randa 2017

Posted by Daniel Vrátil on August 19, 2017 01:06 PM

Randa Meetings is an annual meeting of KDE developers in a small village in the Swiss Alps. The Randa Meetings is the most productive event I have ever attended (since there’s not much else to do but hack from morning until night and eat Mario’s chocolate :-)) and it’s very focused – this year’s main topic is making KDE more accessible.

Several KDE PIM developers will be present as well – and while we will certainly want to hear others’ input regarding the accessibility of Kontact, our main goal in Randa will be to port away from KDateTime (the KDE4 way of handling date and time in software) to QDateTime (the Qt way). This does not sound very interesting, but it’s a very important step for us, as afterwards we will finally be free of all legacy KDE4 code. It is no simple task, but we are confident we can finish the port during the hackfest. If everything goes smoothly, we might even have time for some more cool improvements and fixes in Kontact ;-)

I will also close the KMail User Survey right before the Randa meetings so that we can go over the results and analyze them. So, if you haven’t answered the KMail User Survey yet, please do so now and help spread the word! There are still 3 more weeks left to collect as many answers as possible. After Randa, I will be posting a series of blog posts about the results of the survey.

And finally, please support the Randa Meetings by contributing to our fundraiser – the hackfest can only happen thanks to your support!

Konqi can't wait to go to Randa again!

You can read reports from my previous adventures in Randa Meetings in 2014 and 2015 here:

“Hacking my way through Randa” (2014): http://www.dvratil.cz/2014/08/hacking-my-way-through-randa/

“KDE PIM in Randa” (2015): http://www.dvratil.cz/2015/08/kde-pim-in-randa/

Post-GUADEC distractions

Posted by Matthias Clasen on August 18, 2017 09:25 PM

Like everybody else, I had a great time at GUADEC this year.

One of the things that made me happy is that I could convince Behdad to come, and we had a chance to finally wrap up a story that has been going on for much too long: Support for color Emoji in the GTK+ stack and in GNOME.

Behdad has been involved in the standardization process around the various formats for color glyphs in fonts since the very beginning. In 2013, he posted some prototype work for color glyph support in cairo.

This was clearly not meant for inclusion; he was looking for assistance turning it into a mergeable patch. Unfortunately, nobody picked it up until I gave it a try in 2016. But my patch was not quite right, and things stalled again.

We finally picked it up this year. I produced a better cairo patch, which we reviewed, fixed and merged during the unconference days at GUADEC. Behdad also wrote and merged the necessary changes for fontconfig, so we can have an “emoji” font family, and made pango automatically choose that font when it finds Emoji.

After GUADEC, I worked on the input side in GTK+. As a first result, it is now possible to use Control-Shift-e to select Emoji by name or code.

Video: https://blogs.gnome.org/mclasen/files/2017/08/c-s-e.webm

This is a bit of an easter egg though, and only covers a few Emoji like ❤. The full list of supported names is here.

A more prominent way to enter Emoji is clearly needed, so I set out to implement the design we have for an Emoji chooser. The result looks like this:

As you can see, it supports variation selectors for skin tones, and lets you search by name. The clickable icon has to be enabled with a show-emoji-icon property on GtkEntry, but there is a context menu item that brings up the Emoji chooser, regardless.

I am reasonably happy with it, and it will be available both in GTK+ 3.92 and in GTK+ 3.22.19. We are bending the api stability rules a little bit here, to allow the new property for enabling the icon.

Working on this dialog gave me plenty of opportunity to play with Emoji in GTK+ entries, and it became apparent that some things were not quite right. Sometimes, some Emoji just did not appear. This took me quite a while to debug, since I was hunting for a rendering issue, when in the end it turned out to be insufficient support for variation selectors in pango.

Another issue that turned up was that pango sometimes placed the text caret in the middle of Emoji, and Backspace deleted them piecemeal, one character at a time, instead of all at once. This required fixes in pango’s implementation of the Unicode segmentation rules (TR29). Thankfully, Peng Wu had already done much of the work for this; I just fixed the remaining corner cases to handle all Emoji correctly, including skin tone variations and flags.
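As a plain-Python illustration (nothing pango-specific) of why segmentation needs special care: a single visible emoji can be several Unicode code points, so the caret and Backspace have to treat the whole sequence as one user-perceived character.

```python
# One visible emoji can be several Unicode code points. Per the
# UAX #29 (TR29) grapheme rules, a base emoji plus a skin tone
# modifier is a single user-perceived character, so Backspace
# should delete both code points at once.
thumbs_up = "\U0001F44D"        # 👍 base emoji
skin_tone = "\U0001F3FD"        # Fitzpatrick type-4 skin tone modifier
glyph = thumbs_up + skin_tone   # renders as a single emoji: 👍🏽

print(len(glyph))                    # 2 -- two code points, one glyph
print([hex(ord(c)) for c in glyph])  # ['0x1f44d', '0x1f3fd']
```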

So, what’s still missing? I’m thinking of adding optional support for completion of Emoji names like :grin: directly in the entry, like this:

Video: https://blogs.gnome.org/mclasen/files/2017/08/emoji-completion.webm

But this code still needs some refinement before it is ready to land. It also overlaps a bit with traditional input method functionality, and I am still pondering the best way to resolve that.

To try out color Emoji, you can either wait for GNOME 3.26, which will be released in September, or you can get:

  • cairo from git master
  • fontconfig from git master
  • pango 1.40.9 or .10
  • GTK+ from the gtk-3-22 branch
  • a suitable Emoji font, such as EmojiOne or Noto Color Emoji

It was fun to work on this, I hope you enjoy using it! ❤

New badge: FrOSCon 2017 Attendee !

Posted by Fedora Badges on August 18, 2017 07:55 PM
FrOSCon 2017 Attendee: You visited the Fedora booth at FrOSCon 2017!

Shipping PKCS7 signed metadata and firmware

Posted by Richard Hughes on August 18, 2017 04:28 PM

Over the last few days I’ve merged in the PKCS7 support into fwupd as an optional feature. I’ve done this for a few reasons:

  • Some distributors of fwupd were disabling the GPG code as it’s GPLv3, and I didn’t feel comfortable saying “just use no signatures”.
  • Trusted vendors want to ship testing versions of firmware directly to users without first uploading to the LVFS.
  • Some firmware is inherently internal use only and needs to be signed using existing cryptographic hardware.
  • The gpgme code scares me.

Did you know GPGME is a library based around screen scraping the output of the gpg2 binary? When you perform an action using the libgpgme APIs you’re literally injecting a string into a pipe and waiting for it to return. You can’t even use libgcrypt (the thing that gpg2 uses) directly as it’s way too low level and doesn’t have any sane abstractions or helpers to read or write packaged data. I don’t want to learn LISP S-Expressions (yes, really) and manually deal with packing data just to do vanilla X509 crypto.

Although the LVFS instance only signs files and metadata with GPG at the moment, I’ve added the missing bits into python-gnutls so it could become possible in the future. If this is accepted then I think it would be fine to support both GPG and PKCS7 on the server.

One of the temptations for X509 signing would be to get a certificate from an existing CA and then sign the firmware with that. From my point of view that would be bad, as any firmware signed by any certificate in my system trust store would be marked as valid, when really all I want to do is check for a specific certificate (or a few) that I know will be providing certified working firmware. Although I could achieve this to some degree with certificate pinning, it’s not so easy if there is a hierarchical trust relationship or anything more complicated than a simple 1:1 relationship.

So that this is possible, I’ve created an LVFS CA certificate, and also a server certificate for the specific instance I’m running on OpenShift. I’ve signed the instance certificate with the CA certificate and am creating detached signatures with an embedded (signed-by-the-CA) server certificate. This seems to work well, and means we can issue other certificates (or CRLs) if the server ever moves or the trust is compromised in some way.
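For readers who want to poke at the general idea, here is a rough sketch of a detached PKCS7 signature verified against one pinned certificate rather than the whole system trust store. Note this uses the openssl CLI purely for illustration (fwupd itself uses GnuTLS), and every file name here is made up:

```shell
# Create a throwaway firmware blob, signing key and self-signed
# certificate (a stand-in for a vendor CA; names are hypothetical).
echo "example firmware" > firmware.bin
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
    -subj "/CN=Example Firmware CA" \
    -keyout key.pem -out cert.pem

# Create a detached PKCS7 (DER) signature over the firmware blob.
openssl smime -sign -binary -in firmware.bin \
    -signer cert.pem -inkey key.pem \
    -outform DER -out firmware.bin.p7s

# Verify against ONLY the pinned certificate, not the system store.
openssl smime -verify -binary -inform DER -in firmware.bin.p7s \
    -content firmware.bin -CAfile cert.pem -purpose any -out /dev/null
```

The key point is the verify step: trust is anchored at one known certificate passed via -CAfile, which is the 1:1 pinning relationship described above.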

So, tl;dr: (should have been at the top of this page…) if you see a /etc/pki/fwupd/LVFS-CA.pem appear on your system in the next release you can relax. Comments, especially from crypto experts welcome. Thanks!

Bodhi 2.10.0 released

Posted by Bodhi on August 18, 2017 02:49 PM

Compatibility changes

This release of Bodhi has a few changes that are technically backward incompatible in some sense, but it was determined that each of these changes is justified without raising Bodhi’s major version, often because the features in question did not work at all or were unused. Justifications for each are given inline.

  • dnf and iniparse are now required dependencies for the Python bindings. Justification: Technically, these were needed before for some of the functionality, and the bindings would traceback if that functionality was used without these dependencies being present. With this change, the module will fail to import without them, and they are now formal dependencies.
  • Support for EL 5 has been removed in this release. Justification: EL 5 has become end of life.
  • The pkgtags feature has been removed. Justification: It did not work correctly and enabling it was devastating (#1634).
  • Some bindings code that could log into Koji with TLS certificates was removed. Justification: It was unused (b4474676).
  • Bodhi’s short-lived ci_gating feature has been removed, in favor of the new Greenwave integration feature. Thus, the ci.required and ci.url settings no longer function in Bodhi. The bodhi-babysit-ci utility has also been removed. Justification: The feature was never completed and thus no functionality is lost (#1733).

Features

  • There are new search endpoints in the REST API that perform ilike queries to support case insensitive searching. Bodhi’s web interface now uses these endpoints (#997).
  • It is now possible to search by update alias in the web interface (#1258).
  • Exact matches are now sorted first in search results (#692).
  • The CLI now has a --mine flag when searching for updates or overrides (#811, #1382).
  • The CLI now has more search parameters when querying overrides (#1679).
  • The new case insensitive search is also used when hitting enter in the search box in the web UI (#870).
  • Bodhi is now able to query Pagure for FAS groups for ACL info (f9414601).
  • The Python bindings’ candidates() method now automatically initializes the username (6e8679b6).
  • CLI errors are now printed in red text (431b9078).
  • The graphs on the metrics page now have mouse hovers to indicate numerical values (#209).
  • Bodhi now has support for using Greenwave to gate updates based on test results. See the new test_gating.required, test_gating.url, and greenwave_api_url settings in production.ini for details on how to enable it. Note also that this feature introduces a new server CLI tool, bodhi-check-policies, which is intended to be run via cron on a regular interval. This CLI tool communicates with Greenwave to determine if updates are passing required tests or not (#1733).

Bug fixes

  • The autokarma check box’s value now persists when editing updates (#1692, #1482, and #1308).
  • The CLI now catches a variety of Exceptions and prints user readable errors instead of tracebacks (#1126, #1626).
  • The Python bindings’ get_releases() method now uses a GET request (#784).
  • The HTML sanitization code has been refactored, which fixed a couple of issues where Bodhi didn’t correctly escape things like e-mail addresses (#1656, #1721).
  • The bindings’ docstring for the comment() method was corrected to state that the email parameter is used to make anonymous comments, rather than to enable or disable sending of e-mails (#289).
  • The web interface now links directly to libravatar’s login page instead of POSTing to it (#1674).
  • The new/edit update form in the web interface now works with the new typeahead library (#1731).

Development improvements

  • Several more modules have been documented with PEP-257 compliant docblocks.
  • Several new tests have been added to cover various portions of the code base, and Bodhi now has
    89% line test coverage. The goal is to reach 100% line coverage within the next 12 months, and
    then begin to work towards 100% branch coverage.

Release contributors

The following developers contributed to Bodhi 2.10.0:

  • Ryan Lerch
  • Matt Jia
  • Matt Prahl
  • Jeremy Cline
  • Ralph Bean
  • Caleigh Runge-Hottman
  • Randy Barlow

F26-20170815 Updated ISOs released

Posted by Ben Williams on August 18, 2017 02:43 PM

We, the Fedora Respins-SIG, are happy to announce new F26-20170815 updated lives (with kernel 4.12.5-300).
This is the first set of updated ISOs for Fedora 26.

With this release we include F26-MD-20170815, a multi-desktop ISO in support of FOSSCON (a free and open source software conference held annually in Philadelphia, PA).

With F26 we are still using Livemedia-creator to build the updated lives.

To build your own, please look at https://fedoraproject.org/wiki/Livemedia-creator-_How_to_create_and_use_a_Live_CD

Using this new build of the F26 updated lives will save you about 600 MB of updates after install.

As always the isos can be found at http://tinyurl.com/Live-respins2

Report for COSCUP 2017

Posted by Tong Hui on August 18, 2017 10:02 AM

Earlier this month, as a GNOME Foundation member, I participated in the 12th COSCUP (Conference for Open Source Coder, User & Promoter). From 2006 to 2017, COSCUP has made a significant contribution to promoting free and open source software in Taiwan. This dozen years of FOSS promotion has helped Taiwan’s contributor base grow faster than that of any other Asian country, so I wanted to learn what makes the Taiwanese FOSS scene so successful, and also to advocate for GNOME at this conference.

Thousands of participants joined COSCUP 2017, which featured more than 80 talks and workshops by hundreds of free and open source community contributors and promoters.

As a GNOME Foundation member, together with Bin Li, I had the task of promoting GNOME and collaborating with the local free desktop community at this COSCUP.

I also gave a short talk together with Mandy Wang at this COSCUP. We talked about how I recruited my girlfriend into FOSS and ‘trained’ her to become a GNOME contributor.

<figure class="wp-caption aligncenter" id="attachment_1329" style="width: 840px"><figcaption class="wp-caption-text">My talk with Mandy Wang (Photo by Vagabond, CC BY-SA 2.0)</figcaption></figure>

China-Taiwan contributors Meet-up

At the BoF (Birds of a Feather) session at this COSCUP, Mandy and I, from mainland China, got together with Franklin Weng (KDE-TW) and with zerng07 and freedomknight from Taiwan, who work mostly on the localization of GNOME and KDE. We had a local free desktop meet-up that night.

First we reviewed what we had done in past years and shared the difficulties we had met and how we solved them. Then we discussed what we should and need to do to promote the free desktop in China and Taiwan.

Chatting with the Taiwanese contributors, I learned from a lot of their experience, which will help us do more than before.

<figure class="wp-caption aligncenter" id="attachment_1328" style="width: 840px"><figcaption class="wp-caption-text">With some staff of COSCUP 2017 (Photo by Vagabond CC BY-SA 2.0)</figcaption></figure>

Finally, thanks to the hundreds of volunteers working at COSCUP for making this event wonderful and awesome!

Fedora Design Interns 2017

Posted by Maria Leonova on August 18, 2017 09:02 AM

Here’s an update on internships; an older post is linked here. Quick recap: there have been two long-term interns on the Fedora design team since February, plus one short-term intern who came for two weeks at the beginning of June. They have been doing an amazing job, and I can’t stress enough how happy I am to have them around.

So let me give you a short overview of their work:

Martin Modry


Martin created some lovely designs before he moved on to pursue other endeavors in life 😉 Here are some examples of his work:



He created several designs for L10N roles; his work is now continued by Mary in this ticket. He showed a true understanding of the design issues and worked directly with the ticket creators.


Martin Petr

Martin Petr worked with us for two weeks, six hours a day, which allowed him to tackle many projects for Fedora Design and different teams at Red Hat. As always, we started off with badges work, soon moving on to other design issues.



He created really cool icons for the Lightning Talks group; they chose the red one in the top row for their page. It works best when resized smaller, and it incorporates references to lightning as well as a neat design solution.

He also helped create the Fedora Release Party poster, which has been widely used; for example, see here. Martin worked on a Fedora Telegram theme, and even started to mock up updated graphics for this year’s devconf.cz site. Martin has an eye for the latest trends in design and is super creative.

Many other people and I are looking forward to him coming back and staying with us for two more weeks at the end of September!

Tereza Hlavackova

Terka has been around the longest – since the end of February – and is going strong! She’s done an impressive amount of work and I really love her designs. She’s a great help with badges, as well as with some other artwork issues.



Some of her designs include the FAF, podcast, and Fedora Diversity icons. She’s done a great job working with requestors and going through design iterations. Terka’s been away for some time, and I’m looking forward to her coming back, too!

Conclusions and future projects

Altogether I find the internship program extremely helpful for myself, for the Fedora design team, and for some Red Hat teams as well. Both Martins and Terka are great designers, and I hope that they, in their turn, benefit from working in a professional environment, using open source products, and communicating with real customers. Not every design issue can be solved easily; some require discussions and iterations, and these interns have been handling them beautifully.

Installing Ring in Fedora 26

Posted by Fedora Magazine on August 18, 2017 08:00 AM

Many communication platforms promise to link people together by video, voice, and data. But almost none of them promise or respect user privacy and freedom to a useful extent.

Ring is a universal communication system for any platform. But it is also a fully distributed system that protects users’ confidentiality. One protective feature is that it doesn’t store users’ personal data in a centralized location. Instead, it decentralizes this data through a combination of OpenDHT and Ethereum blockchain technology. In addition to being distributed, it has other unique features for communication:

  • Cross platform (works on Linux, Windows, MacOS, and Android)
  • Uses only free and open source software
  • Uses standard security protocols and end-to-end encryption
  • Works with desktop applications (like GNOME Contacts)

In July the Savoir-faire Linux team released the stable 1.0 version of Ring. Although it isn’t included in Fedora due to some of its requirements, the Savoir-faire Linux team graciously provides a package for the Fedora community.

How to install Ring

To install, open a terminal and run the following commands:

sudo dnf config-manager --add-repo https://dl.ring.cx/ring-nightly/fedora_26/ring-nightly.repo
sudo dnf install ring

If you’re using an older version of Fedora, or an entirely different platform, check out the download page.

How to set up a RingID

Now that it’s installed, you’re ready to create an account (or link a pre-existing one). The RingID allows other users to locate and contact you while still protecting your privacy. To create one:

  1. First, click on Create Ring Account.
  2. Next, add the required information.
  3. Finally, click Next.

The tutorial page offers more information on setting up this useful app. For example, you can learn how to secure your account and add devices, all of which will notify you on a call.


All systems go

Posted by Fedora Infrastructure Status on August 18, 2017 05:37 AM
Service 'Fedora Wiki' now has status: good: Everything seems to be working.

Minor service disruption

Posted by Fedora Infrastructure Status on August 18, 2017 05:30 AM
Service 'Fedora Wiki' now has status: minor: Recovering database server connectivity issues, expect some slowness

Major service disruption

Posted by Fedora Infrastructure Status on August 18, 2017 05:23 AM
Service 'Fedora Wiki' now has status: major: Looking into database server connectivity issues

Light - when xbacklight doesn't work

Posted by Jakub Kadlčík on August 18, 2017 12:00 AM

Do you have any issues with controlling backlight on your laptop? Try light!

I’ve recently upgraded my laptop from F24 to F26, checked out the new GNOME features, killed it, and switched to Qtile like I always do. Everything worked, so I moved on to other things. Later that day I put my laptop on my nightstand and went to bed. After a while of scrolling through Facebook I decided to sleep, repeatedly pressed the function key to turn the backlight off, but nothing happened. WTF? Maybe I hadn’t committed my key bindings with xbacklight, so they got lost during the reinstall? Nah, they are here. Well, maybe I can just restart the Qtile session. Nah, still doesn’t work… It took only a little while for me to… get out of bed, take my laptop and, while cursing, sit back down at the desk.

Long story short, I figured out that xbacklight was the problem.

[jkadlcik@chromie ~]$ xbacklight
No outputs have backlight property

I’d never encountered this error before, so I googled it. From the results you might learn that it is completely normal and you just need to symlink something with a cryptic name in /sys/devices and add some lines to /etc/X11/xorg.conf. Eh, I don’t want to do that. Besides, I haven’t had a xorg.conf for like half a decade. You can also find an open bug report from 2016, so waiting for a fix might take a while.

Then I finally found a blog post describing a solution that I liked most. It suggests using a handy little tool called light as an xbacklight alternative. It worked like magic!


The only problem was that light had not been packaged for Fedora yet. Since I was so happy with the tool, I decided to do my part and package it. Now you can easily install it from Copr:

dnf copr enable frostyx/light
dnf install light

There is also a pending package review so you might be able to install it directly from Fedora repositories soon.


# Increasing brightness
xbacklight -inc 10
light -A 10

# Decreasing brightness
xbacklight -dec 10
light -U 10

my solution to zeno's paradox

Posted by Frank Ch. Eigler on August 17, 2017 10:01 PM

You've probably heard of Zeno's Paradox - the famous one about Achilles and the tortoise. It's a 2000+ year old puzzle about the nature of infinity. An equivalent formulation is roughly this:

  • Imagine someone running from point A to Z. At some time t, the person will be half way between A and Z, let's call it B.
  • The person will run from point B to Z. After time t/2, the person will be half way between B and Z, let's call it C.
  • The person will run from point C to Z. After time t/4, the person will be half way between C and Z, let's call it D.
  • One can continue this pattern of subdivision infinitely.
  • Therefore, the person will never reach Z.

It's hard to believe that this little puzzle was taken too seriously by those clever Greeks. Formally modeling it in math is easy - arithmetic of infinite convergent series is taught in high schools, so it's clear that at time 2t, the runner will reach Z. But the infinity is bothersome enough that even 2000 years later we take the problem seriously. Some even bring up silly stuff like quantum mechanics and uncertainty principles to try to work around it.

But I came across another way to approach the problem - to sever the Gordian Knot, so to speak. That is to recognize an implication of the basic fact that argumentation about a situation is not the same thing as the situation itself.

In this case, the argumentation can indeed go on infinitely, as one talks about shorter and shorter distances & time intervals. But the error in logic is the last step of the list above. The "therefore" doesn't hold, because the only thing that's infinite is all this argumentation. The situation is quite simple and evolves independently of how a goofy observer might want to talk about it - or to imagine breaking it up.

In other words, just because someone chooses a degenerate, infinite, useless way to talk about a situation, the situation itself can be perfectly finite, reasonable, intuitive. There is no paradox.

In other words, the map (argumentation) is not the same thing as the territory (subject of the argument).

GUADEC 2017 Notes

Posted by Petr Kovar on August 17, 2017 05:08 PM

With GUADEC 2017 and the unconference days over, I wanted to share a few conference and post-conference notes with a broader audience.

First of all, as others have reported, at this year’s GUADEC, it was great to see an actual increase in numbers of attendees compared to previous years. This shows us that 20 years later, the community as a whole is still healthy and doing well.

<figure class="wp-caption aligncenter" id="attachment_405" style="width: 660px"><figcaption class="wp-caption-text">At the conference venue.</figcaption></figure>

While the Manchester weather was quite challenging, the conference was well-organized and I believe we all had a lot of fun both at the conference venue and at social events, especially at the awesome GNOME 20th Birthday Party. Kudos to all who made this happen!

<figure class="wp-caption aligncenter" id="attachment_406" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>

As I reported at the GNOME Foundation AGM, the docs team has been slightly more quiet recently than in the past and we would like to reverse this trend going forward.

<figure class="wp-caption aligncenter" id="attachment_411" style="width: 660px"><figcaption class="wp-caption-text">At the GNOME 20th Birthday Party.</figcaption></figure>
  • We held a shared docs and translation session for newcomers and regulars alike on the first two days of the post-GUADEC unconference. I was happy to see new faces showing up as well as having a chance to work a bit with long-time contributors. Special thanks go to Kat for managing the docs-feedback mailing list queue, and to Andre for a much-needed docs bug triage.

    <figure class="wp-caption aligncenter" id="attachment_413" style="width: 660px"><figcaption class="wp-caption-text">Busy working on docs and translations at the unconference venue.</figcaption></figure>

  • Shaun worked on a new publishing system for help.gnome.org that could replace the current library-web scripts requiring release tarballs to get the content updated. The new platform would be a Pintail-based website with (almost) live content updates.
  • Localization-wise, there was some discussion around language packs, L10n data installation, and initial-setup, spearheaded by Jens Petersen. While in gnome-getting-started-docs we continue to replace size-heavy tutorial video files with lightweight SVG files, there is still a lot of other locale data that we should aim to install on the user’s machine automatically once we know the user’s locale preference, which is not what the user experience looks like today. Support for that is something that I believe will require more input from PackageKit folks as well as from downstream installer developers.
  • The docs team also announced a change of leadership, with Kat passing the team leadership to me at GUADEC.
  • In other news, I announced a docs string freeze pilot that we plan to run post-GNOME 3.26.0 to allow translators more time to complete user docs translations. Details were posted to the gnome-doc-list and gnome-i18n mailing list. Depending on the community feedback we receive, we may run the program again in the next development cycle.
  • The docs team also had to cancel the planned Open Help Conference Docs Sprint due to most core members being unavailable around that time. We’ll try to find a better time for a docs team meetup some time later this year or early 2018. Let me know if you want to attend, the docs sprints are open to everybody interested in GNOME documentation, upstream or downstream.
<figure class="wp-caption aligncenter" id="attachment_412" style="width: 660px"><figcaption class="wp-caption-text">At the closing session.</figcaption></figure>

Last but not least, I’d like to say thank you to the GNOME Foundation and the Travel Committee for their continuous support, for sponsoring me again this year.

PHP version 7.0.23RC1 and 7.1.9RC1

Posted by Remi Collet on August 17, 2017 12:12 PM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, a perfect solution for such tests (x86_64 only), and also as base packages.

RPM of PHP version 7.1.9RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 26 or remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

RPM of PHP version 7.0.23RC1 are available as SCL in remi-test repository and as base packages in the remi-test repository for Fedora 25 or remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

PHP version 5.6 is now in security mode only, so no more RC will be released.

PHP version 7.2 is in development phase, version 7.2.0beta3 is also available.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.7RC1 is also available in Fedora rawhide (for QA).

RC versions are usually the same as the final version (no changes accepted after RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

5 apps to install on your Fedora Workstation

Posted by Fedora Magazine on August 17, 2017 08:00 AM

A few weeks ago, Fedora 26 was released. Every release of Fedora brings new updates and new applications into the official software repositories. Whether you were already a Fedora user and upgraded or you are a first-time user, you might be looking for some cool apps to try out on your Fedora 26 Workstation. In this article, we’ll round up five apps that you might not have known were available in Fedora.

Try out a different browser

By default, Fedora includes the Firefox web browser. But in Fedora 25, Chromium (the open source version of Chrome) was packaged into Fedora. You can learn how to install and start using Chromium below.

How to install Chromium in Fedora

<iframe class="wp-embedded-content" data-secret="tnjeTElMB9" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/install-chromium-fedora/embed/#?secret=tnjeTElMB9" title="“How to install Chromium in Fedora” — Fedora Magazine" width="600"></iframe>

Sort and categorize your music

Do you have a Fedora Workstation filled with local music files? When you open them in a music player, is the metadata missing or just outright wrong? MusicBrainz is the Wikipedia of music metadata, and you can take back control of your music by using Picard. Picard is a tool that works with the MusicBrainz database to pull in correct metadata to sort and organize your music. Learn how to get started with Picard on Fedora Workstation below.

Picard brings order to your music library

<iframe class="wp-embedded-content" data-secret="lljIa7PX0q" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/picard-brings-order-music-library/embed/#?secret=lljIa7PX0q" title="“Picard brings order to your music library” — Fedora Magazine" width="600"></iframe>

Get ready for the eclipse

August 21st is the big day for the total solar eclipse in North America. Want to get a head start by knowing the sky before it starts? You can map out the sky by using Stellarium, an open source planetarium application available in Fedora now. Learn how to install Stellarium before the skies go dark in this article.

Track the night sky with Stellarium on Fedora

<iframe class="wp-embedded-content" data-secret="YgCLgONqF0" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/stellarium-on-fedora/embed/#?secret=YgCLgONqF0" title="“Track the night sky with Stellarium on Fedora” — Fedora Magazine" width="600"></iframe>

Control your camera from Fedora

Have an old camera lying around? Or maybe you want to upgrade your webcam by using an existing camera? Entangle lets you take control of your camera, all from the comfort of your Fedora Workstation. You can even adjust aperture, shutter speed, ISO settings, and more. Check out how to get started with it in this article.

Tether a digital camera using Entangle

<iframe class="wp-embedded-content" data-secret="pUjGLOPXP7" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/tether-digital-camera-fedora/embed/#?secret=pUjGLOPXP7" title="“Tether a digital camera using Entangle” — Fedora Magazine" width="600"></iframe>

Share Fedora with a friend

One of the last things you might need to do with your Fedora Workstation is extend it! With the Fedora Media Writer, you can create a USB stick loaded with any Fedora edition or spin of your choice and share it with a friend. Learn how to start burning your own USB drives in this how-to article below.

How to make a Fedora USB stick

<iframe class="wp-embedded-content" data-secret="YKp7rYathj" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://fedoramagazine.org/make-fedora-usb-stick/embed/#?secret=YKp7rYathj" title="“How to make a Fedora USB stick” — Fedora Magazine" width="600"></iframe>

Creating heat maps using the new syslog-ng geoip2 parser

Posted by Peter Czanik on August 17, 2017 06:26 AM

The new geoip2 parser of syslog-ng 3.11 is not only faster than its predecessor, but can also provide a lot more detailed geographical information about IP addresses. Next to the usual country name and longitude/latitude information, it also provides the continent, time zone, postal code and even county name. Some of these are available in multiple languages. Learn how you can utilize this information by parsing logs from iptables using syslog-ng, storing them to Elasticsearch, and displaying the results in Kibana!

Before you begin

First of all, you need some iptables log messages. In my case, I used logs from my Turris Omnia router. You could use logs from another device running iptables. Alternatively, with a small effort, you can replace iptables with an Apache web server or any other application that saves IP addresses as part of its log message.

You will also need a syslog-ng version that has the new geoip2 parser. The new geoip2 parser was released as part of version 3.11.1.

As syslog-ng packages in Linux distributions do not include the Elasticsearch destination of syslog-ng, you either need to compile it yourself or use one of the unofficial packages, as listed at https://syslog-ng.org/3rd-party-binaries/.

Last but not least, you will also need Elasticsearch and Kibana installed. I used version 5.5.1 of the Elastic stack, but any other version should work just fine.

What is new in GeoIP

The geoip2 parser of syslog-ng uses the maxminddb library to look up geographical information. It is considerably faster than its predecessor and also provides a lot more detailed information.

As usual, the packaging of the maxminddb tools differs between Linux distributions. You need to make sure that a tool to download / update database files is installed, together with the mmdblookup tool. On most distributions you need to use the former at least once, as usually only the old type of databases is packaged. The latter application can help you list what kind of information is available in the database.

Here is a shortened example:

[root@localhost-czp ~]# mmdblookup --file /usr/share/GeoIP/GeoLite2-City.mmdb --ip

          3054643 <uint32>
              "Budapest" <utf8_string>
              "Budapest" <utf8_string>
              "Budapest" <utf8_string>
              "Budapest" <utf8_string>
              "ブダペスト" <utf8_string>
              "Budapeste" <utf8_string>
              "Будапешт" <utf8_string>
              "布达佩斯" <utf8_string>
          100 <uint16>
          47.500000 <double>
          19.083300 <double>
          "Europe/Budapest" <utf8_string>

As you can see from the above command line, I use the freely available GeoLite2-City database. Syslog-ng also supports the commercial variant, which is more precise and up-to-date.

In my configuration example below, I chose to simply store all available geographical data, but normally that is a waste of resources. You can figure out the hierarchy of names based on the JSON output of mmdblookup.
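If it is not obvious how that nested hierarchy turns into the dotted names used later in this post (for example ${geoip2.location.latitude}), the following toy Python sketch shows the idea. It is purely illustrative, not syslog-ng code, and the sample record is a made-up fragment resembling the mmdblookup output above.

```python
# Toy illustration (not syslog-ng code): how a nested GeoIP2 record maps
# to dotted name-value pairs such as "geoip2.location.latitude".
def flatten(record, prefix="geoip2."):
    """Flatten a nested dict into dotted key/value pairs."""
    pairs = {}
    for key, value in record.items():
        name = prefix + key
        if isinstance(value, dict):
            pairs.update(flatten(value, name + "."))
        else:
            pairs[name] = value
    return pairs

# A made-up fragment resembling mmdblookup's JSON output for the City database.
record = {"location": {"latitude": 47.5, "longitude": 19.0833,
                       "time_zone": "Europe/Budapest"}}
print(flatten(record)["geoip2.location.latitude"])  # 47.5
```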

Configure Elasticsearch

The installation and configuration of Elasticsearch and Kibana are beyond the scope of this blog. The only thing I want to note here is that before sending logs from syslog-ng to Elasticsearch, you have to configure mapping for geo information.

If you follow my configuration examples below, you can use the following mapping. I use “syslog-ng” as the index name.

   "mappings" : {
      "_default_" : {
         "properties" : {
            "geoip2" : {
               "properties" : {
                  "location2" : {
                     "type" : "geo_point"
                  }
               }
            }
         }
      }
   }

Configure syslog-ng

Complete these steps to get your syslog-ng ready for creating heat maps:

1. First of all, you need some logs. In my test environment I receive iptables logs from my router over a TCP connection to port 514. These are filtered on the sender side, so no other logs are included. If you do not have filtered logs, in most cases you can filter for firewall logs based on the program name.

source s_tcp {
  tcp(ip("") port("514"));
};

2. Process log messages. The first step of processing is using the key-value parser. It creates name-value pairs from the content of the message. You can store all or part of these name-value pairs in a database and search them at a field level instead of the whole message. A prefix for the name is used to make sure that the names do not overlap.

parser p_kv {kv-parser(prefix("kv.")); };

The source IP of the attacker is stored in the kv.SRC name-value pair.
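To make this step more concrete, here is a toy Python sketch of what the kv-parser does conceptually. It is illustrative only (syslog-ng implements this internally), and the sample iptables message is made up.

```python
# Toy illustration of syslog-ng's kv-parser (conceptual, not real syslog-ng
# code): split "KEY=VALUE" tokens out of an iptables-style message and add
# a prefix so the names cannot collide with other name-value pairs.
def kv_parse(message, prefix="kv."):
    pairs = {}
    for token in message.split():
        if "=" in token:
            key, _, value = token.partition("=")
            pairs[prefix + key] = value
    return pairs

msg = "IN=eth0 OUT= SRC=198.51.100.7 DST=192.0.2.1 PROTO=TCP DPT=22"
print(kv_parse(msg)["kv.SRC"])  # 198.51.100.7
```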

3. Let’s analyze the kv.SRC name-value pair further, using the geoip2 parser. As usual, we use a prefix to avoid any naming problems. Note that the location of the database might be different on your system.

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

4. The next step is necessary to ensure that location information is in the form expected by Elasticsearch. It looks slightly more complicated than for the first version of the GeoIP parser, as there is more information available and the information is now structured.

rewrite r_geoip2 {
        set(
                "${geoip2.location.latitude},${geoip2.location.longitude}",
                value( "geoip2.location2" ),
                condition(not "${geoip2.location.latitude}" == "")
        );
};

5. In the Elasticsearch destination we assume that both the cluster and index names are “syslog-ng”. We set flush-limit to a low value as we do not expect a high message rate. A low flush-limit makes sure that we see logs in Kibana in near real-time. By default, it is set to a much higher value, which is better for performance. Unfortunately, timeouts are not implemented in the Java destinations, so with the default setting and a low message rate you might need to wait an hour before anything shows up in Elasticsearch.

destination d_elastic {
 elasticsearch2 (
  client-mode("http")
  cluster("syslog-ng")
  index("syslog-ng")
  type("test")
  flush-limit("1")
  template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
 );
};

6. Finally we need a log statement which connects all of these building blocks together:

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  rewrite(r_geoip2);
  destination(d_elastic);
};
Configuration to copy & paste

To make your life easier, I have compiled these configuration snippets in one place for a better copy & paste experience. You should append them to your syslog-ng.conf or place them in a separate .conf file under /etc/syslog-ng/conf.d/ if supported by your Linux distribution.

source s_tcp {
  tcp(ip("") port("514"));
};

parser p_kv {kv-parser(prefix("kv.")); };

parser p_geoip2 { geoip2( "${kv.SRC}", prefix( "geoip2." ) database( "/usr/share/GeoIP/GeoLite2-City.mmdb" ) ); };

rewrite r_geoip2 {
        set(
                "${geoip2.location.latitude},${geoip2.location.longitude}",
                value( "geoip2.location2" ),
                condition(not "${geoip2.location.latitude}" == "")
        );
};

destination d_elastic {
 elasticsearch2 (
  client-mode("http")
  cluster("syslog-ng")
  index("syslog-ng")
  type("test")
  flush-limit("1")
  template("$(format-json --scope rfc5424 --scope nv-pairs --exclude DATE --key ISODATE)")
 );
};

log {
  source(s_tcp);
  parser(p_kv);
  parser(p_geoip2);
  rewrite(r_geoip2);
  destination(d_elastic);
};

Visualize your data

By now you have configured syslog-ng to parse iptables logs, added geographical information to them, and stored the results in Elasticsearch. The next step is to verify that logs arrive in Elasticsearch. You should see messages in Kibana where many field names start with “kv.” and “geoip2.”

Once you have verified that logs are arriving in Elasticsearch, you can start creating some visualizations. There are numerous tutorials by Elastic and others on how to do this.

You can see a world map below visualizing the IP addresses that attempt to connect to my router. You can easily create such a map just by clicking on the “geoip2.location2” field in the “Available fields” list in Kibana, and then clicking on the “Visualize” button when it appears below the field name.

<figure class="wp-caption aligncenter" id="attachment_2415" style="width: 600px">world map<figcaption class="wp-caption-text">Map of IP addresses from attempted connections.</figcaption></figure>

Even if I left out many details, this blog is now quite lengthy so I am going to point you to some further reading:

If you have questions or comments related to syslog-ng, do not hesitate to contact us. You can reach us by email or you can even chat with us. For a long list of possibilities, check our contact page at https://syslog-ng.org/contact-us/. On Twitter, I am available as @PCzanik.

The post Creating heat maps using the new syslog-ng geoip2 parser appeared first on Balabit Blog.

LxQT Test Day: 2017-08-17

Posted by Alberto Rodriguez (A.K.A bt0) on August 17, 2017 02:29 AM

Thursday, 2017-08-17, is the LxQT Test Day! As part of this planned Change for Fedora 26, we need your help to test LxQT!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Original note here:

LxQT Test Day: 2017-08-17

<iframe class="wp-embedded-content" data-secret="oCw5tobFpO" frameborder="0" height="338" marginheight="0" marginwidth="0" sandbox="allow-scripts" scrolling="no" security="restricted" src="https://communityblog.fedoraproject.org/lxqt-test-day-2017-08-17/embed/#?secret=oCw5tobFpO" title="“LxQT Test Day: 2017-08-17” — Fedora Community Blog" width="600"></iframe>


Posted by Casper on August 16, 2017 11:22 PM

For those of you with a Freebox, I found a fun little trick. If, like me, you occasionally want to run quick diagnostics of all your network components (short outages and so on), displaying the Freebox’s uptime in a terminal with a single command is definitely interesting.

You probably know the address that displays the full report:


So we can already display the full report in a terminal:

casper@falcon ~ % export FBX=http://mafreebox.free.fr/pub/fbx_info.txt
casper@falcon ~ % curl $FBX

That’s a start, but it still floods the terminal quite a bit; we can do better...

casper@falcon ~ % curl $FBX 2>/dev/null | grep "mise en route" | cut -d " " -f10,11,12,13
4 heures, 33 minutes

Well, I didn’t invent anything here, but I hope this tip will be useful to you someday. Feel free to leave a thumbs up, a comment, whatever you like, and above all subscribe to be automatically notified when a new video comes out!

Going to retire Fedora's OmegaT package

Posted by Ismael Olea on August 16, 2017 10:00 PM

OmegaT logo

Well, the time has come and I must face my responsibility on this.

My first important package in Fedora was OmegaT. AFAIK, OmegaT is the best FLOSS computer-aided translation (CAT) tool available. Over time, OmegaT has enjoyed very active development, with a significant (to me) handicap: new releases add new features with new dependencies on Java libraries that are not available in Fedora. As you know perfectly well, updating the package requires adding each one of those libraries as a new package. But I can’t find the time for such an effort. That’s the reason the last Fedora version is 2.6.3 while the latest upstream releases are 3.6.0 / 4.1.2.

So, I give up. I want to retire the package from Fedora because I’m sure I will not be able to update it anymore.

I’ll wait a few days for someone to express interest in taking ownership. Otherwise I’ll start the retirement process.

PS: OTOH, I plan to publish OmegaT as a Flatpak package via Flathub. It seems to me it would be a lot easier to maintain that way. I’m aware Flathub is out of the scope of Fedora :-/

PPS: I sent an announcement to the Fedora devel mailing list.

Remembering planet Chitón

Posted by Ismael Olea on August 16, 2017 10:00 PM

Planeta Chitón, your friends have not forgotten you.

New badge: Badger Padawan !

Posted by Fedora Badges on August 16, 2017 04:03 PM
Badger PadawanYou attended a Fedora Badges workshop! May the badger be with you...

FreeNAS and check_mk

Posted by Jens Kuehnel on August 16, 2017 12:06 PM


I’m setting up two FreeNAS servers for backup and archiving, and I really like FreeNAS 11. Thank goodness I didn’t have time to update to FreeNAS Corral. 🙂

But I’m using check_mk for monitoring and I would like to use it to monitor FreeNAS as well. There is a check_mk agent for FreeBSD, so the only problem is running it.

I created this script to run it as an Init/Shutdown Script (both pre-init and post-init). It will create everything you need; just define BASEDIR at the beginning and put the check_mk_agent for FreeBSD in that directory. Make sure this script (check_mk_setup) and check_mk_agent are executable.

You also need to make sure inetd is running. I enable tftpd for that. Other services might work as well, but I have only tested it with tftpd.

if ! grep checkmk /conf/base/etc/inetd.conf &> /dev/null; then
  echo "checkmk stream tcp nowait root $BASEDIR/check_mk_agent check_mk_agent" >> /conf/base/etc/inetd.conf
fi

if ! grep checkmk /conf/base/etc/services &> /dev/null; then
  echo "checkmk 6556/tcp #check_mk" >> /conf/base/etc/services
fi

if ! grep checkmk /etc/services &> /dev/null; then
  echo "checkmk 6556/tcp #check_mk" >> /etc/services
fi

killall -1 inetd

After the next reboot the system can be monitored by check_mk. It even survived the upgrade from FreeNAS 10 to 11.

LxQT Test Day: 2017-08-17

Posted by Fedora Community Blog on August 16, 2017 09:06 AM

Thursday, 2017-08-17, is the LxQT Test Day! As part of this planned Change for Fedora 26, we need your help to test LxQT!

Why test LxQT?

LXQt is the Qt port and the upcoming version of LXDE, the Lightweight Desktop Environment. It is the product of the merge between the LXDE-Qt and the Razor-qt projects: A lightweight, modular, blazing-fast and user-friendly desktop environment.

We hope to see whether it’s working well enough and catch any remaining issues.

We need your help!

All the instructions are on the wiki page, so please read through and come help us test! As always, the event will be in #fedora-test-day on Freenode IRC.

Share this!

Help promote the Test Day and share the article in your own circles! Use any of the buttons below to help spread the word.

The post LxQT Test Day: 2017-08-17 appeared first on Fedora Community Blog.

IoT Security for Developers

Posted by Russel Doty on August 15, 2017 10:33 PM

Previous articles focused on how to securely design and configure a system based on existing hardware, software, IoT Devices, and networks. If you are developing IoT devices, software, and systems, there is a lot more you can do to develop secure systems.

The first thing is to manage and secure communications with IoT Devices. Your software needs to be able to discover, configure, manage and communicate with IoT devices. By considering security implications when designing and implementing these functions you can make the system much more robust. The basic guideline is don’t trust any device. Have checks to verify that a device is what it claims to be, to verify device integrity, and to validate communications with the devices.

Have a special process for discovering and registering devices, and restrict access to it. Do not automatically detect and register any device that pops up on the network! Have a mechanism for pairing devices with the gateway, such as a special pairing mode that must be invoked on both the device and the gateway, or a requirement to manually enter a device serial number or address into the gateway as part of the registration process. For industrial applications, adding devices is a deliberate process; this is not a good operation to fully automate!

A solid approach to gateway and device identity is to have a certificate provisioned onto the device at the factory, by the system integrator, or at a central facility. It is even better if this certificate is backed by a hardware root of trust that can’t be copied or spoofed.

Communications between the gateway and the device should be designed deliberately. Instead of a general network connection, which can be used for many purposes, consider using a specialized interface. Messaging interfaces are ideal for many IoT applications. Two of the most popular are MQTT (Message Queuing Telemetry Transport) and CoAP (the Constrained Application Protocol). In addition to their many other advantages, these messaging interfaces only carry IoT data, greatly reducing their usefulness as an attack vector.

Message based interfaces are also a good approach for connecting the IoT Gateway to backend systems. An enterprise message bus like AMQP is a powerful tool for handling asynchronous inputs from thousands of gateways, routing them, and feeding the data into backend systems. A messaging system makes the total system more reliable, more robust, and more efficient – and makes it much easier to implement large scale systems! Messaging interfaces are ideal for handling exceptions – they allow you to simply send the exception as a regular message and have it properly processed and routed by business logic on the backend.

Messaging systems are also ideal for handling unreliable networks and heavy system loads. A messaging system will queue up messages until the network is available. If a sudden burst of activity causes the network and backend systems to be overloaded the messaging system will automatically queue up the messages and then release them for processing as resources become available. Messaging systems allow you to ensure reliable message delivery, which is critical for many applications. Best of all, messaging systems are easy for a programmer to use and do the hard work of building a robust communications capability for you.
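The store-and-forward behavior described above can be sketched in a few lines of Python. This is a toy illustration of the concept, not a real messaging client such as an AMQP library:

```python
import queue

# Toy sketch of store-and-forward messaging: messages queue up while the
# backend is unavailable and are drained, in order, once it comes back.
class StoreAndForward:
    def __init__(self):
        self.pending = queue.Queue()   # buffered, undelivered messages
        self.delivered = []            # stands in for the backend
        self.backend_up = False

    def send(self, msg):
        self.pending.put(msg)          # enqueue first: delivery is decoupled
        self.flush()

    def flush(self):
        while self.backend_up and not self.pending.empty():
            self.delivered.append(self.pending.get())

bus = StoreAndForward()
bus.send("temp=21.5")                  # backend down: message is queued
bus.send("temp=21.7")
bus.backend_up = True                  # network/backend recovers
bus.flush()
print(bus.delivered)                   # ['temp=21.5', 'temp=21.7']
```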

No matter what type of interface you are using, it is critical to sanitize your inputs. Never just pass through information from a device; instead, check that it is properly formatted, that it makes sense, that it does not contain a malicious payload, and that the data has not been corrupted. The overall integrity of an IoT system is greatly enhanced by ensuring the quality of the data it is operating on. Perhaps the best example of this is Little Bobby Tables from XKCD (XKCD.com):

Importance of sanitizing your input.


On a more serious level, poor input sanitization is responsible for many security issues. Programmers should assume that users can’t be trusted and that every interaction is a potential attack.
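The Bobby Tables lesson translates directly into code: pass untrusted input to the database as a bound parameter instead of concatenating it into the query string. A minimal Python/sqlite3 sketch (illustrative only, not tied to any particular IoT stack):

```python
import sqlite3

# Untrusted input handled safely via a bound parameter: the driver treats
# it as data, so the embedded SQL never executes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

untrusted = "Robert'); DROP TABLE students;--"
conn.execute("INSERT INTO students (name) VALUES (?)", (untrusted,))

# The table survives and the hostile string is stored as plain data.
rows = conn.execute("SELECT name FROM students").fetchall()
print(rows[0][0])                      # Robert'); DROP TABLE students;--
```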

Bodhi 2.9.1 released

Posted by Bodhi on August 15, 2017 09:22 PM

2.9.1 is a security release for CVE-2017-1002152.

Release contributors

Thanks to Marcel for reporting the issue. Randy Barlow wrote the fix.

Fedora Council Summer 2017 Election Results

Posted by Till Maas on August 15, 2017 09:06 PM


The results for the Fedora Council Summer 2017 Election are published. Congratulations to Justin W. Flory for winning! He is very committed and I am looking forward to his efforts to improve communication in Fedora.

Also, I would like to thank everyone who voted for me. Thank you very much for the trust you put in me! Since the FESCo election was restarted, you have to vote again if you voted last week. On a related note, my candidate interview is now available at the Community Blog. Please let me know if you have any questions or remarks.

A proposal: Ambassadors and Fedora strategy

Posted by Fedora Community Blog on August 15, 2017 05:06 PM

Fedora is big. We are a huge community of people with diverse interests. We have different ideas for what we want to build, and we want different things in return from our collective effort. At the same time, we are one project with shared goals and limited resources. We are more effective in this competitive world when we agree on common goals and work towards those, rather than everyone going in the direction each person thinks is best individually.¹

The Fedora Council is tasked with taking community input and shaping this shared strategy. As part of this, we’ve written a new mission statement and have a draft overview page presenting it. We’ve said for a while that we want the work of Fedora Ambassadors to align with this mission directly. We’re getting feedback, though, that this is easier to say than to put into practice, which is understandable because, by nature, mission statements are high-level.

So, I have a proposal. As part of the Fedora Council’s charter, we have Fedora Objectives:

On an ongoing basis, including sessions at Flock and in public online meetings, the Council will identify two to four key community objectives with a timeframe of approximately eighteen months, and appoint Objective Leads for each goal. […]

Each objective will be documented with measurable goals, and the objective lead is responsible for coordinating efforts to reach those goals, evaluating and reporting on progress, and working regularly with all relevant groups in Fedora to ensure that progress is made.

I propose that from now forward, all events and spending by Ambassadors should be directly related to the target audience of a Fedora Edition or to a current Objective.²

Each Edition has a Product Requirements Document which describes the specific use-cases it is meant to address and gives a target audience for each — Atomic Host, Server, and Workstation. We should not aim scattershot at general audiences and hope some aspect of Fedora resonates. Instead, we should go to events centered around these specific groups of people and demonstrate the solutions we have for their real-world problems.

Unlike the mission, Objectives are scoped to a 12-18 month timeframe, and are concrete and immediately actionable. Each has an Objective Lead who is a subject-matter expert on the topic and who can be a resource for identifying related conferences and outreach opportunities. And, by definition, these Objectives will be aligned with the mission and broader project goals.

You might be, at this point, saying “But wait! I personally don’t care about any of the Editions or Modularity or Continuous Integration! Am I left out, now?”

Actually, not at all. We do have many different interests, and there is room for 2-4 concurrent Objectives. Anyone in the community can put together a proposal, and if we collectively agree that it’s important, anyone can be the Objective Lead. So, if many Ambassadors feel there’s something Fedora should be doing that isn’t covered currently, there is a straightforward path — form an Objective around it.

An Objective is a statement of a goal that is achievable in a year or year and a half, along with a plan to measure the results. Objectives could be technical advances, but they wouldn’t have to be. Examples³ might include:

Fedora for Students:

  • We increase Fedora’s popularity among university students through Install Days and new Fedora User Groups.
    • Measurable Result: We will have 100 install days at Universities in the next 12 months, with Fedora installed on 10,000 new systems.
    • Measurable Result: We will have 10 new Fedora User Groups with regular attendance in the next 12 months.

Fedora Python Classroom (For the Win):

  • We get Fedora’s Python Classroom Lab into classrooms worldwide.
    • Measurable Result: 10 professors or teachers new to Python Classroom using it in the next 12 months.
    • Measurable Result: 10,000 views on YouTube tutorials based around Python Classroom.

Release Parties (for New Contributors):

  • We will raise awareness of Fedora by holding well-publicized release-day parties committed to attracting and onboarding new contributors.
    • Measurable Result: 10 parties held at locations across the globe with consistent branding and collective marketing.
    • Measurable Result: 10 new Fedora accounts from each party.
    • Measurable Result: 10 new active contributors at the end of 12 months.

Leading an Objective is work and a real commitment, but I don’t think that’s a problem for this proposal. In fact, it’s a strength — if there isn’t enough community interest to support an Objective, it’s probably not something we should be focusing hundreds of other people on, either.⁴

I suggest that Ambassadors as an organization focus on covering our Objectives and the Editions every year, worldwide. Let’s discuss this idea, and if we generally agree, I would like FAmSCo to adopt this as policy going forward. I’m posting this to the Fedora Community Blog, to the Fedora Ambassador’s Mailing List, and to the Fedora Council Discussion List. Since the Ambassador’s list isn’t open to the public, let’s use the Council list as the primary place for this conversation — thanks!

— Matthew Miller, Fedora Project Leader

FPL Badge





  1. That doesn’t mean we all have to do the same thing, or even completely agree. Recommended reading: this great site on consensus-based decision-making: http://www.consensusdecisionmaking.org/
  2. Although this location may change soon, the current list is at https://fedoraproject.org/wiki/Objectives. Currently, Modularity Prototype (objective, docs)  is the only active Objective, but we also are considering a proposal for Fedora Atomic Continuous Integration (objective, docs).
  3.  Thanks to Langdon for suggesting non-technical Objective ideas. I’ve given one example focused on growing a certain user audience, one on promoting a particular solution Fedora contributors have built, and one on growing the Fedora contributor community itself. If you are particularly inspired by any of these, I’d be happy to work on fleshing out a full Objective proposal.
  4. None of this means that people are blocked from anything constructive they want to work on, even if it’s not something we collectively identify as a focus. We will have more success creating and sustaining momentum with a directed official effort, but as always in open source, I expect individual people to put effort towards what they personally find interesting — that’s as it should be!

The post A proposal: Ambassadors and Fedora strategy appeared first on Fedora Community Blog.

Google - all features and options.

Posted by mythcat on August 15, 2017 04:48 PM
Not all Google options are available in all countries.
You should choose options depending on your country and their availability.
This will spare us unsuccessful attempts and queries to Google.
Here are all the Google options available now.

My first Keynote at CONECIT 2017 in Tingo Maria

Posted by Julita Inca Chiroque on August 15, 2017 04:33 PM

Yesterday I opened the keynote session at CONECIT 2017 with a talk of an hour and a half. I presented some of the experiences I have had with HPC (High Performance Computing) in universities and at ISC 2016, to show what is going on in the world of HPC, not only in architecture but also in programming. The video is coming soon 🙂 It was a large audience, with more than 1000 students and professionals from Computer Science and all the Engineering Schools in Peru. People answered the questions I asked and seemed very interested in the topic. I want to thank all the people who helped me backstage; this is not only my effort, this is a community effort! Thanks Leyla Marcelo and Toto Cabezas, part of GNOME Lima! ❤ Thanks so much, CONECIT 2017 – Tingo Maria 😀

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: CONECIT, CONECIT 2017, CONECIT TGI, CONECIT Tingo Maria, fedora, GNO, GNOME, High Performance, HPC, HPC in the jungle, Julita Inca, Julita Inca Chiroque, KeyNote, Selva Peru

Episode 59 - The VPN Episode

Posted by Open Source Security Podcast on August 15, 2017 03:14 PM
Josh and Kurt talk about VPNs and the upcoming eclipse.


Show Notes

The workshop on Fedora Hubs at Flock 2017 will be awesome

Posted by Aurélien Bompard on August 15, 2017 03:08 PM

TL;DR: come to the Hubs workshop at Flock! 🙂

This is a shameless plug, I admit.

In a couple weeks, a fair number of people from the Fedora community will gather near Boston for the annual Flock conference. We’ll be able to update each other and work together face-to-face, which does not happen so often in Free Software.

For some months I’ve been working on the Fedora Hubs project, a web interface to make communication and collaboration easier for Fedora contributors. It really has the potential to change the game for a lot of us who still find some internal processes a bit tedious, and to greatly help new contributors.

The Fedora Hubs page is a personalized user or group page composed of many widgets which can individually inform you, remind you, or help you tackle any part of your contributor life in the Fedora project. And it updates in real time.

I’ll be giving a workshop on Wednesday 30th at 2:00PM to introduce developers to Hubs widgets. In half an hour, I’ll show you how to make a basic widget that will already be directly useful to you if you’re a packager. Then you’ll be able to join us in the following hackfest and contribute to Hubs. Maybe you have a great idea for a widget that would simplify your workflow. If so, that will be the perfect time to design and/or write it.

You need to know Python, and be familiar with basic web infrastructure technologies: HTML and CSS, requests and responses, etc. No Javascript knowledge needed at that point, but if you want to make a complex widget you’ll probably need to know how to write some JS (jQuery or React). The Hubs team will be around to help and guide you.

The script of the workshop is here: https://docs.pagure.org/fedora-hubs-widget-workshop/. Feel free to test it out and tell me if something goes wrong in your environment. You can also play with our devel Hubs instance, that will probably give you some ideas for the hackfest.

Remember folks: Hubs is a great tool, it will (hopefully) be central to contributors’ workflows throughout the Fedora project, and it’s the perfect time to design and write the widgets that will be useful for everyone. I hope to see you there! 🙂

ANNOUNCE: virt-viewer 6.0 release

Posted by Daniel Berrange on August 15, 2017 02:20 PM

I am happy to announce a new bugfix release of virt-viewer 6.0 (gpg), including experimental Windows installers for Win x86 MSI (gpg) and Win x64 MSI (gpg). The virsh and virt-viewer binaries in the Windows builds should now successfully connect to libvirtd, following fixes to libvirt’s mingw port.

Signatures are created with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R)

All historical releases are available from:


Changes in this release include:

  • Mention use of ssh-agent in man page
  • Display connection issue warnings in main window
  • Switch to GTask API
  • Add support for changing CD ISO with oVirt foreign menu
  • Update various outdated links in README
  • Avoid printing password in debug logs
  • Pass hostname to authentication dialog
  • Fix example URLs in man page
  • Add args to virt-viewer to specify whether to resolve VM based on ID, UUID or name
  • Fix misc runtime warnings
  • Improve support for extracting listening info from XML
  • Enable connecting to SPICE over UNIX socket
  • Fix warnings with newer GCCs
  • Allow controlling zoom level with keypad
  • Don’t close app during seamless migration
  • Don’t show toolbar in kiosk mode
  • Re-show auth dialog in kiosk mode
  • Don’t show error when cancelling auth
  • Change default screenshot name to ‘Screenshot.png’
  • Report errors when saving screenshot
  • Fix build with latest glib-mkenums

Thanks to everyone who contributed towards this release.

My first GUADEC

Posted by Patrick Griffis on August 15, 2017 12:00 PM

This year marks the 20th anniversary of the GNOME project and I had the opportunity to attend GUADEC for the first time which gave some great insight into both the past and the future of GNOME.

The Past

The first talk, Allan Day’s “The GNOME Way”, was a perfect opening for the conference, as it established the core principles that make GNOME the project it is today. It reminded us that there are reasons behind what we do and ideals we should keep striving for. The Havoc blog post about preferences mentioned there was also a good read for those who believe adding settings is a solution to problems.

Jonathan Blandford’s talk “The History of GNOME” established more of the why behind the principles of the project, put into context the events that led to GNOME being what it is today, and introduced some of the key players along the way.

The Future


Firstly, Flatpak continues to improve greatly (sadly Alex’s video was lost), and Flathub, which I’ve been helping with, has now officially launched. There was a BoF where we discussed a variety of issues that still need solving, but everything seems on track for Flatpak to become the official way users get GNOME software in the future.


Last year at LAS I met Jussi and learned a lot about Meson, and since then I have been actively contributing to it. I was extremely pleased that, in such a short time, the GNOME community has really championed it as its build system of choice, with dozens of projects ported.

I also had the chance to fix a few small bugs users had hit and to get some lingering PRs merged. One thing I sadly didn’t get closure on was finishing automatic post-install steps. I made a work-in-progress branch, but there were some edge cases it doesn’t handle (see the FIXME comments).


Tristian spoke about BuildStream, which finally seems to be picking up some steam as a potential replacement for JHBuild, and which is possibly usable by Continuous and Flatpak. Considering how broad its usage could be, I really need to try it out with some real projects. I did add Meson support to it though!


A major problem GNOME has is a lack of developers, and it is difficult for new developers to get into developing for the platform. Carlos Soriano and Bastian Ilsø spoke about their progress on the Newcomer initiative. I’ve been following this for a while from within the Builder project and am very excited to see where it goes.

I recently landed new project templates in Builder that use Meson and now offer JavaScript as a language option. My hope is that, with the recent GJS improvements Philip spoke about and the recently hosted documentation I helped with, we can offer an easy path for newcomers to start writing new software. I hope to help out with a guide or tutorial for this in the near future.

The Conclusion

This GUADEC turned out well, and I had the chance to meet tons of great people from the project. I am even more excited to see where the project goes from here: the Flatpak and Newcomer initiatives, as well as Ubuntu support, have the potential to greatly increase the number of contributors.

GNOME Foundation Sponsored Badge

ANNOUNCE: libosinfo 1.1.0 release

Posted by Daniel Berrange on August 15, 2017 11:09 AM

I am happy to announce a new release of libosinfo version 1.1.0 is now available, signed with key DAF3 A6FD B26B 6291 2D0E 8E3F BE86 EBB4 1510 4FDF (4096R). All historical releases are available from the project download page.

Changes in this release include:

  • Force UTF-8 locale for new glib-mkenums
  • Avoid python warnings in example program
  • Misc test suite updates
  • Fix typo in error messages
  • Remove ISO header string padding
  • Disable bogus gcc warning about unsafe loop optimizations
  • Remove reference to fedorahosted.org
  • Don’t hardcode /usr/bin/perl, use /usr/bin/env
  • Support eject-after-install parameter in OsinfoMedia
  • Fix misc warnings in docs
  • Fix error propagation when loading DB
  • Add usb.ids / pci.ids locations for FreeBSD
  • Don’t include private headers in gir/vapi generation

Thanks to everyone who contributed towards this release.

Introducing InfluxDB: Time-series database stack

Posted by Justin W. Flory on August 15, 2017 08:30 AM
Introducing InfluxDB: Time-series database stack

Article originally published on Opensource.com.

The needs and demands of infrastructure environments change every year. With time, systems become more complex and involved. But when infrastructure grows and becomes more complex, it’s meaningless if we don’t understand it and what’s happening in our environment. This is why monitoring tools and software are often used in these environments, so operators and administrators can see problems and fix them in real time. But what if we want to predict problems before they happen? Collecting metrics and data about our environment gives us a window into how our infrastructure is performing and lets us make predictions based on data. When we know and understand what’s happening, we can prevent problems before they happen.

But how do we collect and store this data? For example, if we want to collect data on the CPU usage of 100 machines every ten seconds, we’re generating a lot of data. On top of that, what if each machine is running fifteen containers? What if you want to generate data about each of those individual containers too? What about per process? This is where time-series data becomes helpful. Time-series databases store time-series data. But what does that mean? We’ll explain all of this and more, and introduce you to InfluxDB, an open source time-series database. By the end of this article, you will understand…

  • What time-series data and time-series databases are
  • The basics of InfluxDB and the TICK stack
  • How to install InfluxDB and other TICK stack tools

Introducing time-series concepts


Example of table, or how a RDBMS like MySQL stores data. Image from DevShed.

If you’re familiar with relational database management systems (RDBMS), like MySQL, then tables, columns, and primary keys are familiar terms. Everything is like a spreadsheet, with columns and rows. Some data might be unique, other parts might be the same as in other rows. RDBMSs like MySQL are widely used and are great for reliable transactions that follow ACID (Atomicity, Consistency, Isolation, Durability) compliance.

With relational database software, you’re usually working with data that you could model in a table. You might update certain data by overwriting and replacing it. But what if you’re collecting data on something that generates a lot of data and you want to watch it change over time? Take a self-driving car. The car constantly collects information about its environment and analyzes changes over time to behave correctly. The amount of data might be tens of gigabytes an hour. While you could use a relational database to collect this data, it isn’t built for this. When it comes to scaling and usability of the data you’re collecting, an RDBMS isn’t the best tool for the job.

Why time-series is a good fit

And this is where time-series data makes sense. Let’s say you’re collecting data about a city’s traffic, temperatures from farming equipment, or the production rate of an assembly line. Instead of going into a table with rows and columns, imagine pushing multiple rows of data uniquely sorted by a timestamp. This visual might help make sense of it.


Imagine rows and rows of data, uniquely sorted by timestamps. Image from Timescale.

Having the data in this format makes it easier to track and watch changes over time. As data accumulates, you can see how something behaved in the past, how it’s behaving now, and how it might behave in the future. Your options for making smarter, data-driven decisions expand!

Curious how the data is stored and formatted? It depends on the time-series database (TSDB) you use. InfluxDB stores the data in the Line Protocol format. Queries return the data in JSON.


How InfluxDB stores time-series data in Line Protocol. Image from Roberto Gaudenzi.

If you’re still confused or trying to understand time-series data or why you would want to use it over another solution, you can read an excellent, in-depth explanation from Timescale’s blog or InfluxData’s blog.
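To make the Line Protocol shape concrete, here is a rough sketch that formats one point. The helper name is made up for illustration (it is not part of any client library), and real Line Protocol additionally requires escaping special characters and marking field types (for example, a trailing `i` for integers), which this sketch skips:

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one point in the Line Protocol shape:
    measurement,tag=value field=value timestamp"""
    tag_str = ",".join("%s=%s" % (k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    return "%s,%s %s %d" % (measurement, tag_str, field_str, timestamp_ns)

point = to_line_protocol("disk_stats", {"host": "server01"},
                         {"diskspace_used": 42.5}, 1502800000000000000)
print(point)  # disk_stats,host=server01 diskspace_used=42.5 1502800000000000000
```

The timestamp is in nanoseconds, which matches the precision InfluxDB supports.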

InfluxDB: A time-series database

InfluxDB is an open source time-series database developed by InfluxData. It’s written in Go (a compiled language), which means you can start using it without installing any dependencies. It supports multiple data ingestion protocols, such as Telegraf (also from InfluxData), Graphite, collectd, and OpenTSDB. This leaves you with flexible options for how you want to collect data and where you’re pulling it from. It’s also one of the fastest-growing time-series databases available. You can find the source code for InfluxDB on GitHub.

This article will focus on three tools in InfluxData’s TICK stack and how you can use them to build a time-series database and begin collecting and processing data.

TICK stack

InfluxData creates a platform based on four open source projects that work and play well with each other for time-series data. When used together, you can collect, store, process, and view the data easily. The four pieces of the platform are known as the TICK stack. This stands for…

  • Telegraf: Plugin-driven server agent for collecting / reporting metrics
  • InfluxDB: Scalable data store for metrics, events, and real-time analytics
  • Chronograf: Monitoring / visualization UI for TICK stack (not covered in this article)
  • Kapacitor: Framework for processing, monitoring, and alerting on time-series data

These tools work and integrate well with the other pieces by design. However, it’s also easy to substitute one piece out for another tool of your choice. For this article, we’ll explore three parts of the TICK stack: InfluxDB, Telegraf, and Kapacitor.


Diagram of how the different components of the TICK stack connect with each other. From influxdata.com.


InfluxDB

As mentioned before, InfluxDB is the time-series database (TSDB) of the TICK stack. Data collected from your environment is stored in InfluxDB. There are a few things that make InfluxDB stand out from other time-series databases.

Emphasis on performance

InfluxDB is designed with performance as one of the top priorities. This allows you to use data quickly and easily, even under heavy loads. To do this, InfluxDB focuses on quickly ingesting the data and using compression to keep it manageable. To query and write data, it uses an HTTP(S) API.
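Purely as an illustration of that HTTP API (this assumes a default local instance listening on port 8086, and `mydb` is an invented database name), creating a database and writing one Line Protocol point looks roughly like this:

$ curl -XPOST 'http://localhost:8086/query' --data-urlencode 'q=CREATE DATABASE mydb'
$ curl -XPOST 'http://localhost:8086/write?db=mydb' \
    --data-binary 'disk_stats,host=server01 diskspace_used=42.5'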

These performance figures are noteworthy considering the amount of data InfluxDB is capable of handling: up to a million points of data per second, with timestamp precision down to the nanosecond.

SQL-like queries

If you’re familiar with SQL-like syntax, querying data from InfluxDB will feel familiar. It uses its own SQL-like syntax, InfluxQL, for queries. As an example, imagine you’re collecting data on used disk space on a machine. If you wanted to see that data, you could write a query that might look like this.

SELECT mean(diskspace_used) AS mean_disk_used
FROM disk_stats
WHERE time >= now() - 90d
GROUP BY time(10d)

If you’re familiar with SQL syntax, this won’t feel too different. The above statement pulls the mean used disk space over the past three months (90 days) and groups it into ten-day intervals.

Downsampling / data retention

When working with large amounts of data, storing it becomes a concern. Over time, it can accumulate to huge sizes. With InfluxDB, you can downsample into less precise, but smaller metrics that you can store for longer periods of time. Data retention policies for your data enable you to do this.

For example, pretend you have sensors collecting data on the amount of RAM in a number of machines. You might collect metrics on the memory in use by multiple users, the system, cached memory, and more. While it might make sense to keep that data for thirty days to watch what’s happening, after thirty days you might not need that level of precision. Instead, you might only want the ratio of total memory to memory in use. Using data retention policies, you can tell InfluxDB to keep the precise data for all the different usages for thirty days. After thirty days, the data can be averaged into something less precise, which you can keep for six months, forever, or however long you like. This strikes a balance between keeping historical data and reducing disk usage.
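As a sketch of what this looks like in InfluxQL (the database, policy, and measurement names here are invented for the example), a pair of retention policies plus a continuous query do the downsampling:

```sql
-- Keep raw data for 30 days; make this the default policy.
CREATE RETENTION POLICY "raw" ON "metrics" DURATION 30d REPLICATION 1 DEFAULT

-- Keep downsampled data for six months.
CREATE RETENTION POLICY "six_months" ON "metrics" DURATION 26w REPLICATION 1

-- Continuously average memory usage into ten-minute points
-- stored under the longer-lived policy.
CREATE CONTINUOUS QUERY "mem_10m" ON "metrics" BEGIN
  SELECT mean("used_percent") AS "used_percent"
  INTO "metrics"."six_months"."mem"
  FROM "mem"
  GROUP BY time(10m)
END
```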


Telegraf

If InfluxDB is where all of your data is going, you need a way to collect and gather the data first. Telegraf is a metric collection daemon that gathers various metrics from system components, IoT sensors, and more. It’s open source and written completely in Go. Like InfluxDB, Telegraf is also written by the InfluxData team and is built to work with InfluxDB. It also includes support for different databases, such as MySQL / MariaDB, MongoDB, Redis, and more. You can read more about it on InfluxData’s website.

Telegraf is modular and heavily plugin-based. This means it can be as lean and minimal, or as full-featured and complex, as you need. Out of the box, it supports over a hundred plugins for various input sources, including Apache, Ceph, Docker, IPTables, Kubernetes, NGINX, and Varnish, to name a few. You can see all the plugins, including processing and output plugins, in their README.
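A minimal telegraf.conf sketch along those lines might look like the following; the cpu, disk, and influxdb plugins are stock Telegraf plugins, while the interval, URL, and database name are example values to adjust for your setup:

```toml
# Collect metrics every ten seconds.
[agent]
  interval = "10s"

# Input plugins: basic CPU and disk usage.
[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.disk]]

# Output plugin: write everything to a local InfluxDB.
[[outputs.influxdb]]
  urls = ["http://localhost:8086"]
  database = "telegraf"
```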

Even if you’re not using InfluxDB as a data store, you may find Telegraf useful as a way to collect this data and information about your systems or sensors.


Kapacitor

Now we have a way to collect and store our data. But what about doing things with it? Kapacitor is the piece of the stack that lets you process and work with the data in a few different ways. It supports both stream and batch data. Stream data means you can actively work and shape the data in real-time, even before it makes it to your data store. Batch data means you retroactively perform actions on samples, or batches, of the data.

One of the biggest pluses for Kapacitor is that it enables you to have real-time alerts for events happening in your environment. CPU usage overloading or temperatures too high? You can set up several different alert systems, including but not limited to email, triggering a command, Slack, HipChat, OpsGenie, and many more. You can see the full list in the documentation.
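For a flavor of how a streaming alert is defined, here is a minimal sketch in Kapacitor’s TICKscript language; the 10% threshold and the log path are arbitrary examples:

```
// Alert when idle CPU drops below 10% on any incoming point.
stream
    |from()
        .measurement('cpu')
    |alert()
        .crit(lambda: "usage_idle" < 10)
        .log('/tmp/cpu_alert.log')
```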

Like the previous tools, Kapacitor is also open source and you can read more about the project in their README.

Installing the TICK stack

Packages are available for nearly every distribution. You can install these packages from the command line. Use the instructions for your distribution.


Fedora

sudo dnf install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm

CentOS 7 / RHEL 7

sudo yum install https://dl.influxdata.com/influxdb/releases/influxdb-1.3.1.x86_64.rpm \
https://dl.influxdata.com/telegraf/releases/telegraf-1.3.4-1.x86_64.rpm \
https://dl.influxdata.com/kapacitor/releases/kapacitor-1.3.1.x86_64.rpm

Ubuntu / Debian

wget https://dl.influxdata.com/influxdb/releases/influxdb_1.3.1_amd64.deb \
https://dl.influxdata.com/telegraf/releases/telegraf_1.3.4-1_amd64.deb \
https://dl.influxdata.com/kapacitor/releases/kapacitor_1.3.1_amd64.deb
sudo dpkg -i influxdb_1.3.1_amd64.deb telegraf_1.3.4-1_amd64.deb kapacitor_1.3.1_amd64.deb

Other distributions

For help with other distributions, see the Downloads page.

See the data, be the data

Now that you have the tools installed, you can experiment with some of them. There’s plenty of upstream documentation on all three projects. You can find the docs here:

Additionally, for more help, you can visit the InfluxData community forums. Happy hacking!

The post Introducing InfluxDB: Time-series database stack appeared first on Justin W. Flory's Blog.

Calling all UX peeps

Posted by Suzanne Hillman (Outreachy) on August 15, 2017 01:27 AM

Yesterday I mentioned a discussion I was involved with on Facebook in which someone on the board of UXPA Boston suggested that I could organize a program for UX newbies and career changers.

I’m really pleased by this idea, and very glad she suggested it. However, before I bring my ideas to the board and get advice and help, I want to have slightly more clue than I currently have.

So, research!

The best way I can think of to get more clue is to talk to people in the UX space. I’d like to talk to other people who are new, people who do the hiring, and people who are working in UX with other UX team members.

UX Job Seekers

Based on my instincts and some of the suggestions on the FB discussion, I suspect people trying to get into UX full-time struggle with:

  1. Getting experience
  2. How to best structure their portfolio and resume
  3. Becoming known to companies

Some off-the-cuff ideas of ways to help with these:

  1. Internships, co-ops, programs like Outreachy, Google Summer of Code, and Akamai’s Technical Academy, mentorship, apprenticeship, small multi-person design projects, and UX hackathons
  2. Finding mentors, having get-togethers to review portfolios and resumes (among each other), and developing sustainable ways to get feedback from hiring managers
  3. Things that I listed in option #1, company visits, and informational interviews

UX Hiring Managers

I have a lot of interesting ideas above, but I would need to know more about what hiring managers are looking for to understand what would be most useful.

For example, in an ideal world, what do hiring managers want to see from candidates? What would be most useful to determine if they want to take a chance on someone? What do they want to see them do, have done, or be interested in doing? What do they _not_ want to see? What do they struggle with figuring out, but very much want in their employees?

People currently on UX teams

Of course, not only do I need to know what hiring managers look for, but I’d like to better understand what people look for in their co-workers.

For example, what do UXers find most useful when working with other UXers? What do they especially dislike? How well do their hiring practices seem to tease these out? What do you most appreciate in your co-workers?

How can you help?

If you are in UX, or trying to get into UX, talk to me! Comment or email me!

Reading the system journal from the last boot with systemd

Posted by Jean-Baptiste Holcroft on August 14, 2017 10:00 PM

Writing a bug report is a good thing, but getting at the system logs is not always obvious…

Thanks to this article on systemd, I understood how to find the log entries tied to each boot of the machine. Since I have been struggling without this for several years, I …
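For reference, the journalctl flags this boils down to are these (boot offsets count back from the current boot):

$ journalctl --list-boots   # enumerate boots known to the journal, with offsets and IDs
$ journalctl -b -1          # show the journal from the previous boot
$ journalctl -b -1 -p err   # only messages of priority "err" and above, handy for bug reports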

Fedora Classroom Session 4

Posted by Fedora Magazine on August 14, 2017 08:43 PM

The Fedora Classroom sessions continues this week. You can find the general schedule for sessions on the wiki. You can also find resources and recordings from previous sessions there.

Here are details about this week’s session on Friday, August 18 at 1300 UTC.


Eduard Lucena is an IT Engineer and an Ambassador from the LATAM region. He started working with the community by publishing a simple article in the Magazine. Right now he actively works in the Marketing group and aims to be a FAmSCO member for the Fedora 26 release. He works in the telecommunication industry and uses the Fedora Cinnamon Spin as his main desktop, both at work and home. He isn’t a mentor, but tries to on-board people into the project by teaching them how to join the community in any area. His motto is: “Not everything is about the code.”

Topic: Vim 101

Like many classic utilities developed during UNIX’s early years, vi has a reputation for being hard to navigate. Bram Moolenaar’s enhanced and optimized clone, Vim (“vi Improved”), is the default editor in almost all UNIX-like systems. The world has come a long way since Vim was written, and even though system resources have grown, many people still stick with the Vim editor, including Fedora.

This hands-on session will teach you about the different Vim versions packaged in Fedora. Then, we’ll go deeper into how to use this powerful tool. We’ll also teach you how not to flounder trying to close the editor!

Joining the session

Since this is a hands-on session, you’ll want to have a Linux installation to follow it properly. Preferably you’ll have Vim installed with full features. If you don’t have it, don’t worry — you’ll learn how to install it and what the differences are. No prior knowledge of the Vim editor is required.

This session will be held via IRC. The following information will help you join the session:

We hope you can attend and enjoy this experience from some of the people who work in the Fedora Project.

Photograph used in feature image is San Simeon School House by Anita Ritenour — CC-BY 2.0.

Downloading all the 78rpm rips at the Internet Archive

Posted by Richard W.M. Jones on August 14, 2017 08:43 PM

I’m a bit of a fan of 1930s popular music on gramophone records, so much so that I own an original early-30s gramophone player and an extensive collection of discs. So the announcement that the Internet Archive had released a collection of 29,000 records was pretty amazing.

[Edit: If you want a light introduction to this, I recommend this double CD]

I wanted to download it … all!

But apart from this gnomic explanation it isn’t obvious how, so I had to work it out. Here’s how I did it …

Firstly you do need to start with the Advanced Search form. Using the second form on that page, in the query box put collection:georgeblood, select the identifier field (only), set the format to CSV. Set the limit to 30000 (there are about 25000+ records), and download the huge CSV:

$ ls -l search.csv
-rw-rw-r--. 1 rjones rjones 2186375 Aug 14 21:03 search.csv
$ wc -l search.csv
25992 search.csv
$ head -5 search.csv

A bit of URL exploration found a fairly straightforward way to turn those identifiers into directory listings. For example:


What I want to do is pick the first MP3 file in the directory and download it. I’m not fussy about how to do that, and Python has both a CSV library and an HTML fetching library. This turns the CSV file of links into a list of MP3 URLs. You could easily adapt this to download FLAC files instead.


import csv
import re
import urllib2
import urlparse
from BeautifulSoup import BeautifulSoup

with open('search.csv', 'rb') as csvfile:
    csvreader = csv.reader(csvfile, delimiter=',', quotechar='"')
    for row in csvreader:
        # Skip the CSV header row.
        if row[0] == "identifier":
            continue
        url = "https://archive.org/download/%s/" % row[0]
        page = urllib2.urlopen(url).read()
        soup = BeautifulSoup(page)
        links = soup.findAll('a', attrs={'href': re.compile("\.mp3$")})
        # Only want the first link in the page.
        link = links[0]
        link = link.get('href', None)
        link = urlparse.urljoin(url, link)
        print link

When you run this it converts each identifier into a download URL:

Edit: Amusingly WordPress turns the next pre section with MP3 URLs into music players. I recommend listening to them!

$ ./download.py | head -10
https://archive.org/download/78_jeannine-i-dream-of-you-lilac-time_bar-harbor-society-orch.-irving-kaufman-shilkr_gbia0010841b/Jeannine%20I%20Dream%20Of%20You%20%22Lilac%20%20-%20Bar%20Harbor%20Society%20Orch..mp3
https://archive.org/download/78_a-prisoners-adieu_jerry-irby-modern-mountaineers_gbia0000549b/A%20Prisoner%27s%20Adieu%20-%20Jerry%20Irby%20-%20Modern%20Mountaineers.mp3
https://archive.org/download/78_if-i-had-the-heart-of-a-clown_bobby-wayne-joe-reisman-rollins-nelson-kane_gbia0004921b/If%20I%20Had%20The%20Heart%20of%20A%20Clown%20-%20Bobby%20Wayne.mp3
https://archive.org/download/78_how-many-times-can-i-fall-in-love_patty-andrews-and-tommy-dorsey-victor-young-an_gbia0013066b/How%20Many%20Times%20%28Can%20I%20Fal%20-%20Patty%20Andrews%20And%20Tommy%20Dorsey.mp3
https://archive.org/download/78_ill-forget-you_alan-dean-ball-burns-joe-lipman_gbia0002540a/I%27ll%20Forget%20You%20-%20Alan%20Dean%20-%20Ball%20-%20Burns.mp3
https://archive.org/download/78_it-aint-gonna-rain-no-mo-ya-no-va-a-llover_international-novelty-orchestra-wend_gbia0014114a/It%20Ain%27t%20Gonna%20Rain%20No%20M%20-%20International%20Novelty%20Orchestra.mp3
https://archive.org/download/78_i-still-keep-dreaming_leroy-holmes-and-his-orchestra-sourwine-johnny-corva_gbia0004815b/I%20Still%20Keep%20Dreaming%20-%20Leroy%20Holmes%20and%20his%20Orchestra.mp3
https://archive.org/download/78_it-aint-nobodys-bizness_lulu-belle--scotty-browne-sampsel-markowitz_gbia0010017a/It%20Ain%27t%20Nobody%27s%20Bizness%20-%20Lulu%20Belle%20%26%20Scotty.mp3
https://archive.org/download/78_i-still-get-a-thrill-thinking-of-you_art-lund-johnny-thompson-coots-davis_gbia0002767a/I%20Still%20Get%20A%20Thrill%20%28Thinking%20Of%20You%29%20-%20Art%20Lund.mp3
https://archive.org/download/78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a/In%20The%20Gloaming%20-%20Art%20Hickman%27s%20Orchestra.mp3
<audio class="wp-audio-shortcode" controls="controls" id="audio-7432-20" preload="none" style="width: 100%;"><source src="https://archive.org/download/78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a/In%20The%20Gloaming%20-%20Art%20Hickman%27s%20Orchestra.mp3?_=20" type="audio/mpeg">https://archive.org/download/78_in-the-gloaming_art-hickmans-orchestra-logan_gbia0006430a/In%20The%20Gloaming%20-%20Art%20Hickman%27s%20Orchestra.mp3</audio>

And after that you can download as many 78s as you can handle 🙂 by doing:

$ ./download.py > downloads
$ wget -nc -i downloads


I only downloaded about 5% of the tracks, but it looks as if downloading them all would be ~100 GB. Also, most of these tracks are still in copyright (thanks to insane copyright terms), so they may not be suitable for sampling on your next gramophone-rap record.

Update #2

Don’t forget to donate to the Internet Archive. I gave them $50 to continue their excellent work.

A LAMP server on Fedora 26

Posted by Ivan Fernandez Cid on August 14, 2017 08:12 PM
Setting up a LAMP server on Fedora is a fairly simple task. Below I describe how to do it. We install the web server (Apache httpd) the easy way (run all of these commands as root): dnf groupinstall "Web Server" **if it shows a version error (workstation, nonproduct), use: dnf groupinstall "Web Server" --skip-broken. Then the mariadb (mysql) server: dnf

RHEL 7.4 multimedia packages and Skype repository removal

Posted by Simone Caronni on August 14, 2017 06:33 PM

The upgrade path from Red Hat Enterprise Linux 7.3 to 7.4 is a bit of a pain if you have the multimedia repository configured. This is because I’m rebuilding a few components for an upgraded libwebp package, and because a lot of stuff has been rebased to versions that are in Fedora. Judging by the logs, most of the downloads come from CentOS systems, so I decided to hold back some updates that are required for the various package rebases for Red Hat Enterprise Linux 7.4. Until CentOS also releases version 7.4, I can’t make everyone happy, and some things (like GStreamer plugin updates) will stay at 7.3 versions. Hopefully the new CentOS release will come quickly enough.

Also, I decided to stop rebuilding the base packages to use a newer libwebp version. This had very few benefits and caused a lot of pain due to the huge number of packages involved in both the x86_64 and i686 variants. The packages affected by this weigh in at around 3 GB.

In RHEL 7.4 there are additional WebKit variants that would also require a rebuild. So, as of today, to update the packages from the EPEL 7 multimedia repository you should run this command:

rpm -e --nodeps GraphicsMagick && yum distro-sync && yum -y install GraphicsMagick

Hopefully you will get output similar to this:

Dependencies Resolved

 Package                         Arch        Version                  Repository         Size
 compat-ffmpeg-libs              x86_64      1:2.8.12-2.el7           epel-multimedia   5.6 M
 ffmpeg                          x86_64      1:3.3.3-2.el7            epel-multimedia   1.5 M
 ffmpeg-libs                     i686        1:3.3.3-2.el7            epel-multimedia   6.1 M
 ffmpeg-libs                     x86_64      1:3.3.3-2.el7            epel-multimedia   6.3 M
 gstreamer1-plugins-bad          x86_64      1:1.4.5-5.el7            epel-multimedia   1.8 M
 libavdevice                     x86_64      1:3.3.3-2.el7            epel-multimedia    63 k
 leptonica                       i686        1.72-2.el7               epel-multimedia   881 k
 leptonica                       x86_64      1.72-2.el7               epel              928 k
 libwebp                         i686        0.3.0-3.el7              base              169 k
 libwebp                         x86_64      0.3.0-3.el7              base              170 k
 lz4                             x86_64      1.7.3-1.el7              epel               82 k
 python-pillow                   x86_64      2.0.0-19.gitd1c6db8.el7  base              438 k
 webkitgtk                       x86_64      2.4.9-1.el7              epel               12 M
 webkitgtk3                      x86_64      2.4.9-6.el7              base               11 M
Installing for dependencies:
 libwebp0.6                      i686        0.6.0-1.el7              epel-multimedia   255 k
 libwebp0.6                      x86_64      0.6.0-1.el7              epel-multimedia   250 k

Transaction Summary
Install               ( 2 Dependent packages)
Upgrade    6 Packages
Downgrade  8 Packages

Total download size: 47 M
Is this ok [y/d/N]:

Basically, libwebp should come again from the main CentOS/RHEL channels and the libwebp0.6 package should come from the multimedia repository. All the packages which were rebuilt for the previous libwebp 0.5 update should be synced back to their proper versions.

If you don’t get this output but still get some dependency errors, you have to do some debugging. For example, ffmpeg-libs.i686 requires libssh.i686, but the version of libssh in CentOS extras is different from the one in EPEL (it really depends on which packages you have installed and which repositories are enabled), so I’m providing here the same version that is in CentOS extras, but in both variants.

Update 16th August 2017

If you get many qt5 errors during the transactions, keep in mind that RHEL 7.4 has been rebased massively, and everyone else (including EPEL) is catching up. As of today, if you have the following errors (trimmed down) in a Yum transaction:

Error: Package: gvfs-1.30.4-3.el7.x86_64 (rhel-x86_64-server-7)
Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: GraphicsMagick-1.3.26-3.el7.x86_64 (@epel-multimedia)
Error: Package: kf5-kdeclarative-5.36.0-1.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Transaction check error:
  file /usr/lib64/gstreamer-1.0/libgstopus.so from install of gstreamer1-plugins-bad-1:1.4.5-5.el7.x86_64 conflicts with file from package gstreamer1-plugins-base-1.10.4-1.el7.x86_64

You can do the following. For this error:

Error: Package: GraphicsMagick-1.3.26-3.el7.x86_64 (@epel-multimedia)


rpm -e --nodeps GraphicsMagick && yum -y install GraphicsMagick

All of the Qt 5 and KDE Frameworks 5 stuff:

Error: Package: qt5-qtwebkit-5.6.1-3.b889f46git.el7.x86_64 (epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: qt5-qtquickcontrols2-5.6.1-2.el7.x86_64 (@epel)
Error: Package: kf5-kdeclarative-5.36.0-1.el7.x86_64 (epel)

are in EPEL testing updates, so:

yum --enablerepo=epel-testing update


Error: Package: gvfs-1.30.4-3.el7.x86_64 (rhel-x86_64-server-7)
Transaction check error:
  file /usr/lib64/gstreamer-1.0/libgstopus.so from install of gstreamer1-plugins-bad-1:1.4.5-5.el7.x86_64 conflicts with file from package gstreamer1-plugins-base-1.10.4-1.el7.x86_64

are some of the packages that are rebased in RHEL 7.4. I’ve created a temporary repository for those, it will disappear once CentOS 7.4 is released as the packages will be integrated in the main multimedia repository. You can install it through:

yum-config-manager \

With the above repository it is possible to install all the other multimedia packages.

Skype repository removal

Skype 4.3 is 32-bit only, is now obsolete, and has been superseded by a package that actually lists proper dependencies. It is also one of the packages that required one of the above WebKit rebuilds in i686 form on RHEL/CentOS 7 x86_64.

If you have it installed, just remove it with:

yum remove webkitgtk.i686

The repository has been deleted; to install the new Skype provided version, just head to the following official link.

Summer 2017 Red Hat Intern Expo

Posted by Mary Shakshober on August 14, 2017 03:34 PM

Now that I’m wrapping up summer #2 as a Red Hat intern, the 2017 Intern Expo was a relatively familiar environment. This year’s event for the Boston/Westford interns was held in the Westford office on August 17th, in the same “classic middle school science fair” manner as 2016. This year, though, I came prepared with visuals, visuals, and yes, more visuals (I’m a graphic designer, it’s in my blood)! I created a site from scratch, which I had been working on in small bits and pieces throughout the summer, consisting of tutorials for getting involved in the Fedora Design-Team and Fedora-Badges groups, Fedora style basics, and a library of my entire summer of work. My original hope was to create the site using Fedora Bootstrap, but because of time constraints the static-HTML-to-Bootstrap conversion didn’t happen. Because I don’t have hosting for this site and cannot attach zip folders here, I’ve attached screenshots of the site!

My setup overall was my website running on my laptop as well as printouts of more of my print media designs for easy viewing. It was great to see a few familiar faces, to show fellow Red Hatters my adventures through Fedora designing, and to see what other interns have been up to throughout the course of the summer.

Tootaloo summer 2017 *insert a royalty wave here* 🙂


FESCo Elections: Interview with Till Maas (till)

Posted by Fedora Community Blog on August 14, 2017 01:07 PM
Fedora Engineering Steering Council badge, awarded after Fedora Elections - read the Interviews to learn more about candidates

This is a part of the FESCo Elections Interviews series. Voting is open to all Fedora contributors. The voting period starts on Tuesday, August 8th and closes promptly at 23:59:59 UTC on Monday, August 14th. Please read the responses from candidates and make your choices carefully. Feel free to ask questions to the candidates here (preferred) or elsewhere!

Interview with Till Maas (till)

  • Fedora Account: till
  • IRC: tyll (found in #fedora-releng #fedora #fedora-devel #fedora-admin #fedora-apps  #fedora-social #fedora-de  #epel )
  • Fedora User Wiki Page


What is your background in engineering?

Linux has been my favourite operating system since 1999, when I got my first PC as a pupil. I started with SuSE 6.0 back then, switched to Gentoo and tried Ubuntu. In 2005 I tried Fedora Core 4. Thanks to the welcoming Fedora community I quickly became a contributor, starting as a packager. Nowadays, I am a sponsor and provenpackager, help release engineering with cleanup tasks, and occasionally patch something in Fedora infrastructure. My Open Hub profile contains an overview of most of my FLOSS contributions in general: https://www.openhub.net/accounts/tillmaas

Formally I acquired the degree of a Diplom-Informatiker (Master of Science in Computer Science) at the RWTH Aachen University, Germany. In my dayjob I work as a penetration tester.

Why do you want to be a member of FESCo?

I would like to use my skills, knowledge and experience to help Fedora continue to excel as a great FLOSS project.

Describe some of the important technical issues you foresee affecting the Fedora community. What insight do you bring to these issues?

The modularity initiative and the introduction of flatpaks into Fedora introduce new challenges in ensuring that our users get timely security updates. As a penetration tester I have a strong security background and as a packager I know the struggles in preparing upstream releases as consumable Fedora packages.

What are three personal qualities that you feel would benefit FESCo if you are elected?

  • I like to learn new technologies and therefore become quickly familiar with them. This will help me to quickly understand change requests and their implications.
  • I have an eye for detail and often see or find connections, implications and issues that others miss. Therefore I will make well-founded decisions.
  • I am constantly trying to improve and therefore am open to change and see mistakes as an opportunity to learn. As a leading Linux distribution it is important for Fedora to introduce new technologies.

What is your strongest point as a candidate? What is your weakest point?

I am a long time Fedora contributor and contributed to several groups and projects in Fedora. Therefore I have a good insight into many details. Since I am contributing to Fedora in my free time, time might be an issue.

Currently, how do you contribute to Fedora? How does that contribution benefit the community?

I am a packager, help with release engineering and infrastructure projects. My focus is primarily on making Fedora more secure and making it easier to contribute to Fedora. On a non-technical level I represent Fedora as an Ambassador at conferences.

What objectives or goals should FESCo focus on to help keep Fedora on the cutting edge of open source development?

In my opinion it is important to make contributing to Fedora as easy as possible. The tools to contribute to Fedora should be as straightforward as possible. This is, for example, the reason I wrote fedora-easy-karma: it streamlined the process of submitting feedback about package updates. The less time we spend with our tools, the more time we have to focus on the quality of the products we deliver.

Do you believe the Modularity objective is important to Fedora’s success? Is there anything you wish to bring to the modularity efforts?

Yes, I believe the Modularity objective is a great framework for Fedora to try new paths and add more value to the individual products. For me it is important to keep security and usability in mind.

What is the air-speed velocity of an unladen swallow?

It depends on the bikeshed it is flying over – what color is it?

Closing words

Thank you for your time reading this. Please do not forget to vote!

The post FESCo Elections: Interview with Till Maas (till) appeared first on Fedora Community Blog.

Slice of Cake #18

Posted by Brian "bex" Exelbierd on August 14, 2017 10:00 AM

A slice of cake

In the last week as FCAIC I:

  • Did so much Flock work, with the fantastic help of Kristyna, Jen, Stephen and the entire team.
  • Docs work continues onward. We should begin the staging this week.

À la mode

  • Finally moved my homedir and parts of my setup to a new F26 laptop. So far my compose key is broken and my masochistic insistence on doing all setup via Ansible is slowing me down :).

Cake Around the World

I’ll be traveling some and hope you’ll ping me for coffee if you’re nearby.

  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

Benchmarking small file performance on distributed filesystems

Posted by Jonathan Dieter on August 14, 2017 07:41 AM

The actual benches

As I mentioned in my last post, I’ve spent the last couple of weeks doing benchmarks on the GlusterFS, CephFS and LizardFS distributed filesystems, focusing on small file performance. I also ran the same tests on NFSv4 to use as a baseline, since most Linux users looking at a distributed filesystem will be moving from NFS.

The benchmark I used was compilebench, which was designed to emulate real-life disk usage by creating a kernel tree, simulating a compile of the tree, reading all the files in the tree, and finally deleting the tree. I chose this benchmark because it does a lot of work with small files, very similar to what most file access looks like in our school. I did modify the benchmark to only do one read rather than the default of three to match the single creation, compilation simulation and deletion performed on each client.
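As a rough illustration of what this kind of workload measures, here is a minimal Python sketch (a simplified stand-in I'm using for this description, not compilebench itself; the function name and defaults are made up) that times the create, read and delete phases over a tree of small files:

```python
import os
import shutil
import tempfile
import time

def small_file_bench(n_files=200, size=4096, base=None):
    """Time create/read/delete phases over a tree of small files.

    A compilebench-style toy workload: point ``base`` at a mount of the
    filesystem under test (it defaults to the system temp directory).
    """
    root = tempfile.mkdtemp(prefix='sfbench-', dir=base)
    payload = os.urandom(size)
    results = {}

    # Create phase: many small files spread across a few directories.
    t0 = time.perf_counter()
    for i in range(n_files):
        d = os.path.join(root, 'dir%d' % (i % 10))
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, 'f%d' % i), 'wb') as f:
            f.write(payload)
    results['create_s'] = time.perf_counter() - t0

    # Read phase: walk the tree and read every file back.
    t0 = time.perf_counter()
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            with open(os.path.join(dirpath, name), 'rb') as f:
                f.read()
    results['read_s'] = time.perf_counter() - t0

    # Delete phase: remove the whole tree (metadata-heavy).
    t0 = time.perf_counter()
    shutil.rmtree(root)
    results['delete_s'] = time.perf_counter() - t0
    return results

if __name__ == '__main__':
    print(small_file_bench())
```

Pointing base at, say, a LizardFS mount versus an NFS mount gives a quick feel for the differences discussed below, though the numbers in the charts are the real compilebench results.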

The benchmarks were run on three i7 servers with 32GB of RAM, connected using a gigabit switch, running CentOS 7. GlusterFS is version 3.8.14, CephFS is version 10.2.9, and LizardFS is version 3.11.2. For GlusterFS, CephFS and LizardFS, the three servers operated as distributed data servers with three replicas per file. I first had one server connect to the distributed filesystem and run the benchmark, giving us the single-client performance. Then, to emulate 30 clients, each server made ten connections to the distributed filesystem and ten copies of the benchmark were run simultaneously on each server.

For the NFS server, I had to do things differently because there are apparently some major problems with connecting NFS clients to an NFS server on the same system. For this one, I set up a fourth server that operated only as an NFS server.

All of the data was stored on XFS partitions on SSDs for speed. After running the benchmarks with one distributed filesystem, it was shut down and its data deleted, so each distributed filesystem had the same disk space available to it.

The NFS server was set up to export its shares async (also for speed). The LizardFS clients used the recommended mount options, while the other clients just used the defaults (I couldn’t find any recommended mount options for GlusterFS or CephFS). CephFS was mounted using the kernel module rather than the FUSE filesystem.

So, first up, let’s look at single-client performance (click for the full-size chart):

Initial creation didn’t really have any surprises, though I was really impressed with CephFS’s performance. It came really close to matching the performance of the NFS server. Compile simulation also didn’t have many surprises, though CephFS seemed to start hitting performance problems here. LizardFS initially surprised me in the read benchmark, though I realized later that the LizardFS client will prioritize a local server if the requested data is on it. I have no idea why NFS was so slow, though. I was expecting NFS reads to be the fastest. LizardFS also did really well with deletions, which didn’t surprise me too much. LizardFS was designed to make metadata operations very fast. GlusterFS, which did well through the first three benchmarks, ran into trouble with deletions, taking almost ten times longer than LizardFS.

Next, let’s look at multiple-client performance. With these tests, I ran 30 clients simultaneously, and, for the first three tests, summed up their speeds to give me the total speed that the server was giving the clients. CephFS ran into problems during its test, claiming that it had run out of disk space, even though (at least as far as I could see) it was only using about a quarter of the space on the partition. I went ahead and included the numbers generated before the crash, but I would take them with a grain of salt.

Once again, initial creation didn’t have any major surprises, though NFS did really well, giving much better aggregate performance than it did in the earlier single-client test. LizardFS also bettered its single-client speed, while GlusterFS and CephFS both were slower creating files for 30 clients at the same time.

LizardFS started to do very well with the compile benchmark, with an aggregate speed over double that of the other filesystems. LizardFS flew with the read benchmark, though I suspect some of that is due to the client preferring the local data server. GlusterFS managed to beat NFS, while CephFS started running into major trouble.

The delete benchmark seemed to be a continuation of the single-client delete benchmark with LizardFS leading the way, NFS just under five times slower, and GlusterFS over 25 times slower. The CephFS benchmarks had all failed by this point, so there’s no data for it.

I would be happy to re-run these tests if someone has suggestions on optimizations especially for GlusterFS and CephFS.

Installing FreeIPA with an Active Directory subordinate CA

Posted by Fraser Tweedale on August 14, 2017 06:04 AM

FreeIPA is often installed in enterprise environments for managing Unix and Linux hosts and services. Most commonly, enterprises use Microsoft Active Directory for managing users, Windows workstations and Windows servers. Often, Active Directory is deployed with Active Directory Certificate Services (AD CS) which provides a CA and certificate management capabilities. Likewise, FreeIPA includes the Dogtag CA, and when deploying FreeIPA in an enterprise using AD CS, it is often desired to make the FreeIPA CA a subordinate CA of the AD CS CA.

In this blog post I’ll explain what is required to issue an AD sub-CA, and how to do it with FreeIPA, including a step-by-step guide to configuring AD CS.

AD CS certificate template overview

AD CS has a concept of certificate templates, which define the characteristics an issued certificate shall have. The same concept exists in Dogtag and FreeIPA except that in those projects we call them certificate profiles, and the mechanism to select which template/profile to use when issuing a certificate is different.

In AD CS, the template to use is indicated by an X.509 extension in the certificate signing request (CSR). The template specifier can be one of two extensions. The first, older extension allows you to specify a template by name:

CertificateTemplateName ::= SEQUENCE {
   Name            BMPString
}

(Note that some documents specify UTF8String instead of BMPString. BMPString works and is used in practice. I am not actually sure if UTF8String even works.)
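For a concrete feel for this structure, the name-based specifier is small enough to DER-encode by hand. The sketch below is a minimal illustration in plain Python (no ASN.1 library; the helper names are mine), building the SEQUENCE-of-BMPString shown above for the default SubCA template name:

```python
def der_tlv(tag, content):
    """Wrap content in a DER tag-length-value (short-form length only)."""
    assert len(content) < 128, 'short-form length only in this sketch'
    return bytes([tag, len(content)]) + content

def template_name_extension(name):
    # BMPString (tag 0x1E) carries the template name as UTF-16-BE...
    bmp = der_tlv(0x1E, name.encode('utf-16-be'))
    # ...wrapped in a SEQUENCE (tag 0x30), per CertificateTemplateName.
    return der_tlv(0x30, bmp)

print(template_name_extension('SubCA').hex())
# -> 300c1e0a00530075006200430041
```

In a real CSR this value would be carried as the extension's extnValue; tools like certutil or openssl take care of that wrapping for you.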

The second, Version 2 template specifier extension allows you to specify a template by OID and version:

CertificateTemplate ::= SEQUENCE {
    templateID              EncodedObjectID,
    templateMajorVersion    TemplateVersion,
    templateMinorVersion    TemplateVersion OPTIONAL
}

TemplateVersion ::= INTEGER (0..4294967295)

Note that some documents also show templateMajorVersion as optional, but it is actually required.

When submitting a CSR for signing, AD CS looks for these extensions in the request, and uses the extension data to select the template to use.

External CA installation in FreeIPA

FreeIPA supports installation with an externally signed CA certificate, via ipa-server-install --external-ca or (for existing CA-less installations ipa-ca-install --external-ca). The installation takes several steps. First, a key is generated and a CSR produced:

$ ipa-ca-install --external-ca

Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/8]: configuring certificate server instance
The next step is to get /root/ipa.csr signed by your CA and re-run /sbin/ipa-ca-install as:
/sbin/ipa-ca-install --external-cert-file=/path/to/signed_certificate --external-cert-file=/path/to/external_ca_certificate

The installation program exits while the administrator submits the CSR to the external CA. After they receive the signed CA certificate, the administrator resumes the installation, giving the installation program the CA certificate and a chain of one or more certificates up to the root CA:

$ ipa-ca-install --external-cert-file ca.crt --external-cert-file ipa.crt
Directory Manager (existing master) password: XXXXXXXX

Configuring certificate server (pki-tomcatd). Estimated time: 3 minutes
  [1/29]: configuring certificate server instance
  [29/29]: configuring certmonger renewal for lightweight CAs
Done configuring certificate server (pki-tomcatd).

Recall, however, that if the external CA is AD CS, a CSR must bear one of the certificate template specifier extensions. There is an additional installation program option to add the template specifier:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs

This adds a name-based template specifier to the CSR, with the name SubCA (this is the name of the default sub-CA template in AD CS).

Specifying an alternative AD CS template

Everything discussed so far is already part of FreeIPA. Until now, there is no way to specify a different template to use with AD CS.

I have been working on a feature that allows an alternative AD CS template to be specified. Both kinds of template specifier extension are supported, via the new --external-ca-profile installation program option:

$ ipa-ca-install --external-ca --external-ca-type=ms-cs \

(Note: huge OIDs like the above are commonly used by Active Directory for installation-specific objects.)

To specify a template by name, the --external-ca-profile value should be:


To specify a template by OID, the OID and major version must be given, and optionally the minor version too:


Like --external-ca and --external-ca-type, the new --external-ca-profile option is available with both ipa-server-install and ipa-ca-install.

With this feature, it is now possible to specify an alternative or custom certificate template when using AD CS to sign the FreeIPA CA certificate. The feature has not yet been merged, but there is an open pull request. I have also made a COPR build for anyone interested in testing the feature.

The remainder of this post is a short guide to configuring Active Directory Certificate Services, defining a custom CA profile, and submitting a CSR to issue a certificate.

Appendix A: installing and configuring AD CS

Assuming an existing installation of Active Directory, AD CS installation and configuration will take 10 to 15 minutes. Open Server Manager, invoke the Add Roles and Features Wizard and select the AD CS Certification Authority role:


Proceed, and wait for the installation to complete…


After installation has finished, you will see AD CS in the Server Manager sidebar, and upon selecting it you will see a notification that Configuration required for Active Directory Certificate Services.


Click More…, and up will come the All Servers Task Details dialog showing that the Post-deployment Configuration action is pending. Click the action to continue:


Now comes the AD CS Configuration assistant, which contains several steps. Proceed past the Specify credentials to configure role services step.

In the Select Role Services to configure step, select Certification Authority then continue:


In the Specify the setup type of the CA step, choose Enterprise CA then continue:


The Specify the type of the CA step lets you choose whether the AD CS CA will be a root CA or chained to an external CA (just like FreeIPA lets you create a root or subordinate CA!). Installing AD CS as a subordinate CA is outside the scope of this guide. Choose Root CA and continue:


The next step lets you Specify the type of the private key. You can use an existing private key or Create a new private key, then continue.

The Specify the cryptographic options step lets you specify the Key length and hash algorithm for the signature. Choose a key length of at least 2048 bits, and the SHA-256 digest:


Next, Specify the name of the CA. This sets the Subject Distinguished Name of the CA. Accept defaults and continue.

The next step is to Specify the validity period. CA certificates (especially root CAs) typically need a long validity period. Choose a value like 5 Years, then continue:


Accept defaults for the Specify the database locations step.

Finally, you will reach the Confirmation step, which summarises the chosen configuration options. Review the settings then Configure:


The configuration will take a few moments, then the Results will be displayed:


AD CS is now configured and you can begin issuing certificates.

Appendix B: creating a custom sub-CA certificate template

In this section we look at how to create a new certificate template for sub-CAs by duplicating an existing template, then modifying it.

To manage certificate templates, from Server Manager right-click the server and open the Certification Authority program:


In the sidebar tree view, right-click Certificate Templates then select Manage.


The Certificate Templates Console will open. The default profile for sub-CAs has the Template Display Name Subordinate Certification Authority. Right-click this template and choose Duplicate Template.


The new template is created and the Properties of New Template dialog appears, allowing the administrator to customise the template. You can set a new Template display name, Template name and so on:


You can also change various aspects of certificate issuance including which extensions will appear on the issued certificate, and the values of those extensions. In the following screenshot, we see a new Certificate Policies OID being defined for addition to certificates issued via this template:


Also under Extensions, you can discover the OID for this template by looking at the Certificate Template Information extension description.

Finally, having defined the new certificate template, we have to activate it for use with the AD CA. Back in the Certification Authority management window, right-click Certificate Templates and select Certificate Template to Issue:


This will pop up the Enable Certificate Templates dialog, containing a list of templates available for use with the CA. Select the new template and click OK. The new certificate template is now ready for use.

Appendix C: issuing a certificate

In this section we look at how to use AD CS to issue a certificate. It is assumed that the CSR to be signed exists and Active Directory can access it.

In the Certification Authority window, in the sidebar right-click the CA and select All Tasks >> Submit new request…:


This will bring up a file chooser dialog. Find the CSR and Open it:


Assuming all went well (including the CSR indicating a known certificate template), the certificate is immediately issued and the Save Certificate dialog appears, asking where to save the issued certificate.

radv on SI and CIK GPU - update

Posted by Dave Airlie on August 14, 2017 03:16 AM
I recently acquired an r7 360 (BONAIRE) and spent some time getting radv stable and passing the same set of conformance tests that VI and Polaris pass.

The main missing piece was 10-bit integer format clamping to work around a bug in the SI/CIK fragment shader output hardware, where it truncates instead of clamping. The other missing piece was code for handling f16->f32 conversions according to the Vulkan spec, which I'd previously fixed for VI.

I also looked at a trace from amdgpu-pro and noticed it was using ds_swizzle for the derivative calculations, which avoids accessing LDS memory. I wrote support to use this path in radv/radeonsi, since LLVM has supported the intrinsic for a while now.

With these fixed CIK is pretty much in the same place as VI/Polaris.

I then plugged in my SI (Tahiti) and got lots of GPU hangs and crashes. I fixed a number of SI-specific bugs (tiling and MSAA handling, stencil tiling). However, even with those fixed I was getting random hangs, and a bunch of people on a bugzilla had noticed the same thing. I eventually discovered that adding a shader pipeline and cache flush at the end of every command buffer fixed the hangs (this took a few days to narrow down exactly). We aren't 100% sure why this is required on SI only; it may be a kernel bug or a command processor bug, but it does mean radv on SI can now run games without hanging.

There are still a few CTS tests outstanding on SI only, and I'll probably get to them eventually, however I also got an RX Vega and once I get a newer BIOS for it from AMD I shall be spending some time fixing the radv support for it.