Fedora People

Dear Lazyweb: No video for VLC on Fedora 26?

Posted by Jeroen van Meeuwen on May 28, 2017 11:35 AM
Dear Lazyweb, even though Fedora 26 is not yet released, I’ve upgraded — now, VideoLAN isn’t displaying video any longer. If I start it with --no-embedded-video I do get video, but it doesn’t seem to be able to run in or switch to/from fullscreen. Resetting my preferences has not helped so far. I would appreciate to… Continue reading Dear Lazyweb: No video for VLC on Fedora 26?

Updated Fedora Lives Available (4.10.16-200) Memorial Weekend Run

Posted by Corey 'Linuxmodder' Sheldon on May 28, 2017 10:13 AM


We in the Respins SIG are pleased to announce the latest series of updated live respins carrying the 4.10.16-200 kernel. These respins use the livemedia-creator tool packaged in the default Fedora repo, following the guide here as well as the scripts located here.

As always, they are available @ http://tinyurl.com/live-respins2

For those needing a non-shortened URL, it expands to https://dl.fedoraproject.org/pub/alt/live-respins/

This round will be noticeably missing its usual GPG-clearsigned CHECKSUM|HASHSUM files hosted on https://community.ameridea.net due to a key cycling operation. This post will be updated with the new KeyID|Fingerprint next week; however, the next run will be the first with that key in play.


Filed under: Community, F25, F25 Torrents, Fedora, PSAs, Volunteer Tagged: Community, Fedora, Open Source, Torrents

A year of no more chasing - 2017

Posted by Sarup Banskota on May 28, 2017 04:03 AM

From 2012 until 2016 I had a fantastic 4 years - I managed to level up skills, personal relationships, professional relationships, income, travel exposure - all sorts of things.

However, 2017 has been a year of the chase, and I’ve found myself dissatisfied so far. 3 more days for half the year to end and I don’t feel like I’ve accomplished much. Personal projects, finances, workplace, relationships, travel - none of them have quite had the bang of the previous 4 years.

Better late than never, so I’ve decided to take charge and ensure I make the remaining 6 months worthwhile. Here are some of the things I’ve been chasing in 2017:

  • Social: I’ve been trying to make friends. Most of my interaction happens online on Facebook or Tinder, and I don’t quite meet a lot of people in person
  • Freelance jobs: I’m beginning to feel like I need to try more things and not just focus on my $dayjob. Working on the same problem for an extended period can feel demotivating, and I feel siloed from the rest of the tech world. However, I don’t hate anything as much as I do wading through job postings and similar crap and writing in
  • Personal projects: Probably just to look cool, I’ve started several personal projects with a clear focus on making money. I start excitedly, but as it turns out making money out of something is hard, and then I lose focus on the fun behind solving the problem itself
  • OSS projects: I’ve been messaging some of my OSS heroes telling them I’m gonna free up time to contribute, and often they even find things for me, but I just never get to it
  • Design work: I’d like to make artsy and pretty things, but apart from the two days of sketching I did enthusiastically, I couldn’t get much further
  • Sports and Fitness: I had a fantastic February maintaining my diet plan and even wrote an article about willpower inspired by that. Yet again, in March, there was a crazy outburst of work, and I couldn’t keep up

That’s a lot of things and it can be difficult to remember all of them, so I’ve decided that I’ll keep the following guidelines in mind for the rest of the year:

  • Avoid virtual talk. Either there’s time alone for hobbies, or there’s time spent with a real person talking or doing things. I’ve already let go of my Facebook account and my Tinder
  • No more wading through job postings for work. Over time I’ve realised that there is a basic amount of money I need for my lifestyle expenses (private condo, uber, nice food, monthly flights, sketching courses or similar), and quite frankly, that’s not a lot. Beyond this, a few thousand extra dollars a month isn’t going to buy me a private jet, so might as well spend the time doing something that I enjoy. I enjoy the company of smart makers because to me they’re cool, so I’m going to try and be more involved with OSS projects
  • Same with design work. At my dayjob, doing design work isn’t feasible right now. Outside, finding the kind of design work I want to do involves wading through crap. Therefore, once again, I’m going to spend time sketching, and on https://99designs.com/ without any expectations of winning
  • I’m going to contribute $10 to a fund for every day I don’t do any form of workout. I’ll use that as a scholarship for students later on. Very often, I pick the unhealthy option because it’s cheaper, so I’ll contribute $20 for days I skip > 1 meal or consume 2 unhealthy meals

So, to summarise, my guidelines for the rest of 2017:

  • Avoid virtual talk
  • Try and be more involved with OSS projects
  • Spend time sketching, and on https://99designs.com/ without any expectations of winning
  • Contribute $10 to a fund for every day I don’t do any form of workout
  • Contribute $20 for days I skip > 1 meal or consume 2 unhealthy meals

Learn Python & Selenium Automation in 8 weeks

Posted by Alexander Todorov on May 26, 2017 09:36 PM

A couple of months ago I conducted a practical, instructor-led training in Python and Selenium automation for manual testers. You can find the materials at GitHub.

The training consists of several basic modules and practical homework assignments. The modules cover:

  1. The basic structure of a Python program and functions
  2. Commonly used data types
  3. If statements and (for) loops
  4. Classes and objects
  5. The Python unit testing framework and its assertions
  6. High-level introduction to Selenium with Python
  7. High-level introduction to the Page Objects design pattern
  8. Writing automated tests for real world scenarios without any help from the instructor.

Every module is intended to be taken in the course of 1 week and begins with links to preparatory materials and lots of reading. Then I help the students understand the basics and explain with more examples, often writing code as we go along. At the end there is the homework assignment for which I expect a solution presented by the end of the week so I can comment and code-review it.

All assignments which require the student to implement functionality, not tests, are paired with a test suite, which the student should use to validate their solution.
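
As an illustration of how that pairing works (the function and test names here are hypothetical, not taken from the actual course materials), an assignment might ship like this:

```python
import unittest

# Hypothetical assignment: the student implements is_palindrome() so that
# the paired test suite below passes.
def is_palindrome(word):
    word = word.lower()
    return word == word[::-1]

# The paired test suite the student runs to validate the solution.
class TestIsPalindrome(unittest.TestCase):
    def test_palindrome(self):
        self.assertTrue(is_palindrome("Level"))

    def test_not_a_palindrome(self):
        self.assertFalse(is_palindrome("python"))

if __name__ == "__main__":
    unittest.main(exit=False)
```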

What worked well

Despite everything I've written below, I had 2 students (from a group of 8) who showed very good progress. One of them was the absolute star, taking active part in every class and doing almost all homework assignments on time, pretty much without errors. I think she'd had some previous training or experience though. She was in the USA; her training was done remotely via Google Hangouts.

The other student was in Sofia; his training was done in person. He is not on the same level as the US student but is the best from the Bulgarian team. IMO he lacks a little bit of motivation. He "cheated" a bit on some tasks by providing non-standard, easier solutions, and completed most of his assignments. After the first Selenium session he started creating small scripts to extract results from football sites, or as helpers for his daily job. The interesting thing for me was that he created his programs as unittest.TestCase classes. I guess that was the only way he knew how to run them!?

There were a few other students who had some prior experience with programming but weren't very active in class, so I can't tell how their careers will progress. If they put some more effort into it, I'm sure they can develop decent programming skills.

What didn't work well

From the very beginning, most students failed to read the preparatory materials. Some of them read a little; others didn't read at all. When they did come prepared, I felt the sessions progressed more smoothly. I also had students joining late in the process who, for the most part, didn't participate in the training at all. I'd like to avoid that in the future if possible.

Sometimes students complained about a lack of example code, although Dive into Python includes tons of examples. I resorted to sending them the example.py files I produced during class.

The practical part of the training was mostly me programming on a big TV screen in front of everyone else. Several times one of the students took my place. There wasn't much active participation on their part, and unfortunately they didn't want to bring personal laptops to the training (or maybe weren't allowed)! We did have a company-provided laptop though.

When practicing functions and arithmetic operations, the students struggled with basic maths like breaking down a number into its digits or vice versa, working with Fibonacci sequences and the like. In some cases they cheated by converting to/from strings and iterating over them. Some also hard-coded the first few numbers of the Fibonacci sequence and returned them directly. Maybe an in-place explanation of the underlying maths would have been helpful, but honestly I was surprised by this. Somebody please explain or give me some advice here!
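
For reference, the arithmetic-only solutions I had in mind for those exercises look something like this (function names are mine, not from the course materials):

```python
def digits(n):
    """Break a non-negative integer into its digits using division
    and remainder, without converting to a string."""
    if n == 0:
        return [0]
    result = []
    while n > 0:
        result.append(n % 10)   # peel off the last digit
        n //= 10                # drop it
    result.reverse()
    return result

def fibonacci(count):
    """First `count` Fibonacci numbers, computed rather than hard-coded."""
    numbers = []
    a, b = 0, 1
    for _ in range(count):
        numbers.append(a)
        a, b = b, a + b
    return numbers
```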

My materials are completely missing examples of the datetime and timedelta classes, which turned out to be very handy in the practical Selenium tasks, so we had to go over them on the fly.

The OOP assignments went mostly undone, not to mention one of them had bonus tasks which are easily solved using recursion. I think we could skip some of the OOP practice (not sure how safe that is) because I really need classes only for constructing the tests and we don't do anything fancy there.

The Page Object design pattern is also OOP-based, and I think that went somewhat well, given that we are only passing values around and performing some actions. I didn't put constraints on, nor provide guidance about, what the classes should look like and which methods go where. Maybe I should have made it easier.

Anyway, given that Page Objects is being replaced by the Screenplay pattern, I think we can safely stick to all-in-one function-based Selenium tests, perhaps with helper functions for repeated tasks (like login). Indeed this is what I was using last year with RSpec & Capybara!

What students didn't understand

Right until the end I had people who had trouble understanding function signatures, function objects, and calling/executing a function; also returning a value from a function vs. printing the (same) value on screen or assigning it to a global variable (e.g. FIB_NUMBERS).
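
The return-vs-print distinction can be shown in a few lines (FIB_NUMBERS here just mirrors the global variable mentioned above):

```python
FIB_NUMBERS = [0, 1, 1, 2, 3]

def print_sum(numbers):
    # Prints a value on screen; the caller gets None back.
    print(sum(numbers))

def return_sum(numbers):
    # Returns a value the caller can use; prints nothing.
    return sum(numbers)

result = print_sum(FIB_NUMBERS)   # shows 7 on screen, result is None
total = return_sum(FIB_NUMBERS)   # shows nothing, total is 7
```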

In the same category falls using method parameters vs. using global variables (which happened to have the same value), using the parameters as arguments to another function inside the body of the current function, using class attributes (e.g. self.name) to store and pass values around vs. local variables in methods vs. method parameters which have the same names.

I think there was some confusion about lists, dictionaries and tuples but we did practice mostly with list structures so I don't have enough information.

I have the impression that object-oriented programming (classes and instances; we didn't go into inheritance) is generally confusing to beginners with zero programming experience. The classical way to explain it is by using abstractions like animal -> dog -> a particular dog breed -> a particular pet. OOP was explained to me in a similar way back in school, so these kinds of abstractions are very natural to me. I have no idea if my explanation sucks or if students have a hard time wrapping their heads around the abstraction. I'd love to hear some feedback from other instructors on this one.

I think there is some misunderstanding between a class (a definition of behavior) and an instance/object of that class (something which exists in memory). This may also explain the difficulty remembering or figuring out what self points to and why we need to use it inside method bodies.

For unittest.TestCase we didn't do a lot of practice, which is my fault. The homework assignments ask the students to go back to solutions of previous modules and implement more tests for them. Next time I should provide a module (possibly with non-obvious bugs) and ask them to write a comprehensive test suite for it.

Because of the missing practice there was some confusion/misunderstanding about the setUpClass/tearDownClass and the setUp/tearDown methods. Add to the mix that the former are @classmethods while the latter are not. "To be safe," students always defined both as class methods!

I have since corrected the training materials but we didn't have good examples (nor practiced) explaining the difference between setUpClass (executed once aka before suite) and setUp (possibly executed multiple times aka before test method).

On the Selenium side I think it is mostly practice which students lack, not understanding. The entire Selenium framework (any web test framework for that matter) boils down to

  • Load a page
  • Find element(s)
  • Click on or hover over an element (hovering was tricky)
  • Get an element's attribute value or text
  • Wait for the proper page to load (or, worst case, for AJAX calls to finish)
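
Put together, a typical helper exercising those steps might look like this sketch (the URL and locators are invented for illustration; driver is any Selenium WebDriver instance, e.g. webdriver.Firefox()):

```python
# Sketch of the five primitives above rolled into one helper.
def first_result_text(driver, query):
    # 1. Load a page (URL is hypothetical)
    driver.get("http://example.com/search")
    # 2. Find an element
    box = driver.find_element_by_name("q")
    # 3. Interact with it (type/click/hover)
    box.send_keys(query)
    box.submit()
    # 5. In real tests, wrap the next lookup in a WebDriverWait so the
    #    results page (or its AJAX calls) has finished loading first.
    first = driver.find_element_by_css_selector(".result a")
    # 4. Get the element's attribute value or text
    return first.text
```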

IMO finding the correct element on the page is on par with waiting (which also relies on locating elements) and took 80% of the time we spent working with Selenium.

Thanks for reading and don't forget to comment and give me your feedback!

Image source: https://www.udemy.com/selenium-webdriver-with-python/

Measuring things only makes sense if you know what you’re measuring, why, and what you intend to do…

Posted by Suzanne Hillman (Outreachy) on May 26, 2017 08:50 PM

Measuring things only makes sense if you know what you’re measuring, why, and what you intend to do with it. Otherwise, even if you _have_ numbers, they don’t actually have any meaning. So what’s the point?

The story of tunables

Posted by Siddhesh Poyarekar on May 26, 2017 03:33 PM
This is long overdue and I have finally got around to writing this. Apologies to everyone who asked me to write about it and I responded with "Oh yeah, right away!" If you are not interested in the story bits, start with So what are tunables anyway below.

The story of tunables began in 2013 when I was a relatively fresh glibc engineer in the Red Hat toolchain team. We wanted to add an environment variable to allow users to set the default stack sizes for thread stacks and Carlos took that idea to the next level with the question: How do we make this more extensible so that we have full control over the kind of tuning parameters we accept in glibc but at the same time, allow distributions to add their own tuning parameters without affecting upstream code? He asked this question in the 2013 Cauldron in Mountain View, where the famous glibc BoF happened in a tiny meeting room which overflowed into an adjacent room, which also filled up quickly, and then the BoF overran its 45 minute slot by roughly a couple of hours! Carlos joined the BoF over Hangout (I think it was called Google Talk then) because he couldn’t make it and we had a lengthy back and forth about the pros and cons of having such tuning parameters. In principle, everybody agreed that such a thing would be desirable from a maintenance perspective. However the approach for doing it was something nobody seemed to agree on.

Thus the idea of tunables was born 4 years ago, except that Carlos wrote the first wiki page and called it ‘tunnables’. He consistently spelled it tunnables and I tunables. I won in the end because I wrote the patches ;)

Jokes aside, we were happy about the reception of the idea and went about documenting it at length. However, being a two-man army manning the glibc bunkers at Red Hat, with upstream still reviving itself in the post-Uli era, meant that we wouldn't come back to it for a while.

Then 2015 happened and it came with a memorable Cauldron in Prague. It was memorable because by then I had come up with a first draft of an API for the tunables framework. It was also memorable because it was my last month at Red Hat, something I never imagined would ever happen. I was leaving my dream team and I wasn’t sure if I would ever be as happy again. Those uncertainties were unfounded as I know now, but that’s a story for another post.

The struggle to write code

The first draft I presented at Cauldron in 2015 was really just a naive attempt at storing and initializing public values accessed across libraries in glibc; we had not even thought through everything we would end up fixing with tunables. It kinda worked, but it was never going to make the cut. A new employer meant that tunables became a weekend project, and as a result it missed the release deadline. And another, and then another. Towards the close of every release I would whip out a patchset that would get holes poked into it, and then the change would be considered too risky to include.

Finally we set a deadline of 2.25 for tunables because by then quite a few devs had started maintaining their own list of tunables on top of my tree, frustratingly rebasing every time I completely changed my approach. We made it in the end, with Florian and I working through the year end holidays to get the whole patchset in before freeze.

So as of 2.25, tunables is firmly entrenched in glibc and, as we speak, there are more tunables to come, especially to override IFUNC selections and to tune the processor capability mask.

So what are tunables anyway?

This is where you start if you want the technical description and are not interested in the story bits.

Tunables is an internal implementation detail of glibc. It is a framework to manage the ways in which we allow glibc's behaviour to be modified. Until now the only way to tune glibc was via environment variables, and the code to read them was strewn all over the source. Tunables provide one place to declare a tunable parameter with all of its characteristics, and the framework handles everything from there. The user of that tunable (e.g. malloc for MALLOC_MMAP_THRESHOLD_, or malloc.mmap.threshold in tunables parlance) then simply accesses the tunable from the list and does what it wants, without worrying about where the value came from.

The framework is implemented in elf/dl-tunables.c and all of the supporting code is named elf/dl-tunable*. As is evident, tunables is linked into the dynamic linker, where it is initialized very early. In static binaries, the initialization is done in libc-start.c, again early enough to influence almost everything in the program. The list is initialized just once and is modifiable only in the dynamic linker before it relocates itself.

The main list of tunables is maintained in elf/dl-tunables.list. Architectures may define their own tunables in sysdeps/…/dl-tunables.list. There is a README.tunables that lists out the gory details of using tunables within glibc to access their values and, if necessary, update them.

This gives us a number of advantages, some of them being the following:

Single Initialization

All environment variables used by glibc are read in by a single double-nested loop which initializes all tunables. Accesses are then just a GOT away, so no more getenv loops in glibc code. This is not fully achieved yet, since not all of the environment variables have been ported to tunables (hint: here’s a nice project for you, aspiring glibc developer!)

All tunables are listed in a single file

The file elf/dl-tunables.list has a full list of tunables along with their properties, such as type, value range, default value and behaviour with setuid binaries. This forced us to take a close look at each environment variable we ported to tunables, and we ended up fixing a few bugs as well.

Very Early Initialization

Yes, very early, earlier than you would imagine, earlier than IFUNCs! *gasp*

Tunables get initialized very early so that they can influence almost every behaviour in glibc. The unreleased 2.26 makes this even earlier (or rather, delays CPU features initialization enough) so that tunables can impact selection of routines using IFUNCs. This fixes an important inconsistency in glibc, where LD_HWCAP_MASK was read in dynamically linked binaries but not in static binaries because it was not read in early enough.


The tunable list is read-only, so glibc reads from a list that cannot be tampered with by malicious code loaded after relocation.

What changes for me as a user?

The change in 2.25 is minimal enough that you won’t notice. In this release, only the malloc tuning environment variables have been ported to tunables and if you’ve been using those environment variables before, they will continue to work even now. In addition, you get to tune these parameters in a fancy way that doesn’t require the stupid trailing underscore, using the GLIBC_TUNABLES environment variable. The manual describes it extensively so I won’t go into details.
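
As a quick illustration, here is how the old and new spellings compare. The threshold value is arbitrary, /bin/echo stands in for your own program, and the tunable names follow the spelling in the current glibc manual (glibc.malloc.*):

```shell
# Old style: one environment variable per parameter, trailing underscore included
MALLOC_MMAP_THRESHOLD_=131072 /bin/echo "old style"

# New style: a single GLIBC_TUNABLES variable; multiple tunables are colon-separated
GLIBC_TUNABLES=glibc.malloc.mmap_threshold=131072:glibc.malloc.perturb=32 /bin/echo "new style"
```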

The major change is about to happen now. Intel is starting to push a number of tunables to allow you to tune your library to your liking, changing things like the string routines that get selected for your program, cache parameters, etc. I believe PowerPC and S390 will see something similar too in the lock elision space, and aarch64 multiarch will be tunable as well. All of this will hopefully come in 2.26, or at the latest by 2.27.

One thing to note though is that for now tunables are not covered by any ABI or API guarantees. That is to say, if you like a tunable that is in 2.26, we may well remove the tunable in 2.27 if we find that it either does not make sense to have that tunable exposed or exposing that tunable is somehow detrimental to user programs.

The big difference will likely come when distributions start adding their own tunables into the mix, since it will allow them to add customizations to the library without having to maintain huge, ugly patchsets.

The Road Ahead

The big advantage of collecting all tuning parameters under a single framework is the ability to then add new ways to influence those tuning parameters. We have environment variables now, but we could add other methods to tune the library. Some ideas discussed are as follows:

  • Have a systemwide configuration file (e.g. /etc/sysctl.user.conf) that sets different defaults for some tunables and limits the degree to which specific tunables can be altered. This allows system administrators to have more fine-grained control over the processes on their systems
  • Have user-specific configuration files (e.g. $HOME/.sysctl.user.conf) that do something similar, but at a user level
  • Have some tunables modified during execution via some shared memory mechanism

All of this is still evolving, so if you have an idea or would like to work on any of these ideas, feel free to get in touch with me and we can find a way to get you contributing to one of the most critical parts of the operating system!

Merging Kubernetes client configs at run time

Posted by Adam Young on May 26, 2017 03:20 PM

Last time I walked through the process of merging two sets of Kubernetes client configurations into one. For more ephemeral data, you might not want to munge it all into your main configuration. The KUBECONFIG environment variable lets you specify multiple configuration files and merge them into a single set of configuration data.


kubectl config --help

If $KUBECONFIG environment variable is set, then it is used [as] a list of paths (normal path delimiting rules for your system). These paths are merged. When a value is modified, it is modified in the file that defines the stanza. When a value is created, it is created in the first file that exists. If no files in the chain exist, then it creates the last file in the list.


So, let's start with the file downloaded by the kubevirt build system yesterday.


[ayoung@ayoung541 vagrant]$ echo $PWD
[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 

Contrast this with what a get without the environment variable set, if I use the configuration in ~/.kube, which I synced over from my OpenShift cluster:

[ayoung@ayoung541 vagrant]$ unset KUBECONFIG
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
 default/munchlax:8443/ayoung munchlax:8443 ayoung/munchlax:8443 default
* default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
 kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system

I want to create a new configuration for the vagrant-managed machines for KubeVirt.  It turns out that the API server specified there is actually a proxy, a short-term shim we put in place as we anxiously await the Amalgamated API Server of 1.7.  However, sometimes this proxy is broken or we just need to bypass it.  The only difference between this setup and the proxied setup is the server URL.

So…I create a new file, based on the .kubeconfig file, but munged slightly.  Here is the diff:

[ayoung@ayoung541 vagrant]$ diff -Nurd .kubeconfig .kubeconfig-core 
--- .kubeconfig 2017-05-24 19:49:24.643158731 -0400
+++ .kubeconfig-core 2017-05-26 11:10:49.359955538 -0400
@@ -3,13 +3,13 @@
 - cluster:
- name: kubernetes
+ name: core
 - context:
- cluster: kubernetes
+ cluster: core
 user: kubernetes-admin
- name: kubernetes-admin@kubernetes
-current-context: kubernetes-admin@kubernetes
+ name: kubernetes-admin@core
+current-context: kubernetes-admin@core
 kind: Config
 preferences: {}

Now I have a couple choices. I can just specify this second config file on the command line:

[ayoung@ayoung541 vagrant]$ kubectl --kubeconfig=$PWD/.kubeconfig-core config get-contexts
 kubernetes-admin@core core kubernetes-admin

Or I can munge the two together and provide a flag which states which context to use.

[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig:$PWD/.kubeconfig-core
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 kubernetes-admin@core core kubernetes-admin

Note that this gives a different current context (marked with the asterisk) than if I reverse the order of the files in the env var:

[ayoung@ayoung541 vagrant]$ export KUBECONFIG=$PWD/.kubeconfig-core:$PWD/.kubeconfig
[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
* kubernetes-admin@core core kubernetes-admin 
 kubernetes-admin@kubernetes kubernetes kubernetes-admin

Whichever file declares a current context first wins.

However, regardless of the order, I can explicitly set the context I want to use on the command line:

[ayoung@ayoung541 vagrant]$ kubectl config get-contexts
 kubernetes-admin@core core kubernetes-admin 
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
[ayoung@ayoung541 vagrant]$ kubectl --context=kubernetes-admin@core config get-contexts
* kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 kubernetes-admin@core core kubernetes-admin

Again, notice the line where the asterisk specifies which context is in use.

With only two files it might be easier to just specify the --kubeconfig option, but as the number of configs you work with grows, you might find you want to share the user data between two of them, or have a bunch of scripts that work across them, and it is easier to track which context to use than to track which file contains which set of data.

Secure your webserver with improved Certbot

Posted by Fedora Magazine on May 26, 2017 08:00 AM

A year and a half ago the Let’s Encrypt project entered public beta. Just over a year ago, as the project left beta, the letsencrypt client was spun out of ISRG, which continues to maintain the Let’s Encrypt servers, into an EFF project and renamed certbot. The mission remained the same, however: to provide quick, simple access to free domain validated certificates, in order to encrypt the internet.

This week marked a significant point in the development of Certbot as the recommended Let’s Encrypt client, with the 0.14 release of the tool.

When the letsencrypt client was first released, it only supported using the webroot of an existing HTTP server, a standalone mode where letsencrypt temporarily listens on port 80 to carry out the challenge, or a manual method where the admin puts the presented challenge into place before the ACME server proceeds to verify it. Now the letsencrypt client is even more functional.

Apache HTTPD plugin for Certbot

When the client was changed to be an EFF project, one of the first major features that appeared was the Apache HTTPD plugin. This plugin lets the Certbot application automatically configure the webserver to use certificates for one or more virtual hosts.

NOTE: If you encounter an issue with SELinux in enforcing mode while using the plugin, use the setenforce 0 command to switch to permissive mode when running the certbot --apache command. Afterward, switch back to enforcing mode using setenforce 1. This issue will be resolved in a future update.

When you start the Apache httpd server with mod_ssl, the service automatically generates a self signed certificate.


Default mod_ssl self signed certificate not trusted by the browser.

Next, run this command:

certbot --apache

Certbot prompts for a few questions. You can also run it non-interactively and provide all the arguments in advance.
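
For instance, a fully non-interactive run might look like this (the domain and email address are placeholders; the flags are standard Certbot command-line options):

```shell
certbot --apache --non-interactive --agree-tos \
    --email admin@example.com --domains www.example.com
```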

Questions at the terminal

After a few moments, the Apache server has a valid certificate in place.

Valid SSL certificate in place

Nginx plugin for Certbot

From my testing, the nginx plugin requires the domain name to be present in the configuration, whereas the httpd plugin modifies the default SSL virtual host.

The process is similar to the httpd plugin. Answer a few questions, if you do not provide arguments on the command line, and the instance is then protected with a valid SSL certificate.

Python 3 compatibility

The Certbot developers have put in a significant amount of work over the past several months to make Certbot fully compatible with Python 3. At the 0.12 release, the unit tests we carried out when building the RPMs passed. However, the developers were not yet happy to declare it ready, since they noticed some edge-case failures in real-world testing. As of the 0.14 release, the developers have declared Certbot Python 3 compatible. This change brings it in line with the default, preferred Python version in Fedora.

To minimize possible issues, Rawhide and the upcoming Fedora 26 will be switched over to the Python 3 build of certbot first, whilst Fedora 25 remains on the Python 2 build as the default.

Getting hooked on renewals

A recent update added a systemd timer to automate renewal of the certificates. The timer checks each day to see if any certificates need updating. To enable it, use this command:

systemctl enable --now certbot-renew.timer

The configuration in /etc/sysconfig/certbot can change the behavior of the renewals. It includes options for hooks that run before and after the renewal, and another hook that runs for each certificate processed. These are global behaviors. Optionally, you can configure hooks in the configuration files in /etc/letsencrypt/renewal on a per-certificate basis.
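
As a sketch, the hook settings in /etc/sysconfig/certbot take the shape below; the hook commands themselves are placeholders, so check the comments shipped in the file on your Fedora release for the exact variable names and syntax:

```shell
# /etc/sysconfig/certbot (excerpt; hook commands are examples only)
# Run before any renewal attempt, e.g. to stop a service holding port 80
PRE_HOOK="--pre-hook 'systemctl stop httpd'"
# Run after all renewals are done
POST_HOOK="--post-hook 'systemctl start httpd'"
# Run once for each certificate that was actually renewed
RENEW_HOOK="--renew-hook 'systemctl reload postfix'"
```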

Some form of automation is advised, whether the systemd timer or another method, to ensure that certificates are refreshed periodically and don’t expire by accident.

Testing SSL security

A test of SSL security with CentOS 7 and the Apache plugin gave a C rating. The nginx plugin resulted in a B rating.

Of course, the Red Hat defaults lean towards compatibility. If there’s no need to support older clients, you can tighten up the list of permitted ciphers.

Using this configuration on Fedora 25 on my own blog gets an A+ rating:

SSLProtocol all -SSLv2 -SSLv3
SSLCertificateFile /etc/pki/tls/certs/www-hogarthuk.com-ssl-bundle.crt
SSLCertificateKeyFile /etc/pki/tls/private/www-hogarthuk.com-decrypted.key

<IfModule mod_headers.c>
      Header always set Strict-Transport-Security "max-age=15768000; includeSubDomains; preload"
</IfModule>

What’s next?

There are always bugs to fix and improvements to make. Apart from improvements to SELinux compatibility as mentioned above, there’s also a future to look forward to. DNS based validation will make it easier to take Certbot beyond web servers. Mail, jabber, load balancers and other services can then more easily use Let’s Encrypt certificates using the Certbot client.

PHP version 7.0.20RC1 and 7.1.6RC1

Posted by Remi Collet on May 26, 2017 05:49 AM

Release Candidate versions are available in the remi-test repository for Fedora and Enterprise Linux (RHEL / CentOS) to allow more people to test them. They are available as Software Collections for parallel installation, a perfect solution for such tests (x86_64 only), and also as base packages.

RPMs of PHP version 7.0.20RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 25 or the remi-php70-test repository for Fedora 23-24 and Enterprise Linux.

RPMs of PHP version 7.1.6RC1 are available as an SCL in the remi-test repository, and as base packages in the remi-test repository for Fedora 26 or the remi-php71-test repository for Fedora 23-25 and Enterprise Linux.

PHP version 5.6 is now in security-only mode, so no more RCs will be released.

Installation: read the Repository configuration and choose your version.

Parallel installation of version 7.0 as Software Collection:

yum --enablerepo=remi-test install php70

Parallel installation of version 7.1 as Software Collection:

yum --enablerepo=remi-test install php71

Update of system version 7.0:

yum --enablerepo=remi-php70,remi-php70-test update php\*

Update of system version 7.1:

yum --enablerepo=remi-php71,remi-php71-test update php\*

Notice: version 7.1.6RC1 is also available in Fedora rawhide (for QA).

Note: the RC version is usually the same as the final version (no changes are accepted after the RC, except for security fixes).

Software Collections (php70, php71)

Base packages (php)

A competition-grade fire panda

Posted by Casper on May 25, 2017 08:37 PM

Among Fedorans, we like to compare our customized Firefox configurations. The prize to be won is having a config recognized as "Parano" (paranoid), a supreme title that guarantees browsing as smooth as in the 90s. Originally, I had not planned to publish the description of my config; I wrote it as a backup so I could reproduce it on any machine placed in my hands (and since I have 3-4 machines, it serves me quite well...). I think this config could be useful to you (at least some parts of it); it was designed exclusively to protect whoever uses it.

Search engines

  • fossencdi.org http://searx.cwuzdtzlubq5uual.onion/search?q= (auto-added)
  • searx.nulltime.net http://searx7hcqiogbrhk.onion/search?q=
  • 4ray.co https://searx.4ray.co/search?q= (auto-added)
  • gibberfish http://o2jdk5mdsijm2b7l.onion/search?q= (auto-added)
  • s3arch.eu http://eb6w5ctgodhchf3p.onion/search?q=
  • searx.gotrust.de http://nxhhwbbxc4khvvlw.onion/search?q=

Unset display search suggestions

General config


When Firefox starts: show the last tabs and windows used


Open new tab instead of new windows

Flash plugin


Advanced config


  • Custom automatic cache management
  • Limit cache size to 1024 MiB

Network settings

  • Manual proxy configuration
  • SOCKS host: port 9050
  • SOCKSv5
  • Exceptions for localhost
  • Use remote DNS when SOCKSv5 is enabled
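The proxy part of this setup can also be expressed in a user.js file. This is a minimal sketch using the standard Firefox preference names; the 127.0.0.1 host is an assumption, matching the local Tor client mentioned at the end of the post.

```javascript
// user.js sketch: route traffic through a local Tor SOCKS proxy
user_pref("network.proxy.type", 1);                 // manual proxy configuration
user_pref("network.proxy.socks", "127.0.0.1");      // assumed local Tor client
user_pref("network.proxy.socks_port", 9050);
user_pref("network.proxy.socks_version", 5);        // SOCKSv5
user_pref("network.proxy.socks_remote_dns", true);  // resolve DNS through the proxy
user_pref("network.proxy.no_proxies_on", "localhost");
```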

User certificates

  • falcon if available
  • blackbird if falcon does not exist


  • Adblock Plus
  • Cookie Manager avancée
  • Cookies Manager+
  • Disconnect
  • Download Youtube Videos as MP4
  • humanstxt (gone)
  • NoScript
  • Toggle Proxy
  • User-Agent Switcher revived
  • Video DownloadHelper
  • Youtube and more - Easy Video Downloader
  • YouTube HTML5-Video


A Tor router configured in client mode (the default out of the box) is required on the machine.

Glad to be a Mentor of Google Summer Code again!

Posted by Tong Hui on May 25, 2017 04:44 PM
This year I will be mentoring for the Fedora Project, helping Mandy Wang finish her GSoC project "Migrate Plinth to Fedora Server", an idea I proposed. So why did I propose this idea? Plinth is developed by FreedomBox, which is a Debian-based project. FreedomBox aims to build a 100% free software self-hosting web server to … Continue reading "Glad to be a Mentor of Google Summer Code again!"

Merging two Kubernetes client configurations

Posted by Adam Young on May 25, 2017 03:22 PM

I have two distinct Kubernetes clusters I work with on a daily basis. One is a local Vagrant-based set of VMs built by the KubeVirt code base. The other is a "baremetal" install of OpenShift Origin on a pair of Fedora workstations in my office. I want to be able to switch back and forth between them.

When you run the kubectl command without specifying where the application should look for the configuration file, it defaults to looking in $HOME/.kube/config. This file maintains the configuration values for a handful of object types. Here is an abbreviated look at the one set up by origin.

apiVersion: v1
clusters:
- cluster:
    api-version: v1
    certificate-authority-data: LS0...LQo=
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    namespace: default
    user: system:admin/munchlax:8443
  name: default/munchlax:8443/system:admin
- context:
    cluster: munchlax:8443
    namespace: kube-system
    user: system:admin/munchlax:8443
  name: kube-system/munchlax:8443/system:admin
current-context: kube-system/munchlax:8443/system:admin
kind: Config
preferences: {}
users:
- name: system:admin/munchlax:8443
  user:
    client-certificate-data: LS0...tLS0K
    client-key-data: LS0...LS0tCg==

Note that I have elided the very long cryptographic entries for certificate-authority-data, client-certificate-data, and client-key-data.

First up is an array of clusters.  The minimal configuration for each provides a server, which is the remote URL to use, some certificate authority data, and a name used to refer to this cluster elsewhere in the file.

At the bottom of the file, we see a chunk of data for user identification.  Again, the user has a local name, with the rest of the identifying information hidden away inside the client certificate.

These two entities are pulled together in a context entry. In addition, a context entry has a namespace field. Again, we have an array, with each entry containing a name field. The name of the context object is used in the current-context field, and this is where kubectl starts its own configuration.  Here is an object diagram.

The next time I run kubectl, it will read this file.

  1. Based on the value of CurrentContext, it will see it should use the kube-system/munchlax:8443/system:admin context.
  2. From that context, it will see it should use
    1. the system:admin/munchlax:8443 user,
    2. the kube-system namespace, and
    3. the URL https://munchlax:8443 from the munchlax:8443 server.
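The lookup above can be sketched in a few lines of Python, operating on an already-parsed config. Plain dicts stand in for the loaded YAML; this is an illustration of the resolution logic, not kubectl's actual code.

```python
# Sketch of how kubectl resolves current-context into concrete settings.
def resolve(config):
    """Return (server, namespace, user) for the current context."""
    ctx_name = config["current-context"]
    # Find the named context, then the cluster it points at.
    ctx = next(c["context"] for c in config["contexts"] if c["name"] == ctx_name)
    cluster = next(c["cluster"] for c in config["clusters"]
                   if c["name"] == ctx["cluster"])
    return cluster["server"], ctx.get("namespace", "default"), ctx["user"]

config = {
    "current-context": "kube-system/munchlax:8443/system:admin",
    "clusters": [{"name": "munchlax:8443",
                  "cluster": {"server": "https://munchlax:8443"}}],
    "contexts": [{"name": "kube-system/munchlax:8443/system:admin",
                  "context": {"cluster": "munchlax:8443",
                              "namespace": "kube-system",
                              "user": "system:admin/munchlax:8443"}}],
}

print(resolve(config))
# → ('https://munchlax:8443', 'kube-system', 'system:admin/munchlax:8443')
```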

Below is a similar file from the kubevirt set up, found on my machine at the path ~/go/src/kubevirt.io/kubevirt/cluster/vagrant/.kubeconfig

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0...LS0tLQo=
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0...LS0tLQo=
    client-key-data: LS0...LS0tCg==

Again, I’ve elided the long cryptographic data.  This file is organized the same way as the default one.  KubeVirt uses it via a shell script that resolves to the following command line:

${KUBEVIRT_PATH}cluster/vagrant/.kubectl --kubeconfig=${KUBEVIRT_PATH}cluster/vagrant/.kubeconfig "$@"

which overrides the default configuration location.  What if I don’t want to use the shell script?  I’ve manually merged the two files into a single ~/.kube/config.  The resulting one has two users,

  • system:admin/munchlax:8443
  • kubernetes-admin

two clusters,

  • munchlax:8443
  • kubernetes

and three contexts.

  • default/munchlax:8443/system:admin
  • kube-system/munchlax:8443/system:admin
  • kubernetes-admin@kubernetes

With current-context: kubernetes-admin@kubernetes:

$ kubectl get pods
haproxy-686891680-k4fxp 1/1 Running 0 15h
iscsi-demo-target-tgtd-2918391489-4wxv0 1/1 Running 0 15h
kubevirt-cockpit-demo-1842943600-3fcf9 1/1 Running 0 15h
libvirt-199kq 2/2 Running 0 15h
libvirt-zj6vw 2/2 Running 0 15h
spice-proxy-2868258710-l85g2 1/1 Running 0 15h
virt-api-3813486938-zpd8f 1/1 Running 0 15h
virt-controller-1975339297-2z6lc 1/1 Running 0 15h
virt-handler-2s2kh 1/1 Running 0 15h
virt-handler-9vvk1 1/1 Running 0 15h
virt-manifest-322477288-g46l9 2/2 Running 0 15h

but with current-context: kube-system/munchlax:8443/system:admin

$ kubectl get pods
tiller-deploy-3580499742-03pbx 1/1 Running 2 8d
youthful-wolverine-testme-4205106390-82gwk 0/1 CrashLoopBackOff 30 2h

There is support in the kubectl executable for configuration:

[ayoung@ayoung541 helm-charts]$ kubectl config get-contexts
 kubernetes-admin@kubernetes kubernetes kubernetes-admin 
 default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
* kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system
[ayoung@ayoung541 helm-charts]$ kubectl config current-context kubernetes-admin@kubernetes
[ayoung@ayoung541 helm-charts]$ kubectl config get-contexts
 default/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 default
* kube-system/munchlax:8443/system:admin munchlax:8443 system:admin/munchlax:8443 kube-system
 kubernetes-admin@kubernetes kubernetes kubernetes-admin

The openshift login command can add additional configuration information.

$ oc login
Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Login successful.

You have one project on this server: "default"

Using project "default".

This added the following information to my .kube/config

under contexts:

- context:
 cluster: munchlax:8443
 namespace: default
 user: ayoung/munchlax:8443
 name: default/munchlax:8443/ayoung

under users:

- name: ayoung/munchlax:8443
 token: 24i...o8_8

This time I elided the token.

It seems that it would be pretty easy to write a tool for merging two configuration files.  The caveats I can see include:

  • don’t duplicate entries
  • ensure that two entries with the same name but different values trigger an error
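A minimal sketch of that merge logic, honouring both caveats: it operates on already-parsed configs (dicts as a YAML loader would produce them), skips exact duplicates, and raises on same-name entries with different values. File I/O and the choice of current-context are left out.

```python
# Sketch of a kubeconfig merge tool; input configs are parsed-YAML dicts.
def merge_kubeconfigs(a, b):
    """Merge two parsed kubeconfigs; duplicates are skipped,
    same-name entries with different values raise an error."""
    merged = {"apiVersion": "v1", "kind": "Config", "preferences": {},
              "current-context": a.get("current-context")}
    for section in ("clusters", "contexts", "users"):
        entries = {e["name"]: e for e in a.get(section, [])}
        for e in b.get(section, []):
            if e["name"] in entries and entries[e["name"]] != e:
                raise ValueError("conflicting %s entry: %s" % (section, e["name"]))
            entries[e["name"]] = e  # new entry, or an exact duplicate
        merged[section] = list(entries.values())
    return merged

merged = merge_kubeconfigs(
    {"current-context": "a", "clusters": [{"name": "c1", "cluster": {}}]},
    {"clusters": [{"name": "c2", "cluster": {}}]},
)
print([c["name"] for c in merged["clusters"]])  # → ['c1', 'c2']
```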

Canaries in a coal mine (apropos nothing)

Posted by Stephen Smoogen on May 24, 2017 10:29 PM

[This post is brought to you by Matthew Inman. Reading http://theoatmeal.com/comics/believe made me realize I don't listen enough and Verisatium's https://www.youtube.com/watch?v=UBVV8pch1dM made me realize why thinking is hard. I am writing this to remind myself when I forget and jump on some phrase.]

Various generations ago, part of my family was coal miners and some of their lore was still passed down many many years later. One of those was about the proverbial canary. A lot of people like to think that they are being a canary when they bring up a problem that they believe will cause great harm.. singing louder because they have run out of air.

That isn't what a canary does. The birds in the mines go silent when the air runs out. They may have died or are on the verge of being dead. They got quieter and quieter and what the miners listened for was the lack of noise from birds versus more noise. Of course it is very very hard to hear the birds in the first place in a mine because they aren't quiet places. There is hammering, and shoveling and footsteps echoing down long tubes.. so you might think.. bring more birds.. that just added more distractions and miners would get into fights because the damn birds never shut up. So the birds were few and far between and people would have to check up on the birds every now and then to see if they were still kicking. Safer mines would have some old fellow stay near the bird and if it died/passed out they would begin ringing a bell which could be heard down the hole.

So if analogies were 1:1, the time to worry is not when people are complaining a lot on a mailing list about some change. In fact if everyone complains, then you could interpret that you have too many birds and not enough miners so go ahead. The time to worry would be when things have changed but no one complains. Then you probably really need to look at getting out of the mine (or most likely you will find it is too late).

However analogies are rarely 1:1 or even 1:20. People are not birds, and you should pay attention to when changes cause a lot of consternation. Listen to why the change is causing problems or pain. Take some time to process it, and see what can be done to either alter the change or find a way for the person who is in pain to get out of pain.

Best password management tool.

Posted by mythcat on May 24, 2017 06:40 PM
This suite of tools comes with many free features and one good premium option.
Password Tote provides secure password management through software and services on multiple platforms, and works very well with software downloads for Windows, Mac OS X, Safari, Chrome, Firefox, iOS (iPhone, iPod Touch, iPad), and Android.
You can download it from the downloads page.

Features outline (Free and Premium):

  • Website Access
  • Browser Extensions
  • Desktop Software
  • Mobile Software
  • Password Sharing
  • YubiKey Support

Price: Free, or $2.99 a month for Premium (or 2 years at a 16% savings).

Free: allows you to use the website version completely free, and gives you access to fill your passwords from the browser extensions. It does not provide access to the desktop software or mobile phone software.

Premium: gives you access to your passwords from all versions of Password Tote, including the desktop software and mobile phone versions.

Synchronization between the browser extensions and the utilities is fast and does not confuse the user in navigation. Importing a dedicated CSV file with dozens of passwords is fast.
A very good feature is the compromise solution for custom imports with a generic CSV file: the utility generates this file and you can fill it with the necessary login data for your web sites.
The other CSV import options did not work for me; I guess the problem is incompatibility with the files exported by other dedicated software.
I used it with a YubiKey and it worked very well. It's the only utility that allowed me to connect with a YubiKey; the other utilities demand a premium version.

How to enable YubiKeys with Password Tote:
  • First log in to your Password Tote account. 
  • Click Account, then Manage YubiKeys. You will arrive at the YubiKey Management page. 
  • Click Add YubiKey to register your YubiKey with your Password Tote account. 
  • Fill in the required details. If successful, your YubiKey will be displayed in the list as shown in the screen shot below.

Formatting a new extFAT USB on Fedora

Posted by Julita Inca Chiroque on May 24, 2017 06:31 PM

I have a new 64GB USB drive, and it did not show up at first:

Thanks to this video, I typed fdisk -l and was then able to see 58.2 GB:

I then tried to install the exfat package with dnf -y install fuse-exfat, but failed.

What I did after many failures was to set up the partition using the GUI:

Then you can see the new format as Ext4:

It is OK to have a little free space with no partition assigned. Now it is time to write to the USB drive:

Now we can see the USB device in the list of devices 😀
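For reference, the same result is also reachable from the command line; a sketch assuming the drive shows up as /dev/sdb (the device name, and the RPM Fusion packaging of the exFAT tools, are assumptions):

```shell
# exFAT tools are not in the default Fedora repos (patent concerns),
# so this assumes an RPM Fusion-style third-party repository is enabled
sudo dnf install fuse-exfat exfat-utils

# /dev/sdb1 is a placeholder -- check fdisk -l first!
sudo mkfs.exfat /dev/sdb1
sudo mount /dev/sdb1 /mnt
```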


Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: device, extfat, extfat mount, fedora, format, GNOME, Julita Inca, Julita Inca Chiroque, mnt, mount, USB, USB flash drive

Getting started with helm on OpenShift

Posted by Adam Young on May 24, 2017 05:20 PM

After attending in on a helm based lab at the OpenStack summit, I decided I wanted to try it out for myself on my OpenShift cluster.

Since helm is not yet part of Fedora, I used the upstream binary distribution. Inside the tarball was, among other things, a standalone binary named helm, which I moved to ~/bin (which is in my path). Once I had that in place:

$ helm init
Creating /home/ayoung/.helm 
Creating /home/ayoung/.helm/repository 
Creating /home/ayoung/.helm/repository/cache 
Creating /home/ayoung/.helm/repository/local 
Creating /home/ayoung/.helm/plugins 
Creating /home/ayoung/.helm/starters 
Creating /home/ayoung/.helm/repository/repositories.yaml 
$HELM_HOME has been configured at /home/ayoung/.helm.

Tiller (the helm server side component) has been installed into your Kubernetes Cluster.
Happy Helming!

Checking on that Tiller install:

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                             READY     STATUS    RESTARTS   AGE
default       docker-registry-2-z91cq          1/1       Running   0          23h
default       registry-console-1-g4qml         1/1       Running   0          1d
default       router-5-4w3zt                   1/1       Running   0          23h
kube-system   tiller-deploy-3210876050-8gx0w   1/1       Running   0          1m

But trying a helm command line operation fails.

$ helm list
Error: User "system:serviceaccount:kube-system:default" cannot list configmaps in project "kube-system"

This looks like an RBAC issue. I want to assign the role ‘admin’ to the user “system:serviceaccount:kube-system:tiller” on the project “kube-system”

$ oc project kube-system
Now using project "kube-system" on server "https://munchlax:8443".
[ansible@munchlax ~]$ oadm policy add-role-to-user admin system:serviceaccount:kube-system:tiller
role "admin" added: "system:serviceaccount:kube-system:tiller"
[ansible@munchlax ~]$ ./helm list
[ansible@munchlax ~]$

Now I can follow the steps outlined in the getting started guide:

[ansible@munchlax ~]$ ./helm create mychart
Creating mychart
[ansible@munchlax ~]$ rm -rf mychart/templates/
deployment.yaml  _helpers.tpl     ingress.yaml     NOTES.txt        service.yaml     
[ansible@munchlax ~]$ rm -rf mychart/templates/*.*
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ 
[ansible@munchlax ~]$ vi mychart/templates/configmap.yaml
[ansible@munchlax ~]$ ./helm install ./mychart
NAME:   esteemed-pike
LAST DEPLOYED: Wed May 24 11:46:52 2017
NAMESPACE: kube-system

==> v1/ConfigMap
NAME               DATA  AGE
mychart-configmap  1     0s
[ansible@munchlax ~]$ ./helm get manifest esteemed-pike

# Source: mychart/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: mychart-configmap
data:
  myvalue: "Hello World"
[ansible@munchlax ~]$ ./helm delete esteemed-pike
release "esteemed-pike" deleted

Exploring OpenShift RBAC

Posted by Adam Young on May 24, 2017 03:27 PM

OK, since I did it wrong last time, I’m going to try creating a user in OpenShift, and grant that user permissions to do various things.

I’m going to start by removing the ~/.kube directory on my laptop and perform operations via SSH on the master node.  From my last session I can see I still have:

$ oc get users
ayoung cca08f74-3a53-11e7-9754-1c666d8b0614 allow_all:ayoung
$ oc get identities
allow_all:ayoung allow_all ayoung ayoung cca08f74-3a53-11e7-9754-1c666d8b0614

What openshift calls projects (perhaps taking the lead from Keystone?) Kubernetes calls namespaces:

$ oc get projects
default Active
kube-system Active
logging Active
management-infra Active
openshift Active
openshift-infra Active
[ansible@munchlax ~]$ kubectl get namespaces
default Active 18d
kube-system Active 18d
logging Active 7d
management-infra Active 10d
openshift Active 18d
openshift-infra Active 18d

According to the documentation here, I should be able to log in from my laptop, and all of the configuration files just get magically set up.  Let's see what happens:

$ oc login
Server [https://localhost:8443]: https://munchlax:8443 
The server uses a certificate signed by an unknown authority.
You can bypass the certificate check, but any data you send to the server could be intercepted by others.
Use insecure connections? (y/n): y

Authentication required for https://munchlax:8443 (openshift)
Username: ayoung
Login successful.

You don't have any projects. You can try to create a new project, by running

oc new-project <projectname>

Welcome! See 'oc help' to get started.

Just to make sure I sent something, I typed in the password “test” but it could have been anything.  The config file now has this:

$ cat ~/.kube
.kube/ .kube.bak/ 
[ayoung@ayoung541 ~]$ cat ~/.kube/config 
apiVersion: v1
clusters:
- cluster:
    insecure-skip-tls-verify: true
    server: https://munchlax:8443
  name: munchlax:8443
contexts:
- context:
    cluster: munchlax:8443
    user: ayoung/munchlax:8443
  name: /munchlax:8443/ayoung
current-context: /munchlax:8443/ayoung
kind: Config
preferences: {}
users:
- name: ayoung/munchlax:8443
  user:
    token: 4X2UAMEvy43sGgUXRAp5uU8KMyLyKiHupZg7IUp-M3Q

I’m going to resist the urge to look too closely into that token thing.
I’m going to work under the assumption that a user can be granted roles in several namespaces. Let's see:

 $ oc get namespaces
 Error from server (Forbidden): User "ayoung" cannot list all namespaces in the cluster

Not a surprise.  But the question I have now is “which namespace am I working with?”  Let me see if I can figure it out.

$ oc get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

and via kubectl

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

What role do I need to be able to get pods?  Let's start by looking at the head node again:

[ansible@munchlax ~]$ oc get ClusterRoles | wc -l
[ansible@munchlax ~]$ oc get Roles | wc -l
No resources found.

This seems a bit strange. ClusterRoles are not limited to a namespace, whereas Roles are. Why am I not seeing any roles defined?

Let's start with figuring out who can list pods:

oadm policy who-can GET pods
Namespace: default
Verb:      GET
Resource:  pods

Users:  system:admin

Groups: system:cluster-admins

And why is this? What roles are permitted to list pods?

$ oc get rolebindings
NAME                   ROLE                    USERS     GROUPS                           SERVICE ACCOUNTS     SUBJECTS
system:deployer        /system:deployer                                                   deployer, deployer   
system:image-builder   /system:image-builder                                              builder, builder     
system:image-puller    /system:image-puller              system:serviceaccounts:default                        

I don’t see anything that explains why admin would be able to list pods there. And the list is a bit thin.

Another page advises I try the command

oc describe  clusterPolicy

But the output of that is voluminous. With a little trial and error, I discover I can do the same thing using the kubectl command, and get the output in JSON, to let me inspect. Here is a fragment of the output.

         "roles": [
                    "name": "admin",
                    "role": {
                        "metadata": {
                            "creationTimestamp": "2017-05-05T02:24:17Z",
                            "name": "admin",
                            "resourceVersion": "24",
                            "uid": "f063233e-3139-11e7-8169-1c666d8b0614"
                        "rules": [
                                "apiGroups": [
                                "attributeRestrictions": null,
                                "resources": [
                                "verbs": [

There are many more rules, but this one shows what I want: there is a policy role named “admin” that has a rule that provides access to the pods via the list verbs, among others.
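For comparison, a namespaced Role granting just that access could look like the hypothetical manifest below. The name and namespace are made up for illustration, and the apiVersion shown matches the OpenShift v1 policy objects of this era; against upstream Kubernetes RBAC it would instead come from the rbac.authorization.k8s.io group.

```yaml
# Hypothetical Role allowing read-only access to pods in "default"
apiVersion: v1
kind: Role
metadata:
  name: pod-reader
  namespace: default
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
```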

Let's see if I can make my ayoung account into a cluster-reader by adding the role to the user directly.

On the master

$ oadm policy add-role-to-user cluster-reader ayoung
role "cluster-reader" added: "ayoung"

On my laptop

$ kubectl get pods
NAME                       READY     STATUS    RESTARTS   AGE
docker-registry-2-z91cq    1/1       Running   3          8d
registry-console-1-g4qml   1/1       Running   3          8d
router-5-4w3zt             1/1       Running   3          8d

Back on master, we see that:

$  oadm policy who-can list pods
Namespace: default
Verb:      list
Resource:  pods

Users:  ayoung

Groups: system:cluster-admins

And now to remove the role:
On the master

$ oadm policy remove-role-from-user cluster-reader ayoung
role "cluster-reader" removed: "ayoung"

On my laptop

$ kubectl get pods
Error from server (Forbidden): User "ayoung" cannot list pods in project "default"

Modularity update – sprint 30

Posted by Adam Samalik on May 24, 2017 02:07 PM

The Fedora Modularity team already publishes sprint reports on the Modularity YouTube channel every two weeks. But this format might not always be suitable – for example, when watching on a phone with limited data. So I would like to start writing short reports about Modularity every two weeks, so people have more choice in how to get updated.

What we did

  • We have the final list of modules we are shipping in F26 Boltron. The list shows Python 2 and Python 3 as not included, which is not entirely true. Even though we won’t be shipping them as separate modules due to various packaging reasons, they will be included in Boltron as part of the Base Runtime and shared-userspace.
  • One of them is shared-userspace, which is a huge module that contains common runtime and build dependencies with proven ABI stability over time. Lesson learned: building huge modules is hard. We might want to create smaller ones and join them together as a module stack.
  • To demonstrate multiple streams we will include NodeJS 6 as part of Boltron, and NodeJS 8 in Copr – built by its maintainer.
  • The DNF team has implemented a fully functional DNF version that supports modules.
  • We have changed the way we do demos on YouTube. Instead of posting a demo every two weeks of work per person, we will do a sprint review + user-focused demos as we go. I will also do my best with writing these posts. :-)

What’s next?


  • clean up and make sure they deliver the use cases we promised
  • the same for containers if time allows

Documentation and community:

  • issue tracker for each module
  • revisiting documentation
  • revisiting how-to guides


  • we would love to make a demo based on a working compose (if we get a qcow)

Also, I’m going to Zagreb to give a Modularity talk at DORS/CLUC next week. Come if you’re near by! 😉

We added Fedora 26 to retrace.fedoraproject.org

Posted by ABRT team on May 24, 2017 01:00 PM

We’ve recently added Fedora 26 to retrace.fedoraproject.org. We were unable to do that at branching time because of insufficient disk space. This has now been resolved, and you can report your crashes on all current Fedoras.

Next time, we will be able to add the new Fedora version during branching.

Those who helped turning the Higgs boson from theory to reality

Posted by Peter Czanik on May 24, 2017 08:30 AM

One of the most important discoveries of this decade was the Higgs boson. But researchers at high energy physics and nuclear physics laboratories and institutes would have been unable to find the Higgs boson without the IT staff maintaining the computer infrastructure that collects and analyzes the massive amount of data generated during their experiments. HEPiX is a community which brings these IT people together twice a year from around the world. This spring their event was hosted by the Wigner Research Centre for Physics in Budapest, which also plays a central role in CERN's IT infrastructure.

I was invited to HEPiX by Fabien Wernli, who works at CCIN2P3 in France, monitoring thousands of computers using syslog-ng. The syslog-ng application is developed here in Budapest, the city of the spring HEPiX workshop. Having left the academic world behind over a decade ago, I really enjoyed talking and listening to IT professionals working at academic institutions.


The CERN IT infrastructure

While not all HEPiX members work on data originating from CERN and the Large Hadron Collider (LHC), the heart of HEPiX seems to be CERN and the software tools used or developed there. Sites working on CERN data are organized into a tiered structure. All data from experiments are collected, stored and processed at CERN as the Tier-0 site. Different parts of data are forwarded to Tier-1 data centers, where they are processed further. And just like parts of a pyramid, Tier-2 and Tier-3 sites download data from here and do the actual analysis of data.

As I mentioned, the Wigner Research Centre for Physics in Budapest now plays a special role in the life of CERN: since 2012 the Wigner Data Center has hosted an extension of CERN's Tier-0 data center. This is possible due to advances in networking: CERN and the Wigner DC are connected by three independent 100Gbit lines. In other words: this network can forward the content of almost ten DVDs a second.


The conference

Maintaining this infrastructure requires an enormous amount of resources and work. It needs to be available around the clock, and to be fast and efficient, while changing only gradually. Talks at the conference covered how these often contradictory requirements can be met.

The opening day of the HEPiX spring workshop focused on site reports describing new hardware and services as well as some of the research at the sites since the last meeting. The rest of the week covered topics related to large scale computing: storage, networking, virtualization. My favorite topics at the conference were security and basic IT services, as these were related to my field of interest: logging.

Logging came up in a number of talks. There were many Elasticsearch instances around at CERN and elsewhere. At CERN, these were consolidated recently under central management, and we learned how many of the problems were resolved by introducing access control and regular maintenance. We also received a quick introduction to how collaboration between sites and infrastructures on security works, via a Security Operations Center. Last but not least, I gave an introductory talk about syslog-ng, and Fabien Wernli presented how they use syslog-ng to monitor tens of thousands of machines at CCIN2P3, a Tier-1 site in France. During the conference I had a chance to talk to him as well.


Fabien Wernli and syslog-ng

We learned at HEPiX that CCIN2P3 provides important services to CERN as a Tier-1 site. What else is it working on?

We are a computing facility inside the IN2P3. The IN2P3 is one of the institutes of the French National Center for Scientific Research (Centre national de la recherche scientifique, CNRS). It groups all the scientists and staff who work on nuclear physics and particle physics. Our facility provides computing resources for all these labs. We work with a lot of different scientists, so we need computing power, storage and network. Over 85% of our resources are used by the LHC, because its experiments are so huge that they need a lot of data processing power. There are many smaller experiments as well. One which is currently growing, and will generate a lot of data, is the Large Synoptic Survey Telescope (LSST). It will take a picture of the whole sky every night, generating 150 TB of data each time. That is not as much as the LHC, but quite a lot. Our facility will be one of the main tiers for this experiment – like for the LHC.


I see, you have a PhD in Astrophysics. Why did you become a Linux administrator?

When you do a PhD you do not become an expert in anything other than learning how to learn things. Astrophysics is something I was interested in for a long time, and the other thing I was interested in is computing. I have been a computer freak since I was a kid, and this path was more promising for a career. It was also easier to find a job without having to travel the whole planet all the time. When you have a family, you want to stay somewhere. I love computing and it was a good opportunity. When I worked at the observatory in Lyon, where I did my PhD, I also did a lot of Linux administration. There were only one or two people there doing Linux administration, and they did not administer the desktops. We were on our own, so I improved my Linux skills a lot.


And with this new LSST research you can be back at least partially to astrophysics.

That is the good thing about IN2P3 and CCIN2P3: we do our job for science, not to make money or any financial profit. I prefer that to industry, where you ultimately have to make money.


What are you doing at CCIN2P3?

My main function is system administration. Together with my colleagues we are ten admins, and my specialty is monitoring. All things monitoring: metrics, logs, analysis or anything related.


How did you first meet with syslog-ng? Why did you decide to use it?

When I arrived at CCIN2P3 there was already a central syslog server, and it was syslog-ng. A very old version, I think 2 or something. When I had to architect a new system to replace it, I looked around and syslog-ng looked the most promising, mainly due to three facts. The first one was the documentation, which was great compared to competitors. It was in depth and versioned: I could look up documentation even for an old version. And the configuration examples you copy and paste actually worked. The second is that it is portable. At that time we had Solaris, AIX and Linux, and it would compile or was available as a package almost everywhere. And the community was the third reason I chose it. The community is very friendly. There were people at that time on IRC, and the mailing list is helpful, a very good resource as well.
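For readers who have not used syslog-ng, a minimal central-server configuration of the kind described in the interview might look like this (a sketch; the port and file path are illustrative, not CCIN2P3's actual setup):

```
source s_net {
    # receive RFC 5424 syslog messages from remote hosts over TCP
    syslog(transport("tcp") port(6514));
};
destination d_file {
    # one log file per sending host
    file("/var/log/remote/${HOST}.log");
};
log { source(s_net); destination(d_file); };
```

The per-host file destination is one of the simplest ways to keep logs from tens of thousands of machines separable on a central server.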


You have made many contributions to syslog-ng. Which are you most proud of?

Maybe I have made many, but those are small ones. The one I am probably most proud of is the last one, the HTTPS destination to Elasticsearch. And maybe the many issues I opened. And I am even more proud that the issues I opened are actually addressed. So my convincing power seems to be OK 🙂


The post Those who helped turning the Higgs boson from theory to reality appeared first on Balabit Blog.

Rootconf/Devconf 2017

Posted by Ratnadeep Debnath on May 24, 2017 06:45 AM

This year's Rootconf was special, as it also hosted Devconf for the first time in India. The conference took place at the MLR Convention Centre, JP Nagar, Bangalore on 11-12 May, 2017. The event had 2 parallel tracks running, one for Rootconf and the other for Devconf. Rootconf is a place like other Hasgeek events where you get to see friends and make new ones, learn about what they are up to and share your own stories.

There was a great line up of talks and workshops in this year's Rootconf/Devconf. Some of the talks that I found interesting were:

  • State of the open source monitoring landscape by Bernd Erk from Icinga
  • Deployment strategies with Kubernetes by Aditya Patawari
  • Pooja Shah speaking on their bot at Moengage to automate their CI/CD workflow
  • Running production APIs on spot instances by S Aruna
  • FreeBSD is not a Linux distribution by Philip Paeps
  • Automate your devops life with Openshift Pipelines by Vaclav Pavlin
  • Fabric8: an end-to-end development platform by Baiju
  • Making Kubernetes simple for developers using Kompose by Suraj Deshmukh
  • Workshop on Ansible by Praveen Kumar and Shubham Minglani
  • Deep dive into SELinux by Rejy M Cyriac

As one of the contributors to the CentOS Community Container Pipeline, I gave a talk about how the pipeline lets you build, test, and deliver the latest and safest container images, effortlessly. You can find the slides/demo for the talk here. The talk was well received, and people were interested in our project and wanted to use it. A huge shout out to the container pipeline team for making this project happen. Below are some of the questions asked about the pipeline, along with their answers:

  • Can I use container pipeline to deploy my applications to production?

    The answer is that it depends on your use case. Nevertheless, you can use the images, e.g., redis, postgresql, mariadb, etc from container-pipeline, from registry.centos.org, and deploy it in production. If your application is Open Source, you can also build container image for your application on the pipeline and consume the image in production. However, you should be ready to expect some delay for your project’s new container image to be delivered, as the container pipeline is also used by other projects. If you want your containerized application to be deployed to production ASAP, you might consider setting up the container pipeline on premises, or use something like Openshift pipelines

  • How can I deploy container pipeline on premise?

    We deploy container pipeline in production using Ansible, and you can do that as well. To start, you can look into the provisions/ directory of our repository https://github.com/centos/container-pipeline-service

  • Can we use the scanners outside the container pipeline, or integrate them into other workflows?

    Yes. You can pull the scanners from registry.centos.org and call them from any workflow to do the scanning piece.

  • What if the updated versions of rpms break my container image?

    In the current scenario we rebuild the images if there is any change in a dependency image or an RPM update. But in the future there will be an option to disable automatic image rebuilds on updates. However, we'll notify the image maintainer about such updates, so that the maintainer can decide whether to rebuild the image or not.

  • Can we put images with a non-CentOS base image in the pipeline?

    For now, you can, but we do not encourage it, as you will be missing out on many of the valuable features of the container pipeline, e.g., package verify scanners, automatic image rebuilds on package updates, etc.

I also had a conversation with the DigitalOcean folks, where we discussed doing a blog post about the CentOS container pipeline on their blog. We also had Zeeshan and Bamacharan from our team answering queries about the pipeline at the Red Hat booth at Rootconf.

To sum up, it was a great conference, especially in terms of showcasing many of our projects from Red Hat: Fabric8, Openshift Pipelines, CentOS Container Pipeline, etc., and getting feedback from the community. We'll continue to reach out to the community and get them involved, so that we can develop great solutions for them.

Fedora Join meeting - 22 May 2017 - Summary and updates

Posted by Ankur Sinha "FranciscoD" on May 23, 2017 05:41 PM

We logged on to #fedora-meeting-3 for another constructive Fedora Join SIG meeting yesterday. There's quite a bit of work to be done, and quite a few ideas. These include classroom sessions, mentoring, and so on. The common theme here is to enable new contributors to pick up the required technical skills quicker, and in the process, integrate with the community faster too.

On this week's agenda were the items below. (New to IRC? Here's a wiki page that explains how one can use IRC.)

An update on the resurrection of the IRC classroom programme

While work goes on to set up a brand new classroom programme, which we refer to as v2, we decided we could get the ball rolling with the classic IRC programme that was active a year or two ago. The advantage here is that all the infrastructure is already in place - just the one IRC channel - and since many IRC classroom sessions have happened in the past already, this is a time-tested system. All it needs is instructors, students, and a few community members to help with the admin bits.

Various community members have already volunteered to instruct sessions, so we already have a timeline set up. We intend to begin a few weeks after the Fedora 26 release, so that the community isn't distracted from the release, and the classroom can ride on the release-related marketing instead. The classes we have set up are:

  • FOSS 101
  • Fedora Magazine 101
  • Command line 101
  • VIM 101
  • Emacs 101
  • Fedora QA 101
  • Git 101
  • Fedora packaging 101

You'll notice we've gone from individual tools to tasks that require one or more of these. I've omitted the dates here because they are yet to be decided. There'll be a class a week, and this is planned to start in the week of 24th July (for the moment).

We're looking for more sessions, instructors, and helpers

The hard bit here isn't restarting the programme, it is maintaining it. So, we need more sessions, more instructors from the community, and as numbers increase, more volunteers to help with related tasks.

  • Have an idea? Get in touch!
  • Want to teach? Get in touch!
  • Have a friend that wants to teach? Get in touch!
  • Have some time to write related posts for the Fedora Magazine? Get in touch!
  • Have some time to write related posts for the Community Blog? Get in touch!
  • Have some time to help co-ordinate sessions? Get in touch!

You can either ping us on #fedora-classroom/#fedora-join on the IRC, or you can drop an e-mail on the Fedora classroom mailing list.

Note that while we have the IRC set up, you can use another platform too. For instance, if you have access to BlueJeans (a video conferencing platform), you are more than welcome to use it to teach a session.

I'm actively looking for more instructors, so keep an eye out for a ping ;)

Reviewing video platforms for Fedora classroom v2

The largest chunk of work for the v2 initiative is finding suitable software. The primary requirement here is a good video platform. We've had a few suggestions already, so we thought we could review them to see what they can do.

There are certain requirements that we've listed for now:

  • How many people can a video conference hold?
  • What other features does it have? Screen sharing, for example?
  • Is it a free service or a paid one? (We'd prefer something free of cost)
  • Is it FOSS or not? (We'd prefer FOSS)
  • What is the required setup? Can one deploy a server and how? (For instance, on Fedora Infrastructure?)
  • How do users connect/log in? (OpenID would be great, since FAS OpenID could be used)
  • Can the sessions be recorded?
  • How will participants interact amongst themselves and the instructor?
  • Is there an admin mode?
  • Can it set up or allow meeting alerts, like an RSS feed or similar?

Each of us will use the respective platform and write up a blog post that will turn up on the planet.

That was it, pretty much. Come say "hi!" in #fedora-join or the mailing list!

Fedora was at PyCon SK 2017

Posted by Miro Hrončok on May 23, 2017 05:11 PM

On the second weekend of March 2017, Fedora had a booth at PyCon SK, a community-organized conference for the Python programming language held in Bratislava, Slovakia. The event happened for the second time this year, and it happened with Fedora again.

PyCon SK 2017 lasted 3 days. On the first day most of the talks were in Slovak (or Czech), and Michal Cyprian presented the problems that may arise when users use sudo pip, and how we want to solve those problems in Fedora by making sudo pip safe again. During the lightning talks section, I presented Elsa, a tool that helps to create static web pages using Flask. Elsa powers the Fedora Loves Python website.

Michal Cyprian presenting. Photo by Ondrej Dráb, CC BY-SA

The next day was mostly in English. Other Fedora contributors, Jona Azizaj and Petr Viktorin, gave talks as well. Jona presented about building Python communities and empowering women. Petr’s talk was about the balance of Python (constraints and conventions versus the freedom to do whatever you want) and its impact on the language and the community. Petr also metacoached the Django Girls workshop on Sunday.

But Fedora’s presence was not just through people. Fedora had a booth filled with swag. We gave out all our remaining Fedora Loves Python stickers, plenty of Fedora 25 DVDs, pins, stickers, pens, buttons… We had a couple of Proud Fedora User t-shirts available, and since plenty of Fedora users asked for them, we decided to come up with a quiz about Fedora and a raffle to decide who gets them.

Fedora Swag

Fedora booth at PyCon SK 2017. Photo by Ondrej Dráb, CC BY-SA

A lot of the visitors were already familiar with Fedora or were even Fedora users this year, which was quite different in comparison with the previous year, when a lot of people were actually asking what Fedora is. <joke>Maybe because we already explained it a year ago, now every visitor already uses Fedora?</joke>

See you next year Bratislava!

Featured Image Photo by Ondrej Dráb, CC BY-SA

The post Fedora was at PyCon SK 2017 appeared first on Fedora Community Blog.

The tool Noodl for design and web development.

Posted by mythcat on May 23, 2017 12:21 PM
This tool will help you understand something about data structuring, node building, web development and design.
This application comes with interactive lessons and documentation.
Note: I tested some of the lessons and they are not very easy. Some links between the nodes do not appear with all their labels unless they are made in reverse; in that case, on the work surface the links are no longer one-way (with the arrow) but only point-to-point between the nodes.
It can be downloaded here for the following operating systems:
  • Version 1.2.3 (MacOS)
  • Version 1.2.3 (Win x64 Installer)
  • Version 1.2.3 (Linux x86 64)
Let's see the default interface of the Noodl application.

Participate in the test day dedicated to internationalization

Posted by Charles-Antoine Couret on May 23, 2017 07:03 AM

Today, Tuesday 23 May, is a day dedicated to a specific set of tests: the internationalization of Fedora. During the development cycle, the quality assurance team dedicates a few days to certain components or new features in order to uncover as many problems on the subject as possible.

It also provides a list of specific tests to run. You just have to follow them, compare your result with the expected result, and report it.

What does this test day consist of?

As with every version of Fedora, updating its tools often brings new strings to translate and new tools related to language support (particularly for Asian languages).

To encourage the use of Fedora in every country of the world, it is best to make sure that everything related to Fedora's internationalization is tested and works. Especially since part of it must already work on the installation live CD (that is, without updates).

Today's tests cover:

  • The correct operation of ibus for handling keyboard input;
  • Font customization;
  • The automatic installation of language packs for installed software, according to the system language;
  • Working default translations of applications;
  • The fontconfig cache, which has moved to a new directory (a Fedora 26 change);
  • Testing libpinyin 2.0 for fast input of Pinyin Chinese (a Fedora 26 change).

Of course, given these criteria, unless you know a Chinese language, not all the tests can be carried out. But as French speakers, many of these issues concern us, and reporting problems is important. Indeed, no other language community will identify the integration problems of the French language for us.

How to participate?

You can go to the test day page to list the available tests and report your results. The wiki page summarizes how the day is organized.

If you need help while running the tests, feel free to drop by IRC for a hand on the #fedora-test-days and #fedora-fr channels (in English and French respectively) on the Freenode server.

If you find a bug, it must be reported on Bugzilla. If you do not know how, feel free to consult the corresponding documentation.

Also, while a specific day is dedicated to these tests, it is perfectly possible to run them a few days later! The results will remain broadly relevant.

Improved high DPI display support in the pipeline

Posted by Fedora Magazine on May 23, 2017 05:27 AM

Support for high DPI monitors has been included in Fedora Workstation for some time now. If you use a monitor with a high enough DPI, Fedora Workstation automatically scales all the elements of the desktop at a 2:1 ratio, and everything displays crisply and not too small. However, there are a couple of caveats with the current support. The scaling can currently only be 1:1 or 2:1; there is no way to have fractional ratios. Additionally, the DPI scaling applies to all displays attached to your machine. So if you have a laptop with a high DPI display and an external monitor with a lower DPI, the scaling can get a little odd. Depending on your setup, one of the displays will render either super-small or super-large.

A mockup of how running the same scaling ratio on a low DPI and high DPI monitor might look. The monitor on the right is a 24-inch desktop monitor with oversized window decorations.

Both of these limitations have technical reasons, such as how to deal with fractions of pixels when scaling by something other than 2. However, in a recent blog post, developer Matthias Clasen talks about how the technical issues in the underlying system have been addressed. To introduce mixed-DPI settings, the upstream developers have added per-monitor framebuffers, updated the monitor configuration API, and added support for mixed DPIs to the Display panel. Work is also underway upstream to tackle the fractional scaling issue. For further technical details, be sure to read the post by Matthias. All this awesome work by the upstream developers means that in a Fedora release in the not too distant future, high DPI support will be much, much better.
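To make the integer-only limitation concrete, here is a rough sketch in Python of that kind of scaling decision (the 192 DPI threshold is an illustrative assumption, not the exact GNOME heuristic):

```python
def integer_scale_factor(width_px, width_mm, hidpi_threshold_dpi=192):
    """Pick a whole-number scale factor (1 or 2) for a display.

    With only integer factors available, a display either doubles
    everything or scales nothing, which is exactly the mixed-DPI
    problem described above.
    """
    dpi = width_px / (width_mm / 25.4)  # pixels per inch
    return 2 if dpi >= hidpi_threshold_dpi else 1

# A 13.3" 3200x1800 laptop panel (~294 mm wide) crosses the threshold,
# while a 24" 1920x1080 desktop monitor (~527 mm wide) does not.
print(integer_scale_factor(3200, 294))  # 2
print(integer_scale_factor(1920, 527))  # 1
```

With fractional scaling, the same laptop panel could instead use a ratio like 1.5:1, which is what the upstream work aims to enable.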

PHP Tour - Nantes 2017

Posted by Remi Collet on May 23, 2017 04:55 AM

Back from  PHP Tour 2017 in Nantes

As for every AFUP event, organization was perfect, and I was able to meet a lot of developers and PHP users.

This year, I gave a talk "About PHP Quality", covering:

  • versions and release cycle
  • security management
  • PHP 7.2 roadmap
  • PHP QA and Fedora QA

I hope the attendees will remember how much stability is a priority for the project and why tests are so important, as well as the value of testing their projects with Release Candidates of stable versions (7.0.x, 7.1.x) and with Betas of future versions (7.2, 7.3, 8.0...).

You can read the slides: Nantes2017.pdf

Comments on joind.in

Indeed, as stated by Eric, lack of time (only a 35-minute talk) didn't allow me to cover some QA actions in enough depth; e.g. I could have shown some examples from Koschei (Fedora QA for PHP).

Soon: PHP Forum 2017 (Paris)

Fixing Bug 968696

Posted by Adam Young on May 23, 2017 03:47 AM

Bug 968696

The word Admin is used all over the place. To administer was originally something servants did to their masters. In one of the greater inversions of linguistic history, we now use Admin as a way to indicate authority. In OpenStack, the admin role is used for almost all operations that are reserved for someone with a higher level of authority. These actions are not expected to be performed by people with the plebeian Member role.

Global versus Scoped

We have some objects that are global, and some that are scoped to projects. Global objects are typically things used to run the cloud, such as the set of hypervisor machines that Nova knows about. Everyday members are not allowed to “Enable Scheduling For A Compute Service” via the HTTP Call PUT /os-services/enable.

Keystone does not have a way to do global roles. All roles are scoped to a project. This by itself is not a problem. The problem is that a resource like a hypervisor does not have a project associated with it. If keystone can only hand out tokens scoped to projects, there is still no way to match the scoped token to the unscoped resource.

So, what Nova and many other services do is just look for the Role. And thus our bug. How do we go about fixing this?

Use cases

Let me see if I can show this.

In our initial state, we have two users. Annie is the cloud admin, responsible for maintaining the overall infrastructure, with tasks such as “Enable Scheduling For A Compute Service”. Pablo is a project manager. As such, he has to do admin-level things, but only within his project, such as setting the metadata used for servers inside this project. Both operations are currently protected by the “admin” role.

Role Assignments

Let's look at the role assignment object diagram. For this discussion, we are going to assume everything is inside a domain called “Default”, which I will leave out of the diagrams to simplify them.

In both cases, our users are explicitly assigned roles on a project: Annie has the Admin role on the Infra project, and Pablo has the Admin role on the Devel project.


The API call to Add Hypervisor only checks the role on the token, and enforces that it must be “Admin.”  Thus, both Pablo and Annie’s scoped tokens will pass the policy check for the Add Hypervisor call.

How do we fix this?

Scope everything

Let's assume, for the moment, that we were able to instantly run a migration that added a project_id to every database table that holds a resource, and to every API that manages those resources. What would we use to populate that project_id? What value would we give it?

Let's say we add an admin project value to Keystone. When a new admin-level resource is made, it gets assigned to this admin project. All of the resources we already have should get this value, too. How would we communicate this project ID? We don't have a Keystone instance available when running the Nova database migrations.

Turns out Nova does not need to know the actual project_id.  Nova just needs to know that Keystone considers the token valid for global resources.

Admin Projects

We’ve added a couple of values to the Keystone configuration file: admin_domain_name and admin_project_name. These two values are how Keystone specifies which project represents the admin project. When these two values are set, all token validation responses contain a value for is_admin_project. If the project requested matches the domain and project name, that value is True, otherwise False.


Instead, we want the create_cell call to use a different rule. Instead of the scope check performed by admin_or_owner, it should confirm the admin role, as it did before, and also that the token has the is_admin_project flag set.


Keystone already has support for setting is_admin_project, but none of the remote services are honoring it yet. Why? In part because, in order for it to make sense for one to do so, they all must do so. But also because we cannot predict which project would be the admin project.

If we select a project based on name (e.g. Admin) we might be selecting a project that does not exist.

If we force that project to exist, we still do not know which users to assign to it. We would have effectively broken their cloud, as no users could execute global admin-level tasks.

In the long run, the trick is to provide a transition plan for when the configuration options are unset.

The Hack

If no admin project is set, then every project is an admin project. This is enforced by oslo-context, which is used in policy enforcement.

Yeah, that seems surprising, but it turns out that we have just codified what every deployment already has. Look at the bug description again:

Problem: Granting a user an “admin” role on ANY tenant grants them unlimited “admin”-ness throughout the system because there is no differentiation between a scoped “admin”-ness and a global “admin”-ness.

Adding in the field is a necessary precursor to solving it, but the real problem is in the enforcement in Nova, Glance, and Cinder. Until they enforce on the flag, the bug still exists.
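The fallback described here can be sketched roughly as follows (a simplified illustration in Python, not the actual oslo-context code; the token dictionary shape is an assumption):

```python
def passes_global_admin_check(token, admin_project_configured):
    """Decide whether a validated token may perform global admin tasks."""
    has_admin_role = "admin" in token.get("roles", [])
    if not admin_project_configured:
        # The hack: with no admin project set, every project counts as
        # the admin project, so the role alone is enough (today's behaviour).
        return has_admin_role
    # New enforcement: the role must come with the is_admin_project flag
    # that Keystone sets on token validation responses.
    return has_admin_role and token.get("is_admin_project", False)

# Pablo's token: admin role on the Devel project, not the admin project.
pablo = {"roles": ["admin"], "is_admin_project": False}
print(passes_global_admin_check(pablo, admin_project_configured=False))  # True
print(passes_global_admin_check(pablo, admin_project_configured=True))   # False
```

The second call shows how the bug is closed once an admin project is configured: Pablo's scoped admin token no longer passes a global-admin check.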

Fixing things

There is a phased plan to fix things.

  1. Ship the is_admin_project mechanism in Keystone, but leave it disabled by default.
  2. Add is_admin_project enforcement in the policy file for all of the services
  3. Enable an actual admin_project in devstack and Tempest
  4. After a few releases, when we are sure that people are using admin_project, remove the hack from oslo-context.
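For step 2, the enforcement change in each service's policy file would look something like this hypothetical policy.json fragment (the rule name and exact check syntax vary per service; `is_admin_project:True` here stands for a check against the flag in the token validation response):

```json
{
    "context_is_admin": "role:admin and is_admin_project:True"
}
```

Until an admin project is actually configured, the oslo-context hack keeps this rule behaving exactly like the old `role:admin` check, which is what makes a phased rollout possible.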

This plan was discussed and agreed upon by the policy team within Keystone, and vetted by several of the developers in the other projects, but it seems it was never fully disseminated, and thus the patches have sat in a barely reviewed state for a long while…over half a year.  Meanwhile, the developers focused on this have shifted tasks.

Now’s The Time

We’ve got a renewed effort, and some new, energetic developers committed to making this happen. The changes have been rewritten with advice from earlier code reviews and resubmitted. This bug has been around for a long time: Bug #968696 was reported by Gabriel Hurley on 2012-03-29. It's been a hard task to come up with and execute a plan to solve it. If you are a core project reviewer, please look for the reviews for your project, or, even better, talk with us on IRC (Freenode #openstack-keystone) and help us figure out how to best adjust the default policy for your service.


xinput list shows a "xwayland-pointer" device but not my real devices and what to do about it

Posted by Peter Hutterer on May 23, 2017 12:56 AM

TLDR: If you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging/configuration with xinput will not work.

For many years, the xinput tool has been a useful tool to debug configuration issues (it's not a configuration UI btw). It works by listing the various devices detected by the X server. So a typical output from xinput list under X could look like this:

whot@jelly:~> xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ SynPS/2 Synaptics TouchPad id=22 [slave pointer (2)]
⎜ ↳ TPPS/2 IBM TrackPoint id=23 [slave pointer (2)]
⎜ ↳ ELAN Touchscreen id=20 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ Power Button id=6 [slave keyboard (3)]
↳ Video Bus id=7 [slave keyboard (3)]
↳ Lid Switch id=8 [slave keyboard (3)]
↳ Sleep Button id=9 [slave keyboard (3)]
↳ ThinkPad Extra Buttons id=24 [slave keyboard (3)]
Alas, xinput is scheduled to go the way of the dodo. More and more systems are running a Wayland session instead of an X session, and xinput just doesn't work there. Here's an example output from xinput list under a Wayland session:

$ xinput list
⎡ Virtual core pointer id=2 [master pointer (3)]
⎜ ↳ Virtual core XTEST pointer id=4 [slave pointer (2)]
⎜ ↳ xwayland-pointer:13 id=6 [slave pointer (2)]
⎜ ↳ xwayland-relative-pointer:13 id=7 [slave pointer (2)]
⎣ Virtual core keyboard id=3 [master keyboard (2)]
↳ Virtual core XTEST keyboard id=5 [slave keyboard (3)]
↳ xwayland-keyboard:13 id=8 [slave keyboard (3)]
As you can see, none of the physical devices are available, the only ones visible are the virtual devices created by XWayland. On a Wayland session, the X server doesn't have access to the physical devices. Instead, it talks via the Wayland protocol to the compositor. This image from the Wayland documentation shows the architecture:
In the above graphic, devices are known to the Wayland compositor (1), but not to the X server. The Wayland protocol doesn't expose physical devices; it merely provides a 'pointer' device, a 'keyboard' device and, where available, touch and tablet tool/pad devices (2). XWayland wraps these into virtual devices and provides them via the X protocol (3), but they don't represent the physical devices.

This usually doesn't matter, but when it comes to debugging or configuring devices with xinput we run into a few issues. First, configuration via xinput usually means changing driver-specific properties but in the XWayland case there is no driver involved - it's all handled by libinput inside the compositor. Second, debugging via xinput only shows what the wayland protocol sends to XWayland and what XWayland then passes on to the client. For low-level issues with devices, this is all but useless.

The takeaway here is that if you see devices like "xwayland-pointer" show up in your xinput list output, then you are running under a Wayland compositor and debugging with xinput will not work. If you're trying to configure a device, use the compositor's configuration system (e.g. gsettings). If you are debugging a device, use libinput-debug-events. Or compare the behaviour between the Wayland session and the X session to narrow down where the failure point is.
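Before reaching for xinput at all, a quick way to check which kind of session you are in is to inspect the session environment; a minimal sketch (XDG_SESSION_TYPE and WAYLAND_DISPLAY are the conventional variables set by login managers and Wayland compositors):

```python
import os

def session_type(env=None):
    """Guess whether this is a Wayland or X11 session from the environment."""
    env = os.environ if env is None else env
    # XDG_SESSION_TYPE is set by most login managers; WAYLAND_DISPLAY is
    # set inside a running Wayland session.
    if env.get("XDG_SESSION_TYPE") == "wayland" or "WAYLAND_DISPLAY" in env:
        return "wayland"
    return "x11"

print(session_type({"XDG_SESSION_TYPE": "wayland"}))  # wayland
print(session_type({"DISPLAY": ":0"}))                # x11
```

If this reports "wayland", the xwayland-* devices in xinput list are expected, and libinput-debug-events is the tool to use instead.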

The importance of reproducible bug reports

Posted by Till Maas on May 22, 2017 09:45 PM

A few days ago I reported a bug to the Fedora Infrastructure team because I noticed that the EFF Privacy Badger and uBlock Origin reported that they blocked external JavaScript code from the Google Tag Manager when I logged into a Fedora web application. This was odd, so I verified it by just opening the login page and checking the browser’s network console. There I could clearly see the request. Assuming the situation was now clear, I reported the bug and Patrick soon responded to it. However, he was unable to reproduce the problem. I checked again and could not see the problem anymore either. This was strange, because there was no obvious explanation for why I saw the request earlier. The big difference was that I had used a different system when I initially found the bug than when I tried to reproduce the issue.

So I went to the system I had found the issue on initially and checked if I could reproduce the problem. It reappeared. Now I got a bad feeling. I feared that my system was somehow compromised, given that a strange JavaScript was injected into websites I visit that I could not see on other systems. The JavaScript requested URLs with the parameter GTM-KHM7SWW. Google finds that value in strange Asian web pages, and this did not help me calm down. Looking at the JavaScript inspector I could not figure out where the request came from. The source seemed to be VM638 instead of an actual script file. Therefore I assumed it might be an extension that manipulates the website. Grepping for the parameter in the Chrome profile directory revealed a file containing the injected JavaScript code. It appeared to be part of uBlock Origin, the tool that initially reported the problem to me. To figure out what was going on I tried to find the code in the official Git repository. But I could not find it. The next step was to set up a similar browser with uBlock Origin on a different system, but then I could not find the parameter anymore. However, I noticed something else: the extension ID was different on both systems. After looking at the Chrome store the problem became obvious: I had installed uBlock Adblock Plus instead of uBlock Origin. According to the authors' description, it is a fork of uBlock Origin and Adblock Pro. However, there does not seem to be a proper project page with source code. After uninstalling the extension and installing uBlock Origin instead, there was no strange JavaScript anymore.

But I still wanted to figure out what had happened there. Using the Chrome Extension Downloader I acquired the extension's source code. Unfortunately it was in a binary format ("data" according to the file utility), but unzip was able to extract it; it only complained about some extra data. There is also the CRX Extractor that converts .crx files to .zip files, but I do not know what extra magic it does.
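The reason unzip copes is that a .crx container is essentially a ZIP archive with a small CRX header in front of it. A minimal sketch of that idea (the header bytes below are a stand-in, not the real CRX header layout): locate the embedded ZIP local-file signature and open the rest as a normal archive.

```python
import io
import zipfile

def open_crx(data: bytes) -> zipfile.ZipFile:
    """Skip the CRX header by finding the first ZIP local-file
    signature, then open the rest as an ordinary ZIP archive."""
    offset = data.find(b"PK\x03\x04")
    if offset < 0:
        raise ValueError("no embedded ZIP archive found")
    return zipfile.ZipFile(io.BytesIO(data[offset:]))

# Build a stand-in .crx: the "Cr24" magic, some fake header bytes,
# then a real ZIP archive containing one file.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("manifest.json", "{}")
fake_crx = b"Cr24" + b"\x00" * 12 + buf.getvalue()

names = open_crx(fake_crx).namelist()  # ["manifest.json"]
```

This works because ZIP offsets are relative to the start of the archive data, so slicing at the first local-file signature leaves them valid.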

Comparing the contents with the actual uBlock Origin source revealed that they based their extension on a release from 3 March 2017. Besides adding some files, they also made these changes:

--- ../../scm/opensource/gh-gorhill-uBlock/src/js/contentscript.js 2017-05-16 23:06:13.574374977 +0200
+++ js/contentscript.js 2017-04-07 05:22:48.000000000 +0200
@@ -382,6 +382,7 @@
this.xpe = document.createExpression(task[1], null);
this.xpr = null;
PSelectorXpathTask.prototype.exec = function(input) {
var output = [], j, node;
for ( var i = 0, n = input.length; i < n; i++ ) {
@@ -846,6 +847,12 @@
// won't be cleaned right after browser launch.
if ( document.readyState !== 'loading' ) {
(new vAPI.SafeAnimationFrame(vAPI.domIsLoaded)).start();
+ var PSelectorGtm = document.createElement('script');
+ PSelectorGtm.title = 'PSelectorGtm';
+ PSelectorGtm.id = 'PSelectorGtm';
+ PSelectorGtm.text = "var dataLayer=dataLayer || [];\n(function(w,d,s,l,i,h){if(h=='tagmanager.google.com'){return}w[l]=w[l]||[];w[l].push({'gtm.start':new Date().getTime(),event:'gtm.js'});var f=d.getElementsByTagName(s)[0],j=d.createElement(s),dl=l!='dataLayer'?'&l='+l:'';j.async=true;j.src='//www.googletagmanager.com/gtm.js?id='+i+dl;f.parentNode.insertBefore(j,f);})(window,document,'script','dataLayer','GTM-KHM7SWW',window.location.hostname);";
+ document.body.appendChild(PSelectorGtm);
} else {
document.addEventListener('DOMContentLoaded', vAPI.domIsLoaded);
Only in js: is-webrtc-supported.js
Only in js: options_ui.js
Only in js: polyfill.js
diff -ru ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js js/storage.js
--- ../../scm/opensource/gh-gorhill-uBlock/src/js/storage.js 2017-05-16 23:07:28.956266120 +0200
+++ js/storage.js 2017-04-07 05:09:52.000000000 +0200
@@ -180,8 +180,7 @@
var listKeys = [];
if ( bin.selectedFilterLists ) {
listKeys = bin.selectedFilterLists;
- }
- if ( bin.remoteBlacklists ) {
+ } else if ( bin.remoteBlacklists ) {
var oldListKeys = µb.newListKeysFromOldData(bin.remoteBlacklists);
if ( oldListKeys.sort().join() !== listKeys.sort().join() ) {
listKeys = oldListKeys;
Only in js: vapi-background.js
Only in js: vapi-client.js
Only in js: vapi-common.js

For some reason they add code that injects the Google Tag Manager JavaScript into websites. I am not sure whether this is an intentional or an accidental change. Especially considering that the extension also appears to block the requests to the Google Tag Manager, it does not feel right. Unfortunately there does not seem to be an issue tracker where this could be reported.

The whole incident taught me that it is very important to be able to reproduce a problem in order to understand its nature. A minimal working example is usually a good idea, too. If I had set up a fresh browser profile before reporting the bug, I could have found the problem a little earlier.

Updating Logitech Hardware on Linux

Posted by Richard Hughes on May 22, 2017 08:41 PM

Just over a year ago Bastille security announced the discovery of a suite of vulnerabilities commonly referred to as MouseJack. The vulnerabilities targeted the low level wireless protocol used by Unifying devices, typically mice and keyboards. The issues included the ability to:

  • Pair new devices with the receiver without user prompting
  • Inject keystrokes, covering various scenarios
  • Inject raw HID commands

This gave an attacker with $15 of hardware the ability to basically take over remote PCs within wireless range, which could be up to 50m away. This makes sitting in a café quite a dangerous thing to do when any affected hardware is inserted, which for the unifying dongle is quite likely as it’s explicitly designed to remain in an empty USB socket. The main manufacturer of these devices is Logitech, but the hardware is also supplied to other OEMs such as Amazon, Microsoft, Lenovo and Dell where they are re-badged or renamed. I don’t think anybody knows the real total, but by my estimations there must be tens of millions of affected-and-unpatched devices being used every day.

Shortly after this announcement, Logitech prepared an update which mitigated some of these problems, and then a few weeks later prepared another update that worked around and fixed the various issues exploited by the malicious firmware. Officially, Linux isn't an OS supported by Logitech, so to apply the update you had to boot Windows, then download and manually deploy the firmware update. For people running Linux exclusively, like a lot of Red Hat's customers, the only choices were to stop using the Unifying products or to try to find a Windows computer that could be borrowed for the update. Some dongles are plugged in behind racks of computers and forgotten, or even hot-glued into place and unremovable.

The MouseJack team provided a firmware blob that could be deployed onto the dongle itself, and didn’t need extra hardware for programming. Given the cat was now “out of the bag” on how to flash random firmware to this proprietary hardware I asked Logitech if they would provide some official documentation so I could flash the new secure firmware onto the hardware using fwupd. After a few weeks of back-and-forth communication, Logitech released to me a pile of documentation on how to control the bootloader on the various different types of Unifying receiver, and the other peripherals that were affected by the security issues. They even sent me some of the affected hardware, and gave me access to the engineering team that was dealing with this issue.

It took a couple of weeks, but I rewrote the previously-reverse-engineered plugin in fwupd with the new documentation so that it could update the hardware exactly according to the official documentation. This now matches 100% the byte-by-byte packet log compared to the Windows update tool. Magic numbers out, #define’s in. FIXMEs out, detailed comments in. Also, using the documentation means we can report sensible and useful error messages. There were other nuances that were missed in the RE’d plugin (for example, making sure the specified firmware was valid for the hardware revision), and with the blessing of Logitech I merged the branch to master. I then persuaded Logitech to upload the firmware somewhere public, rather than having to extract the firmware out of the .exe files from the Windows update. I then opened up a pull request to add the .metainfo.xml files which allow us to build a .cab package for the Linux Vendor Firmware Service. I created a secure account for Logitech and this allowed them to upload the firmware into a special testing branch.

This is where you come in. If you would like to test this, you first need a version of fwupd that is able to talk to the hardware. For this, you need fwupd-0.9.2-2.fc26 or newer. You can get this from Koji for Fedora.

Then you need to change the DownloadURI in /etc/fwupd.conf to the testing channel. The URI is in a comment in the config file, so there is no need to list it here. Then reboot, or restart fwupd. After that you can either launch GNOME Software and click Install, or type fwupdmgr refresh && fwupdmgr update on the command line. Soon we'll be able to update more kinds of Logitech hardware.

If this worked, or you had any problems, please leave a comment on this blog or send me an email. Thanks should go to Red Hat for letting me work on this for so long, and even more thanks to Logitech for making it possible.

UDisks to build on libblockdev!?

Posted by Storage Configuration Tools on May 22, 2017 03:00 PM

As a recent blog post mentioned, there is a pull request for UDisks proposing that the master-libblockdev branch be merged into master. What would that mean?

FLOSS - the scary monster?

Posted by Radka Janek on May 22, 2017 03:00 PM

How welcoming is the Open Source community? And I'm talking about Linux specifically. I would like to tell you a little bit about my experiences over the last year or so. I already touched on this topic at the end of my previous post, but I would like to fully explain the problem and hopefully spark some hope. I will be saying "you" a lot, but I may not mean you. Please don't take it personally.

I'm a former game programmer, which is obviously a closed-source industry. I'm also a .NET Engineer; yes, that is my job title at Red Hat. I work on C# stuff on Linux; I work on the open source .NET Core.

I do everything at 110%, so working for Red Hat automatically meant that I had to jump on the Fedora train as well. I was really happy; I felt welcome there, and I felt that my contribution meant something. However, now I realise that I was a little bit lucky to attract the right people: I was quickly surrounded by awesome Fedora contributors and open-minded RedHatters at work. Everyone accepted me; when I mentioned that I work with C# and .NET and whatever, they were curious about the topic, and I would like to believe genuinely so. ".NET on Linux? So cool…"

As I meet more and more people from the wider community, I realise that it was just a small sweet circle of people around me. Random people, whether they are random programmers, server administrators, Fedora contributors, or even my own colleagues at Red Hat, often react with something along the lines of "Microsoft is penetrating Red Hat!" or "Microsoft is entering open source to destroy it from within." That is the idea people generally have in the FLOSS community. People have these weird conspiracy ideas and pursue them way too strongly.

It is the work of many good developers and good people. Why do you insult their work without knowing anything about it at all? Let me ask you an important question then: if you're reading this, it's safe to assume that you're an Open Source contributor, maybe a little bit more. Maybe you're a FLOSS advocate. The question is simple: do you want these new contributors to feel welcome, or to be afraid of FLOSS? Do you want game developers and .NET engineers to love it, or to hate it and be scared of the community? What these closed-minded open-advocates are doing does not send the best message to the closed-source world. You're not making it more welcoming and sweet for all those formerly closed-source developers.

Welcome new open source developers who may have a background in closed source; help them, show them that it's awesome. Stop trying to scare them away. Keep building a nice and inclusive community.

I'm not going around trashing Python either, even though I've had plenty of experience with it and I did not like it. Why not? Because it would by proxy also trash the people working with it. I would merely say that I did not like some features of the language, such as whitespace syntax. You can do the same about Microsoft. I don't like their products; they are not my fit. Too big solutions for my taste, I like to keep it a bit more simple. I don't like their FreeToPlayWindows10 business model because it so reminds me of my former profession. I don't like that they are buying their way into the Linux Foundation, because buying your way into anything is just not cool. Neither one of these sentences would insult me if I were working with Visual Studio on Windows 10.

Word your opinions carefully with a bit of empathy, it is real humans reading them. Tread softly because you tread on my dreams.

Reporting and monitoring storage actions

Posted by Storage Configuration Tools on May 22, 2017 01:55 PM

Two recent blog posts focused on reporting and monitoring storage events related to failures, recoveries and, in general, device state changes. However, other things happen to storage too. The storage configuration is changed from time to time, either by administrators or automatically as a reaction to some trigger. And there are components of the system, as well as its users, that would benefit from getting information about such changes.

Storaged merged with UDisks, new releases 2.6.4 and 2.6.5

Posted by Storage Configuration Tools on May 22, 2017 12:48 PM

Quite a lot has changed since our last blog post about the storaged project. The biggest news is that we are no longer working on storaged. We are now "again" working on UDisks.

Test Days: Internationalization (i18n) features of Fedora 26

Posted by Fedora Community Blog on May 22, 2017 12:39 PM

All this week, we will be testing the i18n features in Fedora 26, which are as follows:

  • Fontconfig cache – The fontconfig cache files are now placed in /var/cache/fontconfig. This seems incompatible with the OSTree model, so there is a proposal to move them to /usr/lib/fontconfig/cache.
  • Libpinyin 2.0 – libpinyin now provides 1-3 sentence candidates instead of a single sentence candidate, which greatly improves the guessed-sentence correction rate.
There have also been improvements to features introduced in previous versions of Fedora:
  • Emoji typing – In the computing world, it's rare to find a person who doesn't know about emoji. It used to be difficult to type emoji in Fedora; now, Fedora 26 has an emoji typing feature.
  • Unicode 9.0 – With each release, Unicode introduces new characters and scripts to its encoding standard. There are a good number of additions in Unicode 9.0, and important libraries have been updated to bring them into Fedora.
  • IBus typing booster multilingual support – IBus typing booster now provides multilingual support (typing more than one language with a single IME, with no need to switch).

Other than this, we also need to make sure all other languages work well, specifically input, output, storage, and printing.

How to participate

Most of the information is available on the Test Day wiki page. If you have any doubts, feel free to send an email to the testing team mailing list.

Though it is a test day, we normally keep it open for the whole week. If you don't have time tomorrow, feel free to complete it in the coming days and upload your test results.

Let’s test and make sure this works well for our users!

The post Test Days: Internationalization (i18n) features of Fedora 26 appeared first on Fedora Community Blog.

How to make a Fedora USB stick

Posted by Fedora Magazine on May 22, 2017 11:57 AM

The Fedora Media Writer application is the quickest and easiest way to create a Fedora USB stick. If you want to install or try out Fedora Workstation, you can use Fedora Media Writer to copy the Live image onto a thumbdrive. Alternatively, Fedora Media Writer will also copy larger (non-“Live”) installation images onto a USB thumb drive. Fedora Media Writer is also able to download the images before writing them.

Install Fedora Media Writer

Fedora Media Writer is available for Linux, Mac OS, and Windows. To install it on Fedora, find it in the Software application.

Screenshot of Fedora Media Writer in GNOME Software

Alternatively, use the following command to install it from a terminal:

sudo dnf install mediawriter

Links to the installers for the Mac OS and Windows versions of Fedora Media Writer are available from the Downloads page on getfedora.org.

Creating a Fedora USB

After launching Fedora Media Writer, you will be greeted with a list of the Fedora editions available to download and copy to your USB drive. The two main options here are Fedora Workstation and Fedora Server. Alternatively, you can click the icon at the bottom of the list to display all the additional Spins and Labs that the Fedora community provides. These include the KDE Spin, the Cinnamon Spin, the Xfce Spin, the Security Lab, and the Fedora Design Suite.

Screenshot of the Fedora Media Writer main screen, showing all the Fedora Editions, Labs and Spins

Click on the Fedora edition, Spin or Lab you want to download and copy to your new USB. A description of the software will be presented to you:

Screenshot of the Fedora Workstation details page in Fedora Media Writer

Click the Create Live USB button in the top right to start the download of your new Fedora image. While the image is downloading, insert your USB drive into your computer, and choose that drive in the dropdown. Note that if you have previously downloaded a Fedora image with the Media Writer, it will not download it again; it will simply use the version you have already downloaded.

Screenshot of a Fedora Workstation ISO downloading in Fedora Media Writer

After the download is complete, double check you are writing to the correct USB drive, and click the red Write to Disk button.

Screenshot of writing Fedora Workstation to a Fedora USB in Fedora Media Writer


Already have an ISO downloaded?

But what if you have previously downloaded an ISO through your web browser? Media Writer also has an option to copy any ISO already on your filesystem to a USB. Simply choose the Custom Image option from the main screen of Fedora Media Writer, pick the ISO in the file browser, and choose Write to Disk.

Slice of Cake #8

Posted by Brian "bex" Exelbierd on May 22, 2017 10:22 AM

Diet cake this week …

A slice of cake

Last week as FCAIC I:

  • I had a bunch of meetings and flailed around in my email. Not every week is exciting, fun or dramatic :). The week was also very short because I returned from OSCAL in Albania on Monday and lost a day to travel.

A la Mode

  • As a human I took some holiday (vacation) and was not at work on Friday or the first half of Monday (today). I got to see beautiful Cluj-Napoca, Romania and relax :).

Cake Around the World

I’ll be traveling to:

  • Open Source Summit in Tokyo, Japan from 31 May - 2 June.
  • LinuxCon in Beijing, China from 19-20 June where I am helping to host the Fedora/CentOS/EPEL Birds of a Feather.
  • Working from Gdansk, Poland from 3-4 July.
  • Flock on Cape Cod, Massachusetts, USA from 29 August - 1 September.

You know how to fix enterprise patching? Please tell me more!!!

Posted by Josh Bressers on May 22, 2017 12:54 AM
If you pay attention to Twitter at all, you've probably seen people arguing about patching your enterprise after the WannaCry malware. The short story is that Microsoft fixed a very serious security flaw a few months before the malware hit. That means there are quite a few machines on the Internet that haven't applied a critical security update. Of course, as you can imagine, there is plenty of back and forth about updates. There are two basic arguments I keep seeing.

Patching is hard and if you think I can just turn on Windows Update for all these computers running Windows 3.11 on token ring you've never had to deal with a real enterprise before! You out of touch hipsters don't know what it's really like here. We've seen things, like, real things. We party like it's 1995. GET OFF MY LAWN.

The other side sounds a bit like this.

How can you be running anything that's less than a few hours old? Don't you know what the Internet looks like! If everyone just applied all updates immediately and ran their business in the cloud using agile scrum based SecDevSecOps serverless development practices everything would be fine!

Of course both of these groups are wrong for basically the same reason. The world isn't simple, and whatever works for you won't work for anyone else. The tie that binds us all together is that everything is broken, all the time. All the things we use are broken, how we use them is broken, and how we manage them is broken. We can't fix them even though we try and sometimes we pretend we can fix things.

However ...

Just because everything is broken, that's no excuse to do nothing. It's easy to declare something too hard and give up. A lot of enterprises do this; a lot of enterprise security people use this as the reason why they can't update their infrastructure. On the other side, though, sometimes moving too fast is more dangerous than moving too slow. Reckless updates are no better than no updates. Sometimes there is nothing we can do. Security as an industry is basically one big giant Kobayashi Maru test.

I have no advice to give on how to fix this problem. I think both groups are silly and wrong, but why I think that is unimportant. The right way is for everyone to have civil conversations where we put ourselves in the other person's shoes. That won't happen, though; it never happens, even though basically every leader ever has said that sort of behavior is a good idea. I suggest you double down on whatever bad practices you've hitched your horse to. In the next few months we'll all have an opportunity to show why our way of doing things is the worst way ever, and we'll also find an opportunity to mock someone else for not doing things the way we do.

In this game there are no winners and losers, just you. And you've already lost.

GIMP rocks!

Posted by Julita Inca Chiroque on May 21, 2017 08:08 PM

One problem I used to have was accidentally hiding the two main docks, the Toolbox with Tool Options and the Layers-Brushes dialog, which made my GIMP seem to have no accessible icons at all.

Go to Windows -> Dockable Dialogs to have them back:

After cutting out the edges of the original picture using the Scissors Select tool:

I added another layer with a full black rectangle (Edit – Fill with FG Color, with black as the foreground color)

I then added another horizontal rectangle and used the Text tool to add words:

To convert to black and white, use Colors – Components – Channel Mixer

Check the Monochrome option. To colorize, I selected Colors – Colorize

Blurring the edge of my hair gave it a nice look at the end.

Last but not least, I added another layer with the GNOME logo to put it inside my eye 😉

* Zoom out was done with SHIFT + and ESC in case CTRL Z is not working.

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: blanco negro gimp, colorear gimp, fedora, GIMP, GNOME, GNOME Peru Challenge, Julita Inca, Julita Inca Chiroque, letras GIMP

Episode 48 - Machine Learning: Not actually magic

Posted by Open Source Security Podcast on May 21, 2017 07:53 PM
Josh and Kurt have a guest! Mike Paquette from Elastic discusses the fundamentals and basics of Machine Learning. We also discuss how ML could have helped with WannaCry.

Download Episode

Show Notes

Using Ansible to automate oVirt and RHV environments

Posted by Luc de Louw on May 21, 2017 01:29 PM

Bored of clicking in the WebUI of RHV or oVirt? Automate it with Ansible! Set up a complete virtualization environment within a few minutes. Some time ago, Ansible gained a module for orchestrating RHV environments. It allows you to automate … Continue reading

The post Using Ansible to automate oVirt and RHV environments appeared first on Luc de Louw's Blog.

A quick look at SSL options and HTTP headers

Posted by Didier Fabert (tartare) on May 21, 2017 10:54 AM

Apache Logo

General options

  • ServerTokens defines what information Apache may advertise about itself. Bragging is incompatible with security.
    • Prod: the most secure value; the server will only send its name:
      Server: Apache
    • Major: the server will only send its name and its major version number:
      Server: Apache/2
    • Minor: the server will only send its name and its major and minor version numbers:
      Server: Apache/2.4
    • Os: the server will send its name, its full version number and the name of the operating system:
      Server: Apache/2.4.25 (Fedora)
    • Full: the server will send its name, its version number, the name of the operating system and the list of active modules with their version numbers:
      Server: Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1
  • ServerSignature defines the signature (footer) that Apache adds to pages it generates itself (typically error pages)
    • Off: no signature on generated pages
    • On: signature present, with the same information as defined by the ServerTokens directive, plus the domain and the port
      <address>Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1 Server at www.tartarefr.eu Port 80</address>
    • Email: signature present, with the same information as defined by the ServerTokens directive, plus the domain, the port and the email address of the domain administrator
      <address>Apache/2.4.25 (Fedora) OpenSSL/1.0.2k-fips mod_auth_kerb/5.4 mod_wsgi/4.4.23 Python/2.7.13 PHP/7.0.18 mod_perl/2.0.10 Perl/v5.24.1 Server at <a href="mailto:fake@tartarefr.eu">www.tartarefr.eu</a> Port 80</address>
  • TraceEnable defines whether the HTTP TRACE method is allowed. This method is mainly useful for testing and diagnostics and has no place on a production server. It can take two values: On (the method is allowed) or Off (the method is disabled)

SSL options

Apart from the usual directives for SSL setup (SSLEngine, SSLCertificateFile, SSLCertificateKeyFile, SSLCertificateChainFile), a few deserve a short explanation.

  • SSLCipherSuite defines the list of allowed ciphers. Currently, to get an A+ on SSL Labs, some medium and even some high ciphers have to be disabled.
  • SSLHonorCipherOrder defines whether the cipher order from the SSLCipherSuite directive must be honored. It is recommended to enforce the defined order by setting the value to On. Typically the server will then try all the HIGH ciphers before trying the MEDIUM ones.
  • SSLProtocol defines the allowed protocols: here we accept all protocols except SSL version 2 and version 3. Excluding TLSv1 (TLSv1.0) as well is starting to become worth considering.
    SSLProtocol all -SSLv2 -SSLv3
  • SSLCompression enables or disables compression over SSL. Since the CRIME attack exploits a weakness in compression, disable this feature by setting the parameter to off.

Security-related HTTP headers

  • Set-Cookie can be hardened by adding two parameters that make cookies inaccessible to client-side scripts (HttpOnly) and ensure they are only transmitted over a secure connection (Secure), i.e. over HTTPS.
    Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure
  • X-Frame-Options tells the browser whether or not it may embed the web page in a frame. By preventing the page from being framed by an external site, you protect yourself against clickjacking.
    There are 3 possible values:

    • DENY: no framing at all
    • SAMEORIGIN: framing only from the same domain and the same protocol (HTTP or HTTPS)
    • ALLOW-FROM: framing only from the URI passed as an argument
  • X-XSS-Protection enables the cross-site scripting filters built into most browsers. The best configuration is to enable the browsers' protection: "X-XSS-Protection: 1; mode=block".
  • X-Content-Type-Options disables the browser's automatic MIME type sniffing and forces it to use only the type declared with Content-Type. The only valid value is nosniff.
  • Referrer-Policy defines the policy for sending navigation information in the Referer header
    • no-referrer: the Referer header will never be sent.
    • no-referrer-when-downgrade: the header will be omitted when security is downgraded (HTTPS->HTTP); otherwise it will be sent.
    • same-origin: the header will only be sent if the destination domain is the same as the origin.
    • origin: the header will be sent but will only contain the domain of the origin page.
    • strict-origin: the header will be omitted when security is downgraded (HTTPS->HTTP); otherwise it will only contain the domain of the origin page.
    • origin-when-cross-origin: the header will be sent; the URI will be complete if the destination domain is the same as the origin, but reduced to the domain of the origin page if the destination domain differs.
    • strict-origin-when-cross-origin: the header will be omitted when security is downgraded (HTTPS->HTTP); otherwise the URI will be complete if the destination domain is the same as the origin, but reduced to the domain of the origin page if the destination domain differs.
    • unsafe-url: the header will always be sent with the complete URI
  • Content-Security-Policy lists the sources allowed to be included in the web page. By listing only the necessary sources, you prevent the browser from downloading malicious resources. The self keyword stands for the domain being visited.
    • default-src: defines the default allowed sources for all resource types
    • script-src: defines the allowed sources for scripts
    • object-src: defines the allowed sources for objects
    • style-src: defines the allowed sources for style sheets
    • img-src: defines the allowed sources for images
    • media-src: defines the allowed sources for media (video and audio)
    • frame-src: defines the allowed sources for frames
    • font-src: defines the allowed sources for fonts
    • connect-src: defines the sources that scripts are allowed to load
    • form-action: defines the allowed targets for form actions
    • plugin-types: defines the allowed sources for plugins
    • script-nonce: defines the allowed sources for scripts carrying the matching nonce argument
    • sandbox: defines the sandbox policy
    • reflected-xss: enables or disables the browsers' XSS protection filters
    • report-uri: defines a URI to which a report is sent in case of a policy violation
  • Strict-Transport-Security (HSTS) forces the browser to replace all insecure links with secure ones (HTTP->HTTPS) for the time given by the max-age parameter, which should therefore be configured to exceed the length of a browsing session. The page will not be displayed if the SSL certificate is not valid.
  • Public-Key-Pins protects against man-in-the-middle attacks using stolen (but valid) X.509 certificates. By giving the browser the fingerprints of the site's certificates, it will not trust any other valid certificates that are not listed for the site.
  • Expect-CT announces the site's status with respect to Chrome's upcoming Certificate Transparency requirements. The header contains the keyword enforce once the site is ready, but at first it is better to keep it in test mode with the max-age=0 directive and optionally the report-uri parameter.


Example Apache configuration

These header definitions can be placed in the file /etc/httpd/conf.d/common.conf (create it if it does not exist); it will be included by the IncludeOptional conf.d/*.conf directive.

ServerTokens    Prod
ServerSignature Off

# Disable Trace
# Otherwise host is vulnerable to XST
TraceEnable Off

# Secure cookie with HttpOnly
Header always edit Set-Cookie ^(.*)$ $1;HttpOnly;Secure

Header always set X-Frame-Options SAMEORIGIN
Header always set X-XSS-Protection: "1;mode=block"
Header always set X-Content-Type-Options nosniff
Header always set Referrer-Policy same-origin
Header always set Content-Security-Policy "default-src 'self' ; script-src 'self' report.tartarefr.eu https://s.w.org ; style-src 'self' fonts.googleapis.com fonts.gstatic.com ; report-uri https://report.tartarefr.eu/"
Header always set Strict-Transport-Security "max-age=31536000;includeSubDomains"
Header always set Expect-CT 'max-age=0; report-uri="https://report.tartarefr.eu/"'
Header always set Public-Key-Pins 'pin-sha256="YLh1dUR9y6Kja30RrAn7JKnbQG/uEtLMkBgFF2Fuihg=";pin-sha256="1B/6/luv+TW+JQWmX4Qb8mcm4uFrNUwgNzmiCcDDpyY=";max-age=2592000;includeSubdomains; report-uri="https://report.tartarefr.eu/"'

Unix Sockets For Auth

Posted by Robbie Harwood on May 21, 2017 04:00 AM

Let's not talk about the PAM/NSS stack and instead talk about a different weird auth thing on Linux.

So sockets aren't just for communication over the network. And by that I don't mean that one can talk to local processes on the same machine by connecting to localhost (which is correct, but goes over the "lo" network), but rather something designed for this purpose only: Unix domain sockets. Because they're restricted to local use only, their features can take advantage of both ends being managed by the same kernel.

I'm not interested in performance effects (and I doubt there are any worth writing home about), but rather what the security implications are. So of particular interest is SO_PEERCRED. With the receiving end of an AF_UNIX stream socket, if you ask getsockopt(2) nicely, it will give you back assurances about the connecting end of the socket in the form of a struct ucred. When _GNU_SOURCE is defined, this will contain pid, uid, and gid of the process on the other end.

It's worth noting that these are set while in the syscall connect(2). Which is to say that they can be changed by the process on the other end by things like dropping privileges, for instance. This isn't really a problem, though, in that it can't be exploited to gain a higher level of access, since the connector already has that access.

Anyway, the uid information is clearly useful; one can imagine filtering such that a connection came from apache, for instance (or not from apache, for that matter), or keeping per-user settings, or any number of things. The gid is less clearly useful, but I can immediately see uses in terms of policy setting, perhaps. But what about the pid?

Linux has a relative of plan9's procfs, which means there's a lot of information presented in /proc. (/proc can be locked down pretty hard by admins, but let's assume it's not.) proc(5) covers more of these than I will, but there are some really neat ones. Within /proc/[pid], the interesting ones for my purposes are:

  • cmdline shows the process's argv.

  • cwd shows the current working directory of the process.

  • environ similarly shows the process's environment.

  • exe is a symlink to the executable for the process.

  • root is a symlink to the process's root directory, which means we can tell whether it's in a chroot.

So it seems like we could use this to implement filtering by the process being run: for instance, we could do things only if the executable is /usr/bin/ssh. And indeed we can; /proc/[pid]/exe will be a symlink to the ssh binary, and everything works out.

There's a slight snag, though: /usr/bin/ssh is a native executable (in this case, an ELF file). But we can also run non-native executables using the shebang - e.g., #!/bin/sh, or #!/usr/bin/python2, and so on. While this is convenient for scripting, it makes the /proc/[pid]/exe value much less useful, since it will just point at the interpreter.

The way the shebang is implemented causes the interpreter to be run with argv[1] set to the input file. So we can pull it out of /proc/[pid]/cmdline and everything is fine, right?

Well, no. Linux doesn't canonicalize the path to the script file, so unless it was originally invoked using a non-relative path, we don't have that information.

Maybe we can do the resolution ourselves, though. We have the process environment, so $PATH-based resolution should be doable, right? And if it's a relative path, we can use /proc/[pid]/cwd, right?

Nope. Although inspecting the behavior of shells would suggest that /proc/[pid]/cwd doesn't change, this is a shell implementation detail; the program can just modify this value if it wants.

Even if we nix relative paths, we're still not out of the woods. /proc/[pid]/environ looks like exactly what we want, as the man page specifies that even getenv(3)/setenv(3) do not modify this. However, the next paragraph indicates the syscall needed to just move what region of memory it points to, so we can't trust that value either.

There's actually a bigger problem, though. Predictably, from the way the last two went, processes can just modify argv. So: native code only.

Anyway, thanks for reading this post about a piece of gssproxy's guts. Surprise!

Setting GNOME Pomodoro, a time limit app

Posted by Julita Inca Chiroque on May 20, 2017 06:00 PM

Managing workshops with discipline includes controlling time, besides organizing the topics, among other factors. One extension that GNOME offers is called GNOME Pomodoro. It is easy-to-use software that lets you set your breaks, helping you focus on a specific task to accomplish your goals.

  1. Installing GNOME Pomodoro

Some dependencies are required, as follows:

… et voilà

2. Configuring GNOME Pomodoro

Now you can see the settings panel, where you can set the Pomodoro duration as well as the break time. Click the indicator at the top of the screen to manage it.

On my first attempt, I set both the Pomodoro duration and the short break duration to 1 minute. I also changed the appearance to have a less stressful clock on my screen, and set the ticking sound to birds in the background. Finally, switch GNOME Pomodoro ON at the top to have the alarm ready!

From now on, during my 8-hour Python training classes, I am going to set 25 minutes to accomplish 3 Python exercises, with 5 minutes for a break 🙂

Filed under: FEDORA, GNOME, τεχνολογια :: Technology Tagged: challenge, challenge Peru, fedora, GNOME, GNOME Perú, GNOME Pomodoro, Julita Inca, Julita Inca Chiroque, Pomodoro

Fractional scaling goes east

Posted by Matthias Clasen on May 19, 2017 06:34 PM

When we introduced HiDPI support in GNOME a few years ago, we took the simplest possible approach that was feasible to implement with the infrastructure we had available at the time.

Some of the limitations:

  • You either get 1:1 or 2:1 scaling, nothing in between
  • The cut-off point is somewhat arbitrarily chosen, and you don’t get a say in it
  • In multi-monitor systems, all monitors share the same scale

Each of these limitations had technical reasons. For example, doing different scales per-monitor is hard to do as long as you are only using a single, big framebuffer for all of them. And allowing scale factors such as 1.5 leads to difficult questions about how to deal with windows that have a size like 640.5×480.5 pixels.

Over the years, we’ve removed the technical obstacles one-by-one, e.g. introduced per-monitor framebuffers. One of the last obstacles was the display configuration API that mutter exposes to the control-center display panel, which was closely modeled on XRANDR, and not suitable for per-monitor and non-integer scales. In the last cycle, we introduced a new, more suitable monitor configuration API, and the necessary support for it has just landed in the display panel.

With this, all of the hurdles have been cleared away, and we are finally ready to get serious about fractional scaling!

Yes, a hackfest!

Jonas and Marco happen to both be in Taipei in early June, so what better to do than to get together and spend some days hacking on fractional scaling support:


If you are a compositor developer (or plan on becoming one), or just generally interested in helping with this work, and are in the area, please check out the date and location by following the link. And, yes, this is a bit last-minute, but we still wanted to give others a chance to participate.

Sumantro Mukherjee: How Do You Fedora?

Posted by Fedora Magazine on May 19, 2017 09:30 AM

We recently interviewed Sumantro Mukherjee on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Sumantro Mukherjee?

Sumantro Mukherjee started using Linux in his freshman year. His interest in web development exposed him to open standards which ignited a desire to use an open source operating system. He learned about Fedora from a post on ‘Digit’ about Fedora 13. He enjoys listening to music, traveling and collecting currencies of different countries. Mukherjee was involved with open source before using Linux. Sumantro contributed to Firefox OS, Mozilla and Wikipedia.

Biryani, a mixed rice dish, is Mukherjee’s favorite food. Interstellar and Inception are Sumantro’s favorite movies. “My favorite part of the movie is the docking scene.” Sumantro continued, “space exploration and science fiction are some things which I love reading and watching about.” He enjoys movies that promote how humans can achieve what seems impossible with patience and effort. “Interstellar portrays how humans can do anything which might seem impossible at first, but with patience and effort, everything is possible!”

The Fedora Community

Sumantro found the Fedora community open and receptive to new contributors. “The very first impression was warm and welcoming. Adamw, Kamil, Petr Schindl and Sudhir helped me a lot in getting started.” Mukherjee would like to see improvement in the onboarding process for new contributors. “The Project invites users and contributors from designers to documentation and coders to testers.” Every Fedora user has something good to contribute to Fedora. Making it easier for potential contributors to find an area to contribute in is important. In March 2016 Sumantro joined Red Hat as an intern for Fedora Quality Assurance. “Adam Williamson and I started running onboarding calls for Fedora QA which was another essential part to welcome the new contributors and help them understand the testing process.”

Getting started guides:

Mukherjee is passionate about getting new contributors involved in the Fedora project. “There is no harm in breaking things while learning, and know that the community will help you if you ask the right question and follow the open source etiquette.” His recommendation to new contributors is to “Be vocal, be bold and ask as many times as you want.”

What Hardware and Software?

Sumantro prefers Lenovo T460s and X220s. His T460 is a beast. It has 20 GiB of RAM and an Intel Skylake i7, and handles his virtual machines with ease. Despite having a laptop he prefers a big screen and uses a Dell monitor. Mukherjee also loves to boot Fedora on ARM processors. “I currently use a Raspberry Pi 3 and a Samsung Artik to test Fedora ARM.”

Sumantro’s Applications

His desktop environment is GNOME with Wayland. Sumantro uses Sublime for web development, and Vim for shell scripts and Python. Mukherjee’s terminal of choice is Terminator. For version control he uses Git, GitHub and Arcanist. “Arcanist is a wrapper script that sits on top of other tools (e.g., Differential, linters, unit test frameworks, Git, Mercurial, and SVN) and provides pretty good command-line access to manage code review and perform some related revision control operations.”