February 12, 2016

Trusting a self-generated CA system-wide on Fedora

Say you’re using FreeIPA (or perhaps you’ve generated your own CA) and you want to have your machines trust it. Well in Fedora you can run the following command against the CA file:

# trust anchor rootCA.pem
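To confirm the anchor landed in the trust store (and to undo it later), the same p11-kit tool can be used. A small sketch; the `say` helper only prints the commands, so drop it to run them for real (as root):

```shell
# Dry-run helper: print each command instead of executing it.
say() { echo "$@"; }

say trust anchor rootCA.pem            # add the CA to the system trust store
say trust list                         # verify the CA now shows up
say trust anchor --remove rootCA.pem   # remove it again when no longer needed
```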

February 11, 2016

Moderate reviews in GNOME Software

I’m pondering adding something like this for GNOME Software:

Screenshot from 2016-02-11 20-31-28

The idea is it would be launched using a special geeky-user-only gnome-software --mode=moderate command line, so that someone can up or down-vote any reviews that are compatible with their country code. This would be used by super-users using community distros like Fedora, and perhaps people in a QA team for distros like RHEL. The reason I wanted to try to use gnome-software rather than just doing it in a web-app was that you have access to the user machine hash, so you can hide reviews the user has already voted on without requiring a username and password that has to be matched to the local machine. We also have all the widgets and code in place, so it is really just a couple of hundred lines of code for this sekret panel. The server keeps track of who voted on what and so reviewers can just close the app, open it a few weeks later and just continue only moderating the reviews that came in since then.

I can’t imagine the tool would be used by many people, but it does make reviewing comments easy. Comments welcome.

My experience of DevConf CZ 2016 - Day 1
This was my first DevConf; I was not able to join last year due to scheduling conflicts. I had a very good experience, and if I get the chance I would definitely like to attend DevConf 2017 :)

Day 1

  • From registration desk to conference room, everything was well organized.
  • I liked the keynote on the first day. It was motivating for all developers and provided key points on how to become a "Rock star". Right at the start, Tim mentioned that all these points came from his own experience of working with 1000+ engineers. 
    Tim Burke - Talking on "Rock Star Recipe"
  • We had planned a Globalization meeting, specifically for FLTG and talks with the Council. Most of the rooms were already booked for meetups; luckily we got the 1pm slot. Some time went into organizing this meetup and communicating with interested participants.

  • At DevConf there is no slot for lunch; instead there is a live food counter open for almost the whole day. One can simply go there at one's preferred time. It provided one more slot for sessions.
  • I met Remy on the first day. It was great meeting him; we had a quick conversation on G11N, and he mentioned he is interested in learning more and also has some plans for localization. 

Remy, Ani and Mike Fabian - During G11N meetup
  • The meeting was me, Noriko, Remy, Mike Fabian, Fale and Ani. We started by discussing on-boarding steps for new contributors and the significance of badges. Remy also told us about Fedora Hubs and how it will be beneficial. Luckily we got free lunch boxes from the earlier meeting and enjoyed lunch there :)

Summary of discussions in meetup
  • After the meeting I attended the talk "Ceph vs Gluster vs Swift: Similarities and Differences". It was a good talk and gave a nice idea of the basic concepts and the differences between these projects.

Lukáš Fryč - in Workshop
  • At the end of the day I attended the workshop "Create & deploy mobile apps in minutes with Red Hat Mobile Application Platform". It was already full; luckily I got a seat. Lukáš Fryč went at a good pace and made sure everyone stayed with him during the workshop. I had already deployed a few apps on OpenShift, which made it a bit easier and quicker for me. At the end of the session I had an app, developed by me, deployed on my mobile.

Me with Guide and Group during City Tour
  • On the first day DevConf had organized a "City Tour", and I joined it. The guide was excellent and kept us busy with interesting information about Brno.

Testing unstable gnome using xdg-app

Lot of interesting work on xdg-app lately!

I’ve created a new runtime based on the latest unstable gnome, and during the Gnome developer experience hackfest we made bundles for a bunch of core Gnome applications.

I’ve set up a nightly build of these so that anyone can play with the latest Gnome apps on any distro, without having to build anything.

Additionally, Richard and I have been working on making gnome-software able to work with xdg-app.

The culmination of this is using xdg-app to install gnome-software and then using that to install more xdg-apps:

Video: https://www.youtube.com/watch?v=RsNqT13uIQo

This works out of the box on Fedora 23, just make sure you have xdg-app 0.4.11 installed (it’s in updates-testing at the moment). For other distributions, I have made packages which are available here.

Once you have xdg-app installed, it's very easy to test the apps. First you need to add the remote repository:

$ curl -O http://sdk.gnome.org/nightly/keys/nightly.gpg
$ xdg-app --user remote-add --gpg-key=nightly.gpg gnome-nightly http://sdk.gnome.org/nightly/repo/

Then you need to install the runtime:

$ xdg-app --user install gnome-nightly org.gnome.Platform

And then you can install some app:

$ xdg-app --user install gnome-nightly org.gnome.Weather

At this point the app is installed and you should be able to start it like any regular app in your desktop. You can also manually start it via xdg-app:

$ xdg-app run org.gnome.Weather

The list of available apps can be seen with:

$ xdg-app --user remote-ls gnome-nightly --app

Or you can install and use gnome software like in the demo.
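The manual steps above can also be collected into one small script. This is just a sketch that prints each command instead of running it (drop the `run` prefix to execute for real); the remote name, repo URL and app ID are taken from the commands above:

```shell
#!/bin/sh
# Sketch: set up the gnome-nightly remote and install an app for the
# current user. `run` only prints the commands (dry run); remove it
# to actually execute them.
run() { echo "$@"; }

REMOTE=gnome-nightly
REPO=http://sdk.gnome.org/nightly/repo/
APP=org.gnome.Weather

run curl -O http://sdk.gnome.org/nightly/keys/nightly.gpg
run xdg-app --user remote-add --gpg-key=nightly.gpg "$REMOTE" "$REPO"
run xdg-app --user install "$REMOTE" org.gnome.Platform
run xdg-app --user install "$REMOTE" "$APP"
run xdg-app run "$APP"
```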

QEMU command-line: behavior of ‘-serial stdio’ vs. ‘-serial mon:stdio’

[Thanks: To Paolo Bonzini for writing the original patch; and to Alexander Graf for mentioning this on IRC some months ago, I had to go and look up the logs.]

I keep forgetting this: to avoid QEMU being terminated on SIGINT (Ctrl+c), supply the special parameter mon:stdio, instead of just stdio, to the -serial option (which redirects the virtual serial port to a host character device). The subtle difference between them:

  • -serial stdio: Redirects the virtual serial port onto stdio; upon Ctrl+c, QEMU immediately terminates.
  • -serial mon:stdio: In this mode, the virtual serial port and QEMU monitor are multiplexed onto stdio. And Ctrl+c is handled, i.e. QEMU won’t be terminated, and the signal will be passed to the guest.

In summary, a working example command-line:

$ qemu-system-x86_64               \
   -display none                   \
   -nodefconfig                    \
   -nodefaults                     \
   -m 2048                         \
   -device virtio-scsi-pci,id=scsi \
   -device virtio-serial-pci       \
   -serial mon:stdio               \
   -drive file=./fedora23.qcow2,format=qcow2,if=virtio

To toggle between the QEMU monitor prompt and the serial console, use Ctrl+a followed by ‘c’. To terminate QEMU while at the monitor prompt, (qemu), type ‘quit’.

All systems go
Service 'Fedora People' now has status: good: Everything seems to be working.
There are scheduled downtimes in progress
Service 'Fedora People' now has status: scheduled: migration
[Howto] Access Red Hat Satellite REST API via Ansible

As with all tools, Red Hat Satellite offers a REST API. Ansible offers a simple way to access the API.


Most of the programs and functions developed these days offer a REST API. Red Hat, for example, usually follows the “API first” methodology with most of its products, so all functions of all programs can be accessed via REST API calls. For example, I covered how to access the CloudForms REST API via Python.

While exploring a REST API via Python teaches a lot about the API and how to deal with all the basic tasks around REST communication, in daily enterprise business API calls should be automated – say hello to Ansible.

Ansible offers the URI module to work with generic HTTP requests. It offers various authentication options, can pass arbitrary headers, provides ways to deal with different return codes, and has a generic body field. Together with Ansible’s extensive variable features, this makes it the ideal combination for automated REST queries.


The setup is fairly simple: a Red Hat Satellite server in a recent version (6.1 or newer), Ansible, and that’s it. The URI module comes pre-installed with Ansible.

Since the URI module accesses the target via HTTP, the host actually executing the HTTP requests is the one on which the playbook runs. As a result, the host definition in the playbook needs to be localhost. In that case it doesn’t make sense to gather facts either, so gather_facts: no can be set to save time.

In the module definition itself, it might make sense for test environments to ignore certificate errors if the Satellite server certificate is not properly signed: validate_certs: no. Also, the underlying Python library sometimes stumbles when the server uses status code 401 to initiate authentication. In that case, the option force_basic_auth: yes might help.

Last but not least, the API itself must be understood. The appropriate documentation is pretty helpful here: Red Hat Satellite API Guide. Especially the numerous examples at the end are a good starting point for building your own REST calls in Ansible.

Getting values

Getting values via the REST API is pretty easy – the usual URL needs to be queried, the result is provided as JSON (in this case). The following example playbook asks the Satellite for the information about a given host. The output is reduced to the puppet modules, the number of modules is counted and the result is printed out.

$ cat api-get.yml
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    client: helium.example.com
  tasks:
    - name: get modules for given host from satellite
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/{{ client }}
        method: GET
        user: admin
        password: password
        force_basic_auth: yes
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json.all_puppetclasses | count }}"

Executing the playbook shows the number of installed Puppet modules:

$ ansible-playbook api-get.yml

PLAY [call API from Satellite] ************************************************ 

TASK: [get modules for given host from satellite] ****************************
ok: [localhost]

TASK: [output rest data] ****************************************************** 
ok: [localhost] => {
    "msg": "8"
}

PLAY RECAP ******************************************************************** 
localhost                  : ok=2    changed=0    unreachable=0    failed=0

If the Jinja filter string | count is removed, the actual Puppet classes are listed.
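To list the class names themselves rather than the raw JSON objects, the filter chain can be extended. A sketch, assuming each entry in all_puppetclasses carries a name attribute:

```yaml
    - name: output puppet class names
      debug: msg="{{ restdata.json.all_puppetclasses | map(attribute='name') | list }}"
```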

Performing searches

Performing searches is simply another URL, and thus works the exact same way. The following playbook shows a search for all servers which are part of a given Puppet environment:

- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
    clientenvironment: production
  tasks:
    - name: get Puppet environment from Satellite
      uri:
        url: https://{{ satelliteurl }}/api/v2/hosts/?search=environment={{ clientenvironment }}
        method: GET
        user: admin
        password: password
        force_basic_auth: yes
        validate_certs: no
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata.json }}"

Changing configuration: POST

While querying the REST API can already be pretty interesting, automation requires the ability to change values as well. This can be done by changing the method: in the playbook to POST. Also, additional headers are necessary, and a body defining what data will be posted to Satellite.

The following example implements the example CURL call from the API guide mentioned above to add another architecture to Satellite:

$ cat api-post.yml
- name: call API from Satellite
  hosts: localhost
  gather_facts: no
  vars:
    satelliteurl: satellite-server.example.com
  tasks:
    - name: set additional architecture in Satellite
      uri:
        url: https://{{ satelliteurl }}/api/architectures
        method: POST
        user: admin
        password: password
        force_basic_auth: yes
        validate_certs: no
        HEADER_Content-Type: application/json
        HEADER_Accept: application/json,version=2
        body: "{\"architecture\":{\"name\":\"i686\"}}"
      register: restdata
    - name: output rest data
      debug: msg="{{ restdata }}"

The result can be looked up in the web interface: an architecture of the type i686 can now be found.


Ansible can easily access, query and control the Red Hat Satellite REST API, and by extension other REST APIs out there as well.

Ansible offers the possibility to automate almost any tool which exposes a REST API. Together with the dynamic variable system, results from one tool can easily be re-used to perform actions in another tool. That way even complex setups can be integrated with each other via Ansible rather easily.

Tip: FUSE-mount a disk image with Windows drive letters

guestmount is the libguestfs tool for taking a disk image and mounting it under the host filesystem. This works great for Linux disk images:

$ virt-builder centos-7.2
$ mkdir /tmp/mnt
$ guestmount -a centos-7.2.img -i /tmp/mnt
$ ls /tmp/mnt
bin   dev  home  lib64       media  opt   root  sbin  sys  usr
boot  etc  lib   lost+found  mnt    proc  run   srv   tmp  var
$ guestunmount /tmp/mnt

Those files under /tmp/mnt are inside the centos-7.2.img disk image file, and you can read and write them.

guestmount is fine for Windows disk images too, except when Windows has multiple drives, C:, D:, etc., because in that case you’ll only “see” the contents of the C: drive.

But guestmount is nowadays just a wrapper around the “mount-local” API in libguestfs, and you can use that API directly if you want to do anything a bit more complicated … such as exposing Windows drive letters.

Here is a Perl script which uses the mount-local API directly to do this:

#!/usr/bin/perl -w
use strict;
use Sys::Guestfs;
$| = 1;
die "usage: $0 mountpoint disk.img" if @ARGV < 2;
my $mp = shift @ARGV;
my $g = new Sys::Guestfs;
$g->add_drive_opts ($_) foreach @ARGV;
$g->launch ();
my @roots = $g->inspect_os;
die "$0: no operating system found" if @roots != 1;
my $root = $roots[0];
die "$0: not Windows" if $g->inspect_get_type ($root) ne "windows";
my %map = $g->inspect_get_drive_mappings ($root);
foreach (keys %map) {
    $g->mkmountpoint ("/$_");
    eval { $g->mount ($map{$_}, "/$_") };
    warn "$@ (ignored)\n" if $@;
}
$g->mount_local ($mp);
print "filesystem ready on $mp\n";
$g->mount_local_run ();

You can use it like this:

$ mkdir /tmp/mnt
$ ./drive-letters.pl /tmp/mnt windows7.img
filesystem ready on /tmp/mnt

in another window:

$ cd /tmp/mnt
$ ls
C  D
$ cd C
$ ls
Documents and Settings
Program Files
$ cd ../..
$ guestunmount /tmp/mnt

(Thanks to Pino Toscano for working out the details)

All systems go
Service 'COPR Build System' now has status: good: Everything seems to be working.
Fedora nominated for Blackshield Awards
The Fedora Project, ISECOM and companies like audius GmbH have for years enabled me to teach security along with Fedora and the OSSTMM in India [1], [2], [3] ... a long way since my first Indian event, foss.in 2009.

Seems like it paid off - making it to the finalists of the nullcon Blackshield Awards is just WOW ;)

Do not forget to vote! nullcon 2016 Blackshield Award Voting

Filling your data lake with log messages: the syslog-ng Hadoop (HDFS) destination
Petabytes of data are now collected into huge data lakes around the world. Hadoop is the technology enabling this. While syslog-ng was able to write logs to Hadoop using some workarounds (mounting HDFS through FUSE) for quite some time, the new Java-based HDFS destination driver provides both better performance and more reliability. Instead of developing our […]
There are scheduled downtimes in progress
Service 'COPR Build System' now has status: scheduled: scheduled maintenance in progress
Introduction to Tang and Clevis

In this post I continue the discussion of network-bound decryption and introduce Tang and Clevis, new unlock tools that supersede Deo (which was covered in an earlier post).

Deo is dead. Long live Tang.

Nathaniel McCallum discovered a key encryption protocol based on ElGamal with a desirable security characteristic: no one but the party decrypting the secret can learn the secret. It was reviewed and refined into McCallum-Relyea (MR) exchange. With Deo, the server decrypted (thus learned) the key and sent it back to the client (through an encrypted channel). McCallum-Relyea exchange avoids this. A new protocol based on MR was developed, called Tang.

Another perceived drawback of Deo was its use of X.509 certificates for TLS and for encryption, making it complex to deploy. The Tang protocol is simpler and avoids X.509.

I will avoid going into the details of the cryptography or the protocol in this post, but will include links at the end.


Using Tang to bind data to a network is great, but there are many other things we might want to bind our data to, such as passwords, a TPM, biometrics, Bluetooth LE beacons, et cetera. It would also be nice to define policies – possibly nested – about how many of which data binders must succeed in order to decrypt or "unlock" a secret. The point here is that unlock policy should be driven by business and/or user needs, not by technology. The technology must enable but not constrain the policy.

Enter Clevis, the pluggable client-side unlock framework. Plugins, which are called pins, implement different kinds of bindings. Clevis comes with a handful of pins including pwd (password) and https (PUT and GET the secret; a kind of escrow). The tang pin is provided by Tang to avoid circular dependencies.

The sss pin provides a way to "nest" pins, and also provides k of n threshold unlocking. "SSS" stands for Shamir’s Secret Sharing, the algorithm that makes this possible.
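For the curious, the k-of-n threshold behind the sss pin is the textbook Shamir construction (nothing Clevis-specific in this notation): the secret is the constant term of a random polynomial over a finite field, shares are points on that polynomial, and any k points recover the constant term.

```latex
% secret s is the constant term of a random degree-(k-1) polynomial:
f(x) = s + a_1 x + a_2 x^2 + \cdots + a_{k-1} x^{k-1}
% party i receives the share (i, f(i)); any k shares i_1, \dots, i_k
% recover the secret by Lagrange interpolation at x = 0:
s = f(0) = \sum_{j=1}^{k} f(i_j) \prod_{m \neq j} \frac{i_m}{i_m - i_j}
% with fewer than k shares, every candidate value of s remains equally likely.
```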

LUKS volume decryption, which was implemented in Deo, has not yet been implemented in Clevis, but it is a high priority.

By the way, if you were wondering about the terminology, a clevis, clevis pin and tang together form a kind of shackle.


TLS private key decryption

Let’s revisit the TLS private key decryption use case from my earlier Deo post, and update the solution to use Clevis and Tang.

Recall the encryption command, which required the user to input the TLS private key’s passphrase, then encrypted it with Deo, storing it at a location determined by convention:

# (stty -echo; read LINE; echo -n "$LINE") \
  | deo encrypt -a /etc/ipa/ca.pem deo.ipa.local \
  > /etc/httpd/deo.d/f22-4.ipa.local:443

We will continue to use the same file storage convention. Clevis, unlike Deo, does not receive a secret to be encrypted but instead generates one and tells us what it is. Let’s run clevis provision with the Tang pin and see what it gives us:

# clevis provision -P '{"type": "tang", "host": "f23-1.ipa.local"}' \
  -O /etc/httpd/tang.d/f22-4.ipa.local:443

The server advertised the following signing keys:


Do you wish to trust the keys? [yn] y

Breaking down the command, the -P argument is a JSON tang pin configuration object, specifying the Tang server’s host name. The argument to -O specifies the output filename.

The program prints the signing key(s) and asks if we want to trust them. Tang is a trust on first use (TOFU) protocol. Out-of-band validation is possible but not yet implemented (there is a ticket for DNSSEC support).

Having trusted the keys, the program performs the Tang encryption, saves the metadata in the specified output file, and finally prints the secret: 709DAFCBC8ACF879D1AC386798783C7E.

We now need to update the passphrase on the TLS private key with the secret that Clevis generated:

# openssl rsa -aes128 < key.pem > newkey.pem && mv newkey.pem key.pem
Enter pass phrase:
writing RSA key
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:

OpenSSL first asked for the original passphrase to decrypt the private key, then asks (twice) for a new passphrase, which shall be the secret Clevis told us.

Now we must change the helper script that unlocks the private key. Recall the definition of the Deo helper:

[ -f "$DEO_FILE" ] && deo decrypt < "$DEO_FILE" && echo && exit
exec /bin/systemd-ask-password "Enter SSL pass phrase for $1 ($2) : "

The Clevis helper is similar:

[ -f "$CLEVIS_FILE" ] && clevis acquire -I "$CLEVIS_FILE" && echo && exit
exec /bin/systemd-ask-password "Enter SSL pass phrase for $1 ($2) : "

The clevis acquire -I "$CLEVIS_FILE" invocation is the only substantive change. Now we can finally systemctl restart httpd and observe that the key is decrypted automatically, without prompting the operator.
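As in the Deo setup, the helper is hooked into Apache via mod_ssl's passphrase dialog directive; the path below is an assumption, adjust it to wherever the script lives:

```apache
# /etc/httpd/conf.d/ssl.conf
SSLPassPhraseDialog exec:/usr/local/bin/clevis-helper
```

mod_ssl runs the configured program with the server name:port and the key type as arguments, which is what $1 and $2 in the helper's prompt refer to.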

What are the possible downsides to this approach? First, due to limitations in Apache’s passphrase acquisition at present it is possible only to use Clevis pins that do not interact with the user or write to standard output. Second, the secret is no longer controlled by the user doing the provisioning – the TLS private key must be re-encrypted under the new passphrase generated by Clevis, and if the Tang server is unavailable, that is the passphrase that must be entered at the fallback prompt. A lot more work needs to be done to make Clevis a suitable general solution for key decryption in Apache or other network servers, but for this simple case, Clevis and Tang work very well, as long as the Tang server is available.


This has been a very quick and shallow introduction to Clevis and Tang. For a deeper overview and demonstration of Tang server deployment and more advanced Clevis policies, I recommend watching Nathaniel McCallum’s talk from DevConf.cz 2016.

Other useful links:

February 10, 2016

OpenStack Keystone Q and A with the Boston University Distributed Systems Class Part 1

Dr. Jonathan Appavoo was kind enough to invite me to be a guest lecturer in his distributed systems class at Boston University. The students provided a list of questions, and I only got a chance to address a handful of them during the class. So, I’ll try to address the rest here.

Part 1 of 1 so far (I’ll update this line as I post the others.)

When do tokens expire? If they don’t expire, isn’t it potentially dangerous since attackers can use old tokens to gain access to privileged information?
Tokens have a set expiry. The default was originally 12 hours. We shortened that to one hour a couple of years back, but it turns out that some workloads use a token all the way through, and those workloads last longer than one hour. Those deployments either have lengthened the life of the token in the configuration or had users explicitly request tokens that last longer than an hour.
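For reference, this knob lives in the [token] section of keystone.conf, and the value is in seconds; the one-hour default would look like:

```ini
[token]
# Token validity in seconds; deployments with long-running workloads
# raise this value.
expiration = 3600
```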

Can users share roles with in the same project?
Yes, and this is the norm. A role assignment is a many to many to many association between users (or groups of users), projects, and roles. This means:

  • One user may have multiple roles on the same project
  • One user may have the same role on multiple projects
  • Multiple users may have the same role on a project
  • Any mix of the above

It’s interesting that one of the key components is a standardized GUI (Horizon). Wouldn’t it be more useful for there to be a handful of acceptable GUIs tailored to the service a particular set of OpenStack instances is providing?

This is the standardized GUI as you point out. While many companies have deployed OpenStack with custom GUIs, this one is the one that is designed to be the most generally applicable. Each user gets a service catalog along with their tokens, and the Horizon UI can use this to determine what services the user can see, and customize the UI displayed accordingly. So, from that perspective, the UI is tailored to the user.

The individual project teams are not composed of UI or UX folks. You really don’t want us designing UI. As soon as you realize the tough problem is getting a consistent look, feel, set of rules, and all the things that keep users from running away screaming, you realize that it really is its own effort and project.

That said, I did propose a few years back that the Keystone project should be able to render to HTML, and not just JSON (and XML). This would be the start of a progressive enhancement approach that would also make the Keystone team aware of the gaps in the coverage of the UI: it’s really easy to see them when you click through. But, again, this was for testability, completeness, and rapid implementation of a UI for new features, not the rich user experience that is the end product. It would still be the source for a follow-on UX effort.

Since that time, the Horizon team has embraced a single-page-app effort based on a proxy (to avoid CORS issues) to all of the endpoints. The proxy converts the URLs in the JSON requests and responses to the proxy, but otherwise lets things pass unchanged. I would love to see HTML rendering as an option on this proxy.

Can You elaborate on some examples of OpenStack being used by companies on a large scale?

Here is the official list. If you click through the Case studies, some of them have numbers.

A question about the philosophy behind open source. Do the  problems that arise in distributed systems lend themselves well to an open source approach? Or does Brooks’s Law apply?

Brooks’s Law states: “adding manpower to a late software project makes it later.”  Open source projects are not immune to Brooks’s Law.  OpenStack is not driven by any one company, and it has a “release every 6 months” rule, which means that if a feature is not going to make a release, it will have another chance six months later.  We have not slipped a release since I’ve been on the project, and I don’t think they did before.

The Keystone team is particularly cautious. New features happen, but they are well debated, beat on, and often deferred a release or more.  Getting code into Keystone is a serious undertaking, with lots of back and forth for even small changes, and some big changes have gone through 70+ revisions before getting merged.  I have a page and a half of links to code that I have submitted and later abandoned.

Adding more people to a project under OpenStack (like Keystone) can’t happen without the approval of the developers.  I mean, anyone can submit and review code, but to be accepted as a core requires a vote of confidence from the existing cores, and that vote won’t take place if you’ve not already proved yourself.  So the worst that could happen is that one company goes commit-happy and gets a bunch of people to try to submit patches, and we would ignore them. It hasn’t happened yet.

Does the public source code make authentication a much more difficult project for OpenStack than it is for a close source Identity-as-a-Service?

So, the arguments why Open Source is good for security are well established, and I won’t repeat them here.  To me, there are Open Source projects, and then there is the Open Source development model.  The first means that the code is free software, you can contribute, etc.  But the project might still be run by one person or company.  The Open Source development model is more distributed by default.  It means that no one person can ram through changes.  Even the project technical leads can’t get away with approving code without at least one other core also approving.  Getting anything done is difficult.  So, from that perspective, yes.  But there is a lot of benefit to offset it:  we get a wider array of inputs, and we get public discussion of features and bugs.  We get contributions from people that are interested in solving their own problems, and, in doing so, solve ours.

Why can we not force or why has there not been more standardization for Identity Providers(IdPs) in a  federation?

Adoption of federated protocols has been happening, slow but steady.  SAML has gone from “oh, that’s neat” to “we have to have that” in my time on this project.  SAML is a pretty good standard, and many of the IdPs are implementing it.  There is a little wiggle room in the standard, as my coworker who is working on the client protocol (ECP) can tell you, but the more implementations we see, the more we can iron out the details.  So, I think, at least for SAML, we do have a good degree of standardization.

The other protocols are also picking up steam, and I think they will play out similarly to SAML.  I suspect that OpenID Connect will end up just as well standardized in implementation as SAML is starting to be.  The process is really iterative, and you don’t know the issues you are going to have to deal with until you find them in a deployment.

What makes OpenStack better than other Cloud Computing services?

Short answer: I don’t know.

Too Long answer with a lot of conjecture:  I think that the relative strength of OpenStack depends on which of the other services you compare it to.

Amazon’s offerings are more mature, so there the OpenStack benefit is being Open Source, and that you can actually implement it on premise, not just have it hosted for you.  Control of hardware is still a big deal.  I think the Open Source aspect really helped OpenStack compete with vCloud as well.

I think that the open source development model for the Cloud mirrors the success of the Linux kernel.  The majority of the activity on the kernel is device drivers.  In OpenStack, there is a real push by vendors to support their devices.  In a proprietary solution, a hardware vendor depends on working with that proprietary software vendor to get their device supported. In OpenStack, any device manufacturer that wants to be part of Cinder, Nova, or Neutron can get the code and make it work.

This means that even the big vendors get interested.  The software becomes a marketplace of sorts, and if you can’t inter-operate, you miss out on potential sales.  Thus, we have Cisco interested in Neutron and VMware interested in Nova, where it might initially have appeared against their interests to have that competition.

I think part of its success was due to the choice of the Python programming language.  It’s a language that system administrators don’t tend to react negatively to, the way they do with Java.  I pick on Java because it was the language used for Eucalyptus.  I personally like working in Java, but I can really see the value in Python for OpenStack.  The fact that source code is essentially shipped by default overcomes the Apache license’s potential for closing off code: end users can see what is actually running on their systems. System administrators for Linux systems are likely to already have some familiarity with Python.

I think the micro project approach has also allowed OpenStack to scale.  It lets people interested in identity focus on identity, and block storage people get to focus on block storage.  The result has been an explosion of interest in contributing.

I think OpenStack got lucky with timing:  the world realized it needed a cloud management solution when OpenStack got mature enough to start filling that role.

Comments are live

With a huge amount of help from Robert Ancell for a lot of the foundations for the new code, I’ve pushed a plugin today to allow anonymous rating of applications.

Screenshot from 2016-02-10 17-16-04

If people abuse or spam this I’ll take the feature away until we can have OpenID logins in GNOME Online Accounts, but I’m kinda hoping people won’t be evil. The server is live and accepting reviews and votes, but the API isn’t set in stone.

Video From My FOSDEM 2016 Talk

The FOSDEM folks have processed and released the video of my “Live Migration of Virtual Machines From The Bottom Up” talk.  Available at this location.

<video class="wp-video-shortcode" controls="controls" height="360" id="video-739-1" preload="metadata" width="640"><source src="https://video.fosdem.org/2016/k1105/live-migration-of-virtual-machines-from-the-bottom-up.mp4?_=1" type="video/mp4">https://video.fosdem.org/2016/k1105/live-migration-of-virtual-machines-from-the-bottom-up.mp4</video>
Project Remote Dependency Solving

The project was initiated by Jan Zeleny. It is aimed mainly at low-end and low-cost devices,
which usually have slower hardware, but it can be useful on Fedora in general as well.
Nowadays three students from FIT VUT Brno work on the project, with me (Petr Hracek) as the team leader.
The project is part of Red Hat Lab Q.

Students are:

  • Josef Řídký

  • Michal Ruprich

  • Šimon Matěj

The aim:

Let’s say we have a device with low-cost hardware, running the Fedora Linux distribution. If we want to install a package or set of packages, the device’s memory might not be sufficient to compute the package dependencies, or the device could be restarted in the middle of the computation. In case of such a failure or crash, the dependencies are not determined properly, or we do not get any dependencies at all.
The “Remote Dependency Solving” project solves this: as its name implies, dependencies are computed remotely. First, the client sends information about its system and the packages it would like to install. Next, the server computes the dependencies and returns a set of packages with the correct versions to the client. Finally, the client can just install them.
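To make the three-step exchange concrete, here is a minimal sketch in C. The struct and function names are hypothetical, invented purely for illustration; the real client and server exchange JSON messages like the ones shown in the logs later in this post.

```c
#include <string.h>

/* Hypothetical shapes for the client/server exchange; the real
 * project serializes these as the JSON "install"/"upgrade"/"erase"
 * lists shown in the server logs later in this post. */
struct rds_request {
    const char *repos;       /* the client's enabled repositories */
    const char *want;        /* the package the client asked for */
};

struct rds_answer {
    const char *install[4];  /* packages (with dependencies) to fetch */
    int count;
};

/* Toy stand-in for the server-side solver: it "knows" that emacs
 * pulls in emacs-common, mirroring the example below. */
static void solve(const struct rds_request *req, struct rds_answer *ans)
{
    ans->count = 0;
    if (strcmp(req->want, "emacs") == 0)
        ans->install[ans->count++] = "emacs-common";
    ans->install[ans->count++] = req->want;
}
```

The client would fill in a request, send it over the wire, and simply download and install whatever comes back in the answer.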

Another aim of the project is a feature called “caching”.
This is for the case where another client has already solved the same dependencies as ours.

This is mainly useful for computer farms, where one computer solves the dependencies and the others only install the set of packages with the correct versions that was computed before. An example use case is Red Hat Satellite.
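A minimal sketch of how such a cache could work, assuming the key is simply a hash of the request (the package set plus the repository state). All names here are hypothetical and not the project’s actual implementation:

```c
#include <stdint.h>
#include <string.h>

/* FNV-1a hash of the request string (package names plus repository
 * versions), used as a toy cache key. */
static uint64_t req_hash(const char *req)
{
    uint64_t h = 1469598103934665603ULL;
    for (const unsigned char *p = (const unsigned char *)req; *p; p++) {
        h ^= *p;
        h *= 1099511628211ULL;
    }
    return h;
}

#define CACHE_SLOTS 64

struct cache_entry {
    uint64_t key;
    const char *answer;   /* solved package set, e.g. serialized JSON */
};

static struct cache_entry cache[CACHE_SLOTS];

/* Return a previously solved answer, or NULL if the server
 * has to run dependency solving again. */
static const char *cache_get(const char *req)
{
    struct cache_entry *e = &cache[req_hash(req) % CACHE_SLOTS];
    return (e->answer && e->key == req_hash(req)) ? e->answer : NULL;
}

static void cache_put(const char *req, const char *answer)
{
    struct cache_entry *e = &cache[req_hash(req) % CACHE_SLOTS];
    e->key = req_hash(req);
    e->answer = answer;
}
```

The first machine in the farm pays the solving cost once via cache_put(); every identical request afterwards is answered straight from cache_get().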

Of course, the project needs an infrastructure, but that is out of scope right now. We are planning to address it later on.

How to test the project on your local machine?

Because the project is not part of the Fedora repositories yet, we need to install it from the GitHub repo. However, we are working hard on getting the project packaged for Fedora.

First of all, we need to clone the project onto a computer or embedded device with the command

git clone https://github.com/rh-lab-q/server-side-dependency-solving

Now, we have cloned the project on our device.

Before running the project, we need to install its dependencies.
Unfortunately, we cannot use the Remote Dependency Solving project itself for this.

Run this command to install required packages:

sudo dnf install gcc cmake libsolv librepo-devel hawkey-devel glib2-devel json-glib json-glib-devel check-devel

Now we are ready to compile and install the project.

Project compilation:

First of all, we have to switch to the build directory inside the cloned project.
The next step is to run the cmake command, like this:

$ cd build
$ sudo cmake ../

The command verifies that we have all the packages required by the project.
If there are no errors, we can build the project using the make command:

$ sudo make

The make command creates two binary files, named rds-client and rds-server.

How to use RDS to install/erase/update package(s)?

How to test the project on a local computer over localhost?

First of all, we have to open two terminals. The first will be used for rds-server and the second for rds-client.
In order to install/erase/update package(s), we need to run them under the root account:

sudo ./rds-server


sudo ./rds-client

Now, let’s look at how the install, erase and update commands work, and see what is going on in both terminals.
On the first terminal we run rds-server, where we should see the communication with rds-client.

Installation of the emacs package

Before installing emacs, let’s check whether the emacs package is already installed. This can be done with the command rpm -q emacs:
$ rpm -q emacs
package emacs is not installed

Now, let’s install the emacs package. We’ll run the command rds-client --install emacs to install it.

Server communication log:

$ sudo ./rds-server
[2/2/16 14:26:06 SSDS]: Server started.
[MESSAGE]: Connection accepted from ip address


pkg v query: emacs
[MESSAGE]: Dependencies for emacs are ok.
pred insert
  "code" : 11,
  "data" : {
    "install" : [
        "pkg_name" : "emacs-common",
        "pkg_loc" : "Packages/e/emacs-common-24.5-6.fc23.x86_64.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64"
        "pkg_name" : "emacs",
        "pkg_loc" : "Packages/e/emacs-24.5-6.fc23.x86_64.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=fedora-23&arch=x86_64"
    "upgrade" : [
    "erase" : [
    "obsolete" : [

Client communication log:

$ sudo ./rds-client --install emacs
[2/2/16 14:30:27 SSDS]: Client startup.
[MESSAGE]: Client startup. Required package count 1.
[MESSAGE]: Trying to connect to server...(1 of 3)
[MESSAGE]: Connection to server is established.
[MESSAGE]: Sending initial message to server.
[MESSAGE]: Message sent.
[MESSAGE]: Installation of packages was selected.
[MESSAGE]: Sending message with repo info to server.
[MESSAGE]: Waiting for answer from server.
Number of packages to
install: 2
update: 0
erase: 0
maybe erase: 0
[MESSAGE]: Result from server:
Packages for install
[QUESTION]: Is it ok?
[y/n/d]: y
[MESSAGE]: Downloading preparation for package: emacs-common
[MESSAGE]: Downloading preparation for package: emacs
[MESSAGE]: Downloading packages.
emacs                        100% - Downloaded.
emacs-common 100% - Downloaded.
[MESSAGE]: All packages were downloaded successfully.
[MESSAGE]: Installing packages.
Preparing...                          ################################# [100%]
Updating / installing...
   1:emacs-common-1:24.5-6.fc23       ################################# [ 50%]
   2:emacs-1:24.5-6.fc23              ################################# [100%]

Now, let’s check whether the emacs package is really installed.

$ rpm -q emacs

Erasing the emacs package

Now, let’s erase the emacs package. We’ll run the command rds-client --erase emacs to erase it.

Server communication log:

$ sudo ./rds-server
[2/2/16 14:26:06 SSDS]: Server started.
[MESSAGE]: Connection accepted from ip address



pkg v query: emacs
[MESSAGE]: Dependencies for emacs are ok.
pred insert
  "code" : 11,
  "data" : {
    "install" : [
    "upgrade" : [
    "erase" : [
        "pkg_name" : "emacs",
        "pkg_loc" : null,
        "base_url" : null,
        "metalink" : "@System"
    "obsolete" : [

Client communication log:

$ sudo ./rds-client --erase emacs
[3/2/16 21:11:55 SSDS]: Client startup.
[MESSAGE]: Client startup. Required package count 1.
[MESSAGE]: Trying to connect to server...(1 of 3)
[MESSAGE]: Connection to server is established.
[MESSAGE]: Erase of packages was selected.
[MESSAGE]: Sending message with repo info to server.
[MESSAGE]: Waiting for answer from server.
from server:
  "code" : 11,
  "data" : {
    "install" : [
    "upgrade" : [
    "erase" : [
        "pkg_name" : "emacs",
        "pkg_loc" : null,
        "base_url" : null,
        "metalink" : "@System"
    "obsolete" : [
Number of packages to
install: 0
update: 0
erase: 1
maybe erase: 0
[MESSAGE]: Result from server:
Packages for erase
[QUESTION]: Is it ok?
[y/n]: y
[MESSAGE]: Downloading packages.
[MESSAGE]: All packages were downloaded successfully.
[MESSAGE]: Erasing packages.
Preparing...                          ################################# [100%]
Cleaning up / removing...
   1:emacs-1:24.5-6.fc23              ################################# [100%]
[3/2/16 21:12:04 SSDS]: End of client.

Now let’s check whether the package is really uninstalled.

$ rpm -q emacs
package emacs is not installed

How to update a system with RDS?

The whole system is updated with the command rds-client --update.

Server communication log:
  $ sudo ./rds-server
[9/2/16 09:44:21 SSDS]: Server started.
[MESSAGE]: Connection accepted from ip address


Downloading repo: updates  - 100%
[MESSAGE]: Metadata for updates - download successfull (Destination dir: /tmp/ssds/updates).
Downloading repo: fedora  - 100%
[MESSAGE]: Metadata for fedora - download successfull (Destination dir: /tmp/ssds/fedora).
Downloading repo: adobe-linux-i386  - 100%
... SNIP ...
Downloading repo: mhlavink-developerdashboard  - 100%
[MESSAGE]: Metadata for mhlavink-developerdashboard - download successfull (Destination dir: /tmp/ssds/mhlavink-developerdashboard).
Downloading repo: rhpkg  - 100%
[MESSAGE]: Metadata for rhpkg - download successfull (Destination dir: /tmp/ssds/rhpkg).
Downloading repo: fedora-steam  - 100%
[MESSAGE]: Metadata for fedora-steam - download successfull (Destination dir: /tmp/ssds/fedora-steam).
Downloading repo: helber-atom  - 100%
[MESSAGE]: Metadata for helber-atom - download successfull (Destination dir: /tmp/ssds/helber-atom).
Downloading repo: google-chrome  - 100%
[MESSAGE]: Metadata for google-chrome - download successfull (Destination dir: /tmp/ssds/google-chrome).
  "code" : 11,
  "data" : {
    "install" : [
    "upgrade" : [
        "pkg_name" : "tzdata-java",
        "pkg_loc" : "t/tzdata-java-2016a-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "tzdata",
        "pkg_loc" : "t/tzdata-2016a-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "selinux-policy-targeted",
        "pkg_loc" : "s/selinux-policy-targeted-3.13.1-158.4.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "selinux-policy",
        "pkg_loc" : "s/selinux-policy-3.13.1-158.4.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
      ... SNIP ...
        "pkg_name" : "abi-dumper",
        "pkg_loc" : "a/abi-dumper-0.99.14-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
    "erase" : [
    "obsolete" : [

Client communication log:
  $ sudo ./rds-client --update
[9/2/16 09:44:36 SSDS]: Client startup.
[MESSAGE]: Client startup. Required package count 0.
[MESSAGE]: Trying to connect to server...(1 of 3)
[MESSAGE]: Connection to server is established.
[MESSAGE]: Update all packages was initiated.
[MESSAGE]: Sending message with repo info to server.
[MESSAGE]: Waiting for answer from server.
from server:
  "code" : 11,
  "data" : {
    "install" : [
    "upgrade" : [
        "pkg_name" : "tzdata-java",
        "pkg_loc" : "t/tzdata-java-2016a-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "tzdata",
        "pkg_loc" : "t/tzdata-2016a-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "selinux-policy-targeted",
        "pkg_loc" : "s/selinux-policy-targeted-3.13.1-158.4.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
        "pkg_name" : "selinux-policy",
        "pkg_loc" : "s/selinux-policy-3.13.1-158.4.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
... SNIP ...
        "pkg_name" : "abi-dumper",
        "pkg_loc" : "a/abi-dumper-0.99.14-1.fc23.noarch.rpm",
        "base_url" : null,
        "metalink" : "https://mirrors.fedoraproject.org/metalink?repo=updates-released-f23&arch=x86_64"
    "erase" : [
    "obsolete" : [
Number of packages to
install: 0
update: 31
erase: 0
maybe erase: 0
[MESSAGE]: Result from server:
Packages for update
... SNIP ...
[QUESTION]: Is it ok?
[y/n/d]: y
[MESSAGE]: Downloading preparation for package: tzdata-java
[MESSAGE]: Downloading preparation for package: tzdata
[MESSAGE]: Downloading preparation for package: selinux-policy-targeted
[MESSAGE]: Downloading preparation for package: selinux-policy
... SNIP ...
[MESSAGE]: Downloading preparation for package: abi-dumper
[MESSAGE]: Downloading packages.
[MESSAGE]: Packages are installed
[9/2/16 09:47:23 SSDS]: End of client.

The rest of the DNF options will be implemented soon.

What next?

  • Use the same commands as DNF,

  • secure communication between client and server,

  • package project into Fedora,

  • cache requests on the server side so that we do not need to repeat dependency solving,

  • start rds-server as a daemon,

  • and include project into Fedora-Infrastructure.


Want to help Fedora?

During the past week of conferences I finally had an opportunity to meet people from my team abroad and it was a blast!

One of the many things we discussed was how to attract more contributors and make it easier for people to help Fedora (thinking, first of all, of Fedora Hubs, of course). During that discussion I learned about this amazing site, where anybody can come and see for themselves what their area of interest might be and how to join the different teams we have.

Screenshot from 2016-02-10 13-46-48

So I really recommend that you go check it out, sign up and, of course, share ;)

The most important part of your project might not even be a line of code
Open-source licensing: how does it affect your work?

Today’s entry is sourced from a thread that I posted on the SpigotMC Forums. If you wish to join in the discussion, feel free to chime in on the thread or leave a comment on my blog. In this post, I cover licensing, licenses, and why your open-source software project should have a license.

I’d like to share some personal and real-world advice with many of you contributing open-source resources to Spigot, but also to other open-source software projects you may work on, even outside of Minecraft or Spigot.


What is licensing? Why does it matter? Why should you care? There are many reasons that licensing is an important part of a project you are working on. You are taking the time to write code and share it with the world in an open way, such as publishing it on GitHub, Bitbucket, or any number of other code-hosting services. Anyone might stumble across your code and find it useful.

Licensing is the way that you can control exactly how someone who finds your code can use it and in what ways.

Okay, why does it really matter?

Maybe you’ve been writing code for a really long time and you’ve never bothered with licenses and don’t feel the need to. I’d like to present two hypothetical situations that I see pop up all the time, one in Spigot and one in the greater open-source community.

Your Plugin

You have spent a lot of time writing an awesome resource and you pushed all of your code to GitHub! Woohoo, project complete! You package it up as a JAR and release it out into the open. Skip ahead a few months, and maybe you no longer have the time to contribute to your project. Or maybe someone has an awesome idea for a totally different plugin that uses similar functionality to what you have written.

A new person finds your code on GitHub and discovers that it has the perfect method or algorithm for their own project. Or maybe they want to continue your project with new, fresh energy! But you have no license on your code. By default, this means standard copyright law applies to your code. This is an extremely limiting form of copyright and almost defeats the entire purpose of open-sourcing your code in the first place. A law-abiding programmer might just give up and look elsewhere, or a not-so-law-abiding programmer might secretly copy and paste your code without attributing the work back to you. This helps neither you nor the friendly programmer looking to continue or fork your work.

In many cases, the SpigotMC staff receive reports about people “copying” other people’s code. Having a licensed project makes reviewing these reports ten times easier. Projects without licenses or with ambiguous sources are extremely difficult to review when deciding whether they are copies.

By licensing your code, you are protecting your own work and writing the rules for how people can use your code. If you are open-sourcing your code, the point is usually to collaborate with others and give back to the community by allowing others to tinker with, modify, or play with what you have created. Make it easier for others to contribute, help, or build new awesome things by choosing a license!

Your Project

For any open-source software project on the Internet, having a license is very, very important. For example, let’s say you write an important library or utility that makes it easier for developers to build a friendlier user interface. Your program is well-designed and is useful beyond even what you intended to write it for.

Perhaps a large company stumbles across your code and also thinks it’s very useful for their own project. Maybe their project is proprietary or closed-source. Having a license in a situation like this suddenly becomes very important. Some licenses would permit this company to take the plugin or library, modify it to their own needs, and include it in their own product, while only leaving a small mention to you in the “Legal” section of their app. Maybe you’re okay with that! Maybe you’re not.

If you’re not, there are licenses that let you define how the code is used in a case like this. With some licenses, if the company decides to modify and use your code, they will have to open-source the changes they made as well. If they don’t modify anything, they just have to link back to your original source code. Under some more extreme licenses, anything that touches your code also has to be open source by extension.

For a Minecraft example of this, let’s say you have a “Super Craft Bros.” plugin open-sourced on your GitHub. Hypixel stumbles across your code and decides they want to use it for their own servers. Let’s say your code is licensed under the Mozilla Public License 2.0. For this license, if they take your code and make no changes, they only have to give credit back to you. If they take your code and change it, they also have to open-source all of the changes they make to your code.

Now, the changes made by the bigger company can benefit many others instead of just the one company!

What licenses are there?

If you Google “open source licenses”, you may be overwhelmed. There are perhaps hundreds of different licenses to choose from. How can you settle on one?! Fortunately, there are websites that do a great job of summarizing exactly what others can or cannot do with your code under each license. A very popular one is tldrlegal.com, which provides bullet-point summaries of different licenses.


That site is a great reference when picking a license. In this thread, however, I’m going to give a very quick summary of four of the most popular open-source licenses that exist. It is important to preface this with a statement: I am not a lawyer and this does not constitute legal advice. It is important for you to look more into a license that feels right for how you want to share your code and determine what others can do with it.

MIT License

Open-source licensing: the MIT License is the most relaxed

The MIT License may be the most relaxed open-source license available today

The MIT License is almost universally regarded as one of the least strict licenses in open source. You can read more about it here.

You can:

  • Use the work commercially (think of the big company example said earlier)
  • Modify the original code
  • Distribute the original code or distribute your modifications
  • Sublicense the code (in other words, use it with code that has a different license)
  • Use the code for private use

You cannot:

  • Hold the original author liable for damages
    • So this can’t happen: “Oh noes! I accidentally exploded my entire server with your code! You must pay me monies to fix this nao!!!”

You must:

  • Include a copyright notice in all copies or other uses of the work
  • Include an original copy of the license with the original or modified code
    • You will always be credited for your work!
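In practice, satisfying both “must” items is as simple as keeping a header like this at the top of your source files. This is only an illustration: the name and year are placeholders, and the elided middle is the rest of the standard MIT License text.

```c
/*
 * Copyright (c) 2016 Jane Developer
 *
 * Permission is hereby granted, free of charge, to any person obtaining
 * a copy of this software and associated documentation files (the
 * "Software"), to deal in the Software without restriction, ...
 *
 * The above copyright notice and this permission notice shall be
 * included in all copies or substantial portions of the Software.
 */
```

Anyone redistributing or modifying the file then simply keeps this block intact, which is all the license asks for.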

Apache License 2.0

Open-source licensing: the Apache License 2.0 offers more than MIT License

Slightly stricter than the MIT License, the Apache License 2.0 offers more protection to the author

The Apache License 2.0 is only slightly more restrictive than the MIT License, but it defines a few more rules than the MIT License. This can be useful if you want to make sure your work is given proper credit back to you and you care a little more about how it’s used. You can read more here.

You can and cannot do the same things mentioned above for the MIT License. So we will just highlight the changes!

You can:

  • Same as MIT License
  • Use patent claims (might be advanced for most of you, but can be useful for bigger projects)
  • Place a warranty (lets you have a warranty on your code, if desired)

You cannot:

  • Same as MIT License

You must:

  • Same as MIT License
  • Openly state changes you make from the original project
  • Include the NOTICE (if the project has a NOTICE file, you have to keep it in copies / modified works)

Mozilla Public License 2.0

Open-source licensing: Introducing the Mozilla Public License 2.0

Introducing the Mozilla Public License 2.0

The next step up from the Apache License 2.0 is the Mozilla Public License 2.0. This license has the same basic rights as the Apache License 2.0, but it goes a little more in-depth about how the code can be re-used. This is my personal favorite license! You can read more here.

Most of the things for what you can and cannot do are the same as the Apache License (and thereby, the MIT License). So again, we’ll just highlight the changes.

You can:

  • Same as Apache License 2.0

You cannot:

  • Same as Apache License 2.0

You must:

  • Same as Apache License 2.0
  • Disclose the source (any changes made using MPL’d code must also be made open under the MPL!)
  • Include the original (either the source code or instructions to get the original code must be provided)

GNU General Public License v3

Open-source licensing: the GNU General Public License v3

Open-source licensing: the GNU General Public License v3

The GNU General Public License v3, also known as the GPLv3, is one of the most well-known and strict licenses in open source. It has very specific rules for how the code can be used and shared, and leaves a lot of control to the author. In a sense, it’s the step “after” the MPL 2.0, but it also has some key differences. You can read more about it here.

Again, we will highlight the changes from the Mozilla Public License 2.0.

You can:

  • Same as Mozilla Public License 2.0 (except sublicensing)

You cannot:

  • Same as Mozilla Public License 2.0
  • Sublicense the code (this is a big concept worth understanding if you use the GPLv3)

You must:

  • Same as Mozilla Public License 2.0
  • Include original copyright (must be retained in all copies or modified works)
  • Include install instructions (you must document how to install the software)

Go forth and conquer!

Congratulations! You now know a little bit more about licensing, open-source licenses, and how to use them. Hopefully this will help emphasize why and how licenses are important in open-source software. In many ways, the license you choose to use can even be more important than any lines of code you write. That might sound absurd, but when it comes to deciding how your code can be reused, modified, or distributed, it’s something that can be vitally important to your project.

Those of you without a license, please consider choosing one, or talk with the other members of your projects about which license you all want to use. If you code in the open, make sure you are protecting yourself and paying attention to how you want other people to use your code.

The post The most important part of your project might not even be a line of code appeared first on Justin W. Flory's Blog.

My first travelling experience to Myanmar


Yangon, Myanmar, is one of my wish-list destinations, and I finally made it there.

I got the chance to attend BarCamp Yangon on January 31 and February 1, 2016. It was my first time participating in a BarCamp in a different country, though I have attended many BarCamp events in Cambodia. So what is a BarCamp? It is an international network of user-generated, self-organized conferences, primarily focused on technology, social media, startups, community work and so on. They are open, participatory workshop-events whose content is provided by the participants.

I attended BarCamp Yangon as a Fedora Ambassador for APAC and got the chance to give two talks: IT Automation with Ansible and How to Contribute to the Fedora Project. Around 2500 participants came on the first day, and a similar number on the second day. Most of them came passionate to learn and explore new topics, lessons and ideas, which is different from some of the experiences I have had at other BarCamps.

On my first day in Yangon, I arrived around 4pm local time and headed to the hotel. Thanks to Pravin, who booked a great hotel with very good service and friendly staff.


BarCamp Day 1

We were instructed to arrive very early, at 8:30AM, at the Myanmar ICT (MICT) Park, to be able to register the topics we wished to talk about at BarCamp Yangon. We arrived early, as the hotel was very close to the venue, and were in time for the opening ceremony, which was entirely in Burmese.

After the opening, we got the topic registration forms, spent 20 minutes registering, and waited for the call from the organizers; finally, our topics were accepted for the first day. I went to a few talks, but they were in Burmese.

My first talk was on IT Automation and Configuration Management with Ansible. A few people were able to follow both the tool and my English.


BarCamp Day 2

It was almost the same structure: we were instructed to follow the same process of taking the forms and waiting for topic acceptance. Luckily, our topics were accepted again for the second day. I gave my second talk on “How to Contribute to Fedora and the Benefits for Students”.


Myanmar Fedora Community Meetup Day 3


We arranged a meetup with the local Fedora community group to discuss:

  • Fedora project translation
  • Fedora 23 release party (it was a late party)
  • Leap’s talk on how students can contribute to the Fedora project
  • Pravin’s talk on Fedora globalization
  • Yan’s introduction to the Fedora installation workshop

Here are the highlights of the activities from the two days at BarCamp Yangon and beyond:

  • All my talks were conducted in English, but I had to find a Burmese translator to translate them. My friend Pravin was able to present in English more easily.
  • Pravin was able to talk about what interested him: Unicode, which is a hot topic in Myanmar.
  • We distributed almost 50 Fedora DVDs to participants during our talks.
  • I met many old friends on the BarCamp organizing team, who helped us a lot during our time in Yangon.
  • Pravin and I share similar tastes in food, places to visit, and some other things.
  • My flights between Phnom Penh and Yangon were delayed 3 out of 4 times. It was my worst flight experience, stuck in airports for longer than 9 hours.


A Holla out to the Kolla devs

Devstack uses pip to install packages, which conflict with the RPM versions on my Fedora system. Since I still need to get work done, and want to run tests on Keystone against a live database, I’ve long wondered if I should go with a container-based approach. Last week, I took the plunge and started messing around with Docker. I got the MySQL Fedora container to run, then found Lars’ Keystone container using SQLite, and was stumped. I poked around for a way to get the two containers talking to each other, and realized that we have a project dedicated to exactly that in OpenStack: Kolla. While it did not work for me right out of a git clone, several of the Kolla devs worked with me to get it up and running. Here are my notes, distilled.

I started by reading the quickstart guide, which got me oriented (I suggest you start there, too), but I found a couple of things I needed to learn. First, I needed a patch that has not quite landed, in order to make calls as a local user instead of as root. I still ended up creating /etc/kolla and chowning it to ayoung. That proved necessary, as the work done in that patch is “necessary but not sufficient.”

I am not super happy about this, but I needed to make docker run without a deliberate sudo. So I added the docker group, added myself to it, and restarted the docker service via systemd. I might end up doing all this as a separate developer user, not as ayoung, so that at least I need to su - developer before the docker stuff. I may be paranoid, but that does not mean they are not out to get me.

I created a directory named ~/kolla/ and put this in there:


kolla_base_distro: "centos"
kolla_install_type: "source"

# This is the interface with an ip address you want to bind mariadb and keystone to
network_interface: "enp0s25"
# Set this to an ip address that currently exists on interface "network_interface"
kolla_internal_address: ""

# Easy way to change debug to True, though not required
openstack_logging_debug: "True"

# For your information, but these default to "yes" and can technically be removed
enable_keystone: "yes"
enable_mariadb: "yes"

# Builtins that are normally yes, but we set to no
enable_glance: "no"
enable_haproxy: "no"
enable_heat: "no"
enable_memcached: "no"
enable_neutron: "no"
enable_nova: "no"
enable_rabbitmq: "no"
enable_horizon: "no"

I also copied the file ./etc/kolla/passwords.yml from the repo into that directory, as it was needed during the deploy.

To build the images, I wanted to work inside the Kolla venv (I didn’t want to install pip packages on my system), so I ran:

tox -epy27

Which, along with running the unit tests, created a venv. I activated that venv for the build command:

. .tox/py27/bin/activate
./tools/build.py --type source keystone mariadb rsyslog kolla-toolbox

Note that I had first built the binary versions using:

./tools/build.py keystone mariadb rsyslog kolla-toolbox

But then I tried to deploy the source version. The source versions are downloaded from tarballs on http://tarballs.openstack.org/ whereas the binary versions are the Delorean RPMs, and they trail the source versions by a little bit (not a lot).

I’ve been told “if you tox gen the config you will get a kolla-build.conf config. You can change that to git instead of url and point it to a repo.” But I have not tried that yet.

I had to downgrade to the pre-2.0 version of Ansible, as I had been playing around with 2.0’s support for the Keystone V3 API. Kolla needs 1.9:

dnf downgrade ansible

There is an SELinux issue. I worked around it for now by setting SELinux into permissive mode, but we’ll revisit that shortly. It was only needed for deploy; once the containers were running, I was able to switch back to enforcing mode. We will deal with it here.

./tools/kolla-ansible --configdir /home/ayoung/kolla   deploy

Once that ran, I wanted to test Keystone. I needed a keystone RC file. To get it:

./tools/kolla-ansible post-deploy

It put it in /etc/kolla/.

. /etc/kolla/admin-openrc.sh 
[ayoung@ayoung541 kolla]$ openstack token issue
+------------+----------------------------------+
| Field      | Value                            |
+------------+----------------------------------+
| expires    | 2016-02-08T05:51:39.447112Z      |
| id         | 4a4610849e7d45fdbd710613ff0b3138 |
| project_id | fdd0b0dcf45e46398b3f9b22d2ec1ab7 |
| user_id    | 47ba89e103564db399ffe83d8351d5b8 |
+------------+----------------------------------+


I have to admit that I removed this warning from the output:

/usr/lib/python2.7/site-packages/keyring/backends/Gnome.py:6: PyGIWarning: GnomeKeyring was imported without specifying a version first. Use gi.require_version('GnomeKeyring', '1.0') before import to ensure that the right version gets loaded.
  from gi.repository import GnomeKeyring

Huge thanks to SamYaple and inc0 (Michal Jastrzebski) for their help in getting me over the learning hump.

I think Kolla is fantastic. It will be central to my development for Keystone moving forward.

Kernel self protection introduction

At the last kernel summit, Kees Cook started calling for people to participate in the "Kernel Self Protection Project" to increase security or 'harden' the kernel. The goal is to focus on eliminating classes of security bugs or reducing the impact of those bugs. This is a slightly different goal than just finding and fixing security bugs. As long as developers are writing code, there are going to be security bugs. There is no way around this. "Self Protection" involves making sure that these bugs can't be used as part of an exploit. A classic example is the buffer overflow:

#include <stdio.h>

int main(int argc, char **argv)
{
    char buffer[10];

    gets(buffer);
    printf("you said %s\n", buffer);
    return 0;
}
For those who haven't seen this before, gets is a function which reads characters from stdin and stores them in buffer. There's no bounds checking involved on gets which means that it's easy to write outside the buffer. (Give it “a” and it’s fine. Give it “aaaaaaaaaaaaaaaaaaaaaaaaaaaa” and you’ve overrun the buffer) In this example, buffer is stack allocated which means that the corruption is going to happen on the stack. A clever attacker could use this to overwrite the return address on the stack and execute their own code in the buffer or jump to another location.

How would hardening features help here? Under normal circumstances, code should never run on the stack. Marking the stack as non-executable would prevent an attacker from running their own code. This still would not prevent a jump to other executable code though. An attacker needs to figure out where to jump to in order to do something 'interesting'. Randomization of code would make it more difficult to figure out where to jump. The end result of hardening is that there is still a bug to be fixed but the security implications are significantly reduced.

Many of the protections being discussed for the kernel are coming out of the grsecurity patches. These patches have been around for a very long time and provide a set of modern security features. The question always comes up "but why aren't they in the mainline kernel if they are so useful?". The simplest answer is that the authors and the kernel maintainers never came to an agreement about the patches so they were never merged. (The full history is available in various mailing lists for those who are interested. Google will find you plenty of interesting reading.) The patch authors have been doing the hard work of rebasing and reworking the patches to work with newer kernel versions ever since.

The hope is that this new push will lead to some of the grsecurity features being moved into mainline so more people can benefit from increased security. So far, a few features have been pulled out and sent out for review. Kernel development is an iterative process and a good amount of feedback has been given. The key is persistence in taking the feedback and continuing to send out new versions of the patches. I'll talk in a later post about my own experience in sending out patches for this project. The self protection wiki has more details about the overall project in depth and the types of problems being looked at.

February 09, 2016

[Short Tip] Use Red Hat Satellite 6 as an inventory resource in Ansible

Ansible Logo

Besides static file inventories, Ansible can use custom scripts to dynamically generate inventories or access other sources, for example a CMDB or a system management server – like Red Hat Satellite.
Luckily, Nick Strugnell has already written a custom script to use Satellite as an inventory source in Ansible.

After checking out the git repository, hammer.ini needs to be adjusted: at least host, username, password and organization must be set.

Afterwards, the script can be invoked directly to show the available hosts:

$ ansible -i ~/Github/ansible-satellite6/satellite-inventory.py all --list-hosts

This works with ansible CLI and playbook calls:

$ ansible-playbook -i ~/Github/ansible-satellite6/satellite-inventory.py apache-setup.yml
PLAY [apache setup] *********************************************************** 

GATHERING FACTS *************************************************************** 

The script works quite well – as long as the certificate you use on the Satellite server is trusted. Otherwise the value for self.ssl_verify must be set to False. Besides, it is a nice and simple way to access already existing inventory stores. This is important because Ansible is all about integration, and not about “throwing away and making new”.

Filed under: Cloud, Debian & Ubuntu, Fedora & RHEL, Linux, Microsoft, Shell, Short Tip, SUSE, Technology
Google celebrates Safer Internet Day 2016 with one great gift.
All you have to do is complete the Security Checkup on your Google account.
After that you will see this message: To help celebrate Safer Internet Day 2016, we added 2 GB of free Drive storage to your Google account because you completed the Security Checkup.
Segmentation faults with sphinx and pyenv

I’m a big fan of the pyenv project because it makes installing multiple python versions a simple process. However, I kept stumbling into a segmentation fault whenever I tried to build documentation with sphinx in Python 2.7.11:

writing output... [100%] unreleased
[app] emitting event: 'doctree-resolved'(<document: <section "current series release notes"...>>, u'unreleased')
[app] emitting event: 'html-page-context'(u'unreleased', 'page.html', {'file_suffix': '.html', 'has_source': True, 'show_sphinx': True, 'last
generating indices... genindex[app] emitting event: 'html-page-context'('genindex', 'genindex.html', {'pathto': <function pathto at 0x7f4279d51230>, 'file_suffix': '.html'
Segmentation fault (core dumped)

I tried a few different versions of sphinx, but the segmentation fault persisted. I did a quick reinstallation of Python 2.7.11 in case a system update of gcc/glibc had caused the problem:

pyenv install 2.7.11

The same segmentation fault showed up again. After a ton of Google searching, I found that the --enable-shared option allows pyenv to use shared Python libraries at compile time:

env PYTHON_CONFIGURE_OPTS="--enable-shared CC=clang" pyenv install -vk 2.7.11

That worked! I’m now able to run sphinx without segmentation faults.
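A quick way to sanity-check that a given interpreter was actually built with --enable-shared is to ask its build configuration (a sketch; run it with the pyenv-built python):

```python
# Py_ENABLE_SHARED is 1 when the interpreter was configured with
# --enable-shared, and 0 (or unset) otherwise.
import sysconfig

shared = sysconfig.get_config_var("Py_ENABLE_SHARED")
print("built with --enable-shared:", bool(shared))
```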

The post Segmentation faults with sphinx and pyenv appeared first on major.io.

F23-20160208 Updated Lives available. (Kernel:4.3.5-300)

Updated Lives for Fedora 23 are available in torrent and raw ISO download format from: (includes GNOME, KDE, LXDE, MATE, CINNAMON, SOAS, XFCE)

Fedora 23 Updated Lives

Additional Spins available from:

Fedora Spins

All Versions also available  via Official Torrent from:

All Official Fedora Torrents

Filed under: Community, Fedora, Fedora 22 Torrents
Tunir 0.13 is released and one year of development

Tunir 0.13 release is out. I already have a koji build in place for F23. There are two major feature updates in this release.

AWS support

Yes, we now support testing on AWS EC2. You will have to provide your access tokens in the job configuration along with the required AMI details. Tunir will boot up the instance, will run the tests, and then destroy the instance. There is documentation explaining the configuration details.

CentOS images

We now also support CentOS cloud images, and Vagrant images in Tunir. You can run the same tests on the CentOS images based on your need.

One year of development

I started Tunir on Jan 12 2015, which means it has more than one year of development history. At the beginning it was just a project to help me out with Fedora Cloud image testing, but it grew to the point where it is being used as the Autocloud backend to test Fedora Cloud and Vagrant images. We will soon start testing the Fedora AMI(s) too using the same. Within this one year, there were a total of 7 contributors to the project. In total we are at around 1k lines of Python code. I am personally using Tunir for various other projects too. One funny thing from the commit timings: no commits on Sundays :)

[Howto] Look up of external sources in Ansible

Ansible Logo Part of Ansible’s power comes from an easy integration with other systems. In this post I will cover how to look up data from external sources like DNS or Redis.


A tool for automation is only as good as it is capable to integrate it with the already existing environment – thus with other tools. Among various ways Ansible offers the possibility to look up Ansible variables from external stores like DNS, Redis, etcd or even generic INI or CSV files. This enables Ansible to easily access data which are stored – and changed, managed – outside of Ansible.


Ansible’s lookup feature is already installed by default.

Queries are executed on the host where the playbook is executed – in the case of Tower this would be the Tower host itself. Thus the node needs access to the resources which need to be queried.

Some lookup functions, for example for DNS or Redis servers, require additional Python libraries – on the host actually executing the queries! On Fedora, the python-dns package is necessary for DNS queries and the python-redis package for Redis queries.

Generic usage

The lookup function can be used the exact same way variables are used: curly brackets surround the lookup function, the result is placed where the variable would be. That means lookup functions can be used in the head of a playbook, inside the tasks, even in templates.

The lookup command itself has to list the plugin as well as the arguments for the plugin:

{{ lookup('plugin','arguments') }}



Entire files can be used as content of a variable. This is simply done via:

  content: "{{ lookup('file','lorem.txt') }}"

As a result, the variable has the entire content of the file. Note that the lookup of files always searches the files relative to the path of the actual playbook, not relative to the path where the command is executed.

Also, the lookup might fail when the file itself contains quote characters.
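Put together, a minimal playbook using the file lookup might look like this (a sketch following the document's own examples; lorem.txt is assumed to sit next to the playbook):

```yaml
- name: demo file lookup
  hosts: neon
  vars:
    # the entire file content ends up in this variable
    content: "{{ lookup('file','lorem.txt') }}"
  tasks:
    - name: show the file content
      debug: msg="{{ content }}"
```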


While the file lookup is pretty simple and generic, the CSV lookup module gives the ability to access values of given keys in a CSV file. An optional parameter can identify the appropriate column. For example, if the following CSV file is given:

$ cat gamma.csv

Now the lookup function for CSV files can access the lines identified by keys which are compared to the values of the first column. The following example looks up the key dinner and gives back the entry of the third column: {{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}.

Inserted in a playbook, this looks like:

$ ansible-playbook examples/lookup.yml

PLAY [demo lookups] *********************************************************** 

GATHERING FACTS *************************************************************** 
ok: [neon]

TASK: [lookup of a csv file] ************************************************** 
ok: [neon] => {
    "msg": "noodles"
}

PLAY RECAP ******************************************************************** 
neon                       : ok=2    changed=0    unreachable=0    failed=0

The corresponding playbook gives out the variable via the debug module:

- name: demo lookups
  hosts: neon

  tasks:
    - name: lookup of a csv file
      debug: msg="{{ lookup('csvfile','dinner file=gamma.csv delimiter=, col=2') }}"


The DNS lookup is particularly interesting in cases where the local DNS provides a lot of information like SSH fingerprints or the MX record.

The DNS lookup plugin is called dig – like the command line client dig. As arguments, the plugin takes a domain name and the DNS type: {{ lookup('dig', 'redhat.com. qtype=MX') }}. Another way to hand over the type argument is via slash: {{ lookup('dig', 'redhat.com./MX') }}

The result for this example is:

TASK: [lookup of dns dig entries] ********************************************* 
ok: [neon] => {
    "msg": "10 int-mx.corp.redhat.com."
}


It gets even more interesting when existing databases are queried. Ansible lookup supports for example Redis databases. The plugin takes as argument the entire URL: redis://$URL:$PORT,$KEY.

For example, to query a local Redis server for the key dinner:

  - name: lookup of redis entries
    debug: msg="{{ lookup('redis_kv', 'redis://localhost:6379,dinner') }}" 

The result is:

TASK: [lookup of redis entries] *********************************************** 
ok: [neon] => {
    "msg": "noodles"
}


As already mentioned, lookups can not only be used in Playbooks, but also directly in templates. For example, given the template code:

$ cat template.j2
Red Hat MX: {{ lookup('dig', 'redhat.com./MX') }}
$ cat template.conf
Red Hat MX: 10 mx2.redhat.com.,5 mx1.redhat.com.


As shown, Ansible's lookup plugins provide many possibilities to integrate Ansible with existing tools and environments which already contain valuable data about the systems. They are easy to use and fit well into the existing Ansible concepts: just drop a lookup where a variable would be, and it works.

I am looking forward to more lookup modules in the future – I’d love to see a generic “http” and a generic “SQL” plugin, even with the ability to provide credentials, although these features can be partly realized with already existing modules.

Filed under: Business, Cloud, Debian & Ubuntu, Fedora & RHEL, HowTo, Linux, RPM, Shell, SUSE, Technology
Fedora Infrastructure – Year In Review


The Infrastructure Team consists of dedicated volunteers and professionals managing the servers, building the tools and utilities, and creating new applications to make Fedora development a smoother process. We’re located all over the globe and communicate primarily by IRC and email.

Fedora Infrastructure

Infrastructure Highlights

Ansible Migration

We believe Ansible is the best new technology for systems deployment and management. This year, the Infrastructure team moved all remaining Puppet recipes (78 at start of FY2016) in the infrastructure to Ansible playbooks.

The automation provided by Ansible allows us to quickly fix, rebuild, and scale our existing services and deploy new services. Additionally, we worked with Ansible upstream to test the new Ansible 2.0 and have just recently moved our control host to 2.0.

Fedora Infrastructure Ansible Repository

RHEL 6 to 7 conversion

As we moved hosts over from Puppet to Ansible, we used the opportunity to rebuild all of our hosts on top of RHEL 7 and dealt with all the yak-shaving entailed therein. The last RHEL 6 instances (aside from a few that need to stay, like Jenkins RHEL 6 builder) should go away next year.

OpenStack migration

We migrated our old OpenStack instance to a newer version and moved out from under the cloud.fedoraproject.org domain to fedorainfracloud.org for HSTS reasons.

Development Highlights


Our very own git forge!  It just got a face lift last week and we think it’s pretty cool.


HyperKitty is a web front end to the new Mailman version 3 which allows users to browse topics in a more familiar, forum-like interface. We will complete development of this application and deploy for use with Fedora mailing lists.


Koschei is a continuous integration service for Fedora packages. Koschei is aimed at helping Fedora developers by detecting problems as soon as they appear in Rawhide. It tries to detect package FTBFS in Rawhide by scratch-building them in Koji.


Pronounced as bo-dee, it is a Buddhist term for the wisdom by which one attains enlightenment. Bodhi is a modular, web-based system that facilitates the process of publishing package updates for Fedora. It maintains a single stage of repositories by adding, updating, and removing packages.

MirrorManager 2

This started with a FAD at the end of 2014, but was finished and deployed in 2015.  The new MirrorManager 2 is written on top of a modern framework and has many more people familiar with its code now.


This service got a partial rewrite this year, attempting to resolve some data stability issues.


A new service that provides a JSON API to the contents of a yum repository’s metadata (a useful service for our other services).  “mdapi” means “metadata api“.


møte, conceived only last May, handles the organization and serving of MeetBot logs.  møte is a web-based graphical interface and repository for the IRC logs produced by MeetBot, replacing the dated system of serving IRC logs through an httpd directory listing.

Other teams have been doing really cool stuff that ends up making its way in through the Infrastructure team, but we really can’t claim credit for it. Notably, Release Engineering (releng) has been enhancing their automation and working with us to stand up supportive services. QA-devel has done crazy awesome work with taskotron and autoQA.  They can talk more about all that.

Goals for 2016

We tend to set goals for the next year around April each year, and so we’re not
quite ready to commit to a list, but here are some ideas we’ve been batting around:

  • fedora-hubs is a project that was brainstormed, designed, and prototyped
    throughout 2015, and we hope to bring it up to maturity in the coming year.
    Read mizmo’s writeups on it for a solid introduction.

  • We use nagios and collectd for monitoring our deployments, but we need to
    rethink how we’re approaching the whole operation; we’ll likely be revamping
    all that this year.
  • And… surely there are other plans lurking around the team that we just
    aren’t ready to articulate yet.  More to come!


We live in interesting times.  New directions in Fedora (the Council, Releng, NEXT, etc.) mean there’s no shortage of infrastructure problems to solve.  If you’re interested in helping out, check out our wiki page and join our infrastructure meetings to follow along.

The post Fedora Infrastructure – Year In Review appeared first on Fedora Community Blog.

InstallParty Phnom Penh

Last Saturday, we had another event with the PPLUG. Back on Software Freedom Day 2015, we agreed that we should have some InstallParties here in Phnom Penh to grow the number of Linux users. It took us a while until most of us had time for such an event, but on 30th January we all did. A venue was found very quickly: the National Institute of Posts, Telecommunications and Information Communication Technology (NIPTICT) offered their rooms and also supplied us with drinking water and the necessary equipment.

We started at 1pm and the room filled up very quickly; around 30 people showed up who wanted to know more about Linux and Free Software. We started with an introduction talk on what Free Software is, and I later gave an introduction to the most common Linux distributions.
We then demonstrated installation processes, gave an introduction to the file hierarchy, and took the first steps with the attendees in their new operating system. Even if their choice was not Fedora, it's a win for the Linux community in Phnom Penh; the Phnom Penh LUG especially got new life with this event.

Xfce-4.12 for EL-7
With the update to RHEL 7.2, GNOME was updated to the next stable version. Along those lines, for Xfce users, I would like to update the Xfce version to Xfce 4.12. However, before I push a mega update, I would like some feedback.

So, I have created an Xfce 4.12 for EPEL-7 repo. If you have some time and a virtual machine and/or a test system running RHEL 7.2, CentOS 7.2, or Scientific Linux 7.2, please use the COPR (details below) to install/update to the Xfce 4.12 packages.

Here is the link to the COPR - xfce412-epel7 

Here are some screenshots from my Centos VM.


Potential Issues

  • There may be some version issues since I just built the latest Fedora SRPMs. 
  • xfce4-pulseaudio-plugin should obsolete xfce4-mixer. If not, please report to me. 
  • There may be some xfce panel plugins missing. Please report to me by email and I will build whatever is missing.

As a reminder, these are NOT official packages. Please do NOT use Bugzilla to report issues with them; please do report any issues to me by email.

My contact information is here - About page

My GPG key - E5C8BC67

February 08, 2016

Shop, Bikes, Fedora, Checkout

Shop, Bikes, Fedora, Checkout, Man, 1966. This image, and other public domain vintage images like it, are available on GadoImages.com.


Copy and CudaDrive will be discontinued.
Today I got a questionable mail survey about cloud services, and it got me thinking about the cloud and prices.
Then, after my Dropbox ran low on space, I saw the news about Copy and CudaDrive: both services will be closed.
This is the message from the official Copy website:

Copy and CudaDrive services will be discontinued. We are announcing today that the Copy and CudaDrive services will be discontinued on May 1, 2016. Copy and CudaDrive have provided easy-to-use cloud file services and sharing functionality to millions of users the past 4+ years. However, as our business focus has shifted, we had to make the difficult decision to discontinue the Copy and CudaDrive services and allocate those resources elsewhere. For more information on this decision, please view the blog post from Rod Mathews our VP & GM, Storage Business.
Ring – The Free & Open Source Text/Audio/Video Messenger As Alternative To WhatsApp/Telegram/Facebook Messenger: Review And Video Demonstration
Introduction Welcome to the new part of the Free Messenger Saga (Signal, Tox, Matrix, Tor Messenger). It’s very good that in the modern world of Facebook, WhatsApp and Skype some people create new software for communication. Ring – an open source platform for text chat, voice and video communications, released under one of the best free … Continue reading Ring – The Free & Open Source Text/Audio/Video Messenger As Alternative To WhatsApp/Telegram/Facebook Messenger: Review And Video Demonstration
Anonymous reviews in GNOME Software

Choosing an application to install is hard when there are lots of possible projects matching a specific search term. We already list applications based on the integration level and with useful metrics like “is it translated in my language” and this makes sure that high quality applications are listed near the top of the results. For more information about an application we often want a more balanced view than the PR speak or unfounded claims of the upstream project. This is where user-contributed reviews come in.


To get a user to contribute a review (which takes time) we need to make the process as easy as possible. Making the user create a user account on yet-another-webservice would make this much harder and increase the barrier to participation to the point that very few people would contribute reviews. If anonymous reviewing does not work, the plan is to use some kind of attestation service so you can use a GMail or Facebook account to confirm your identity. At this point I’m hoping people will just be nice to each other and not abuse the service, although this reviewing facility will go away if it starts being misused.

Designing an anonymous service is hard when you have to be resilient against a socially awkward programmer with specific political ideologies. If you don’t know any people that match this description you have obviously never been subscribed to fedora-devel or memo-list.

Obviously when contacting a web service you share your IP address. This isn’t enough to uniquely identify a machine and user, which we want for the following reasons:

  • Allowing users to retract only their own reviews
  • Stopping users up or down-voting the same review multiple times

A compromise would be to send a hash of two things that identify the user and machine. In GNOME Software we’re using a SHA1 hash of the machine-id and the UNIX username, along with a salt, although this “user_id” is only specified as a string and the format is not checked.
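A sketch of that scheme in Python (the salt value and field layout here are illustrative assumptions, not the exact format GNOME Software uses):

```python
import hashlib

SALT = "gnome-software[review]"  # hypothetical salt value

def user_hash(machine_id: str, username: str) -> str:
    """Stable pseudonymous id: the same machine + user always hashes the same,
    but the server never sees the raw machine-id or username."""
    data = "%s[%s:%s]" % (SALT, machine_id, username)
    return hashlib.sha1(data.encode("utf-8")).hexdigest()

print(user_hash("ab5602db400c4af2871e32f2ec6a8f3a", "hughsie"))
```

Because the hash is stable, the server can enforce one vote per review per user_id and let a user retract their own review, without any account or password.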

For projects like RHEL, where we care very much what comments are shown to paying customers, we definitely want reviews to be pre-approved and checked before being shown to customers. For distros like Fedora we don’t have this luxury and so we’re going to rely on the community to self-regulate reviews. Reviews are either up-voted or down-voted according to how useful they are, along with the nuclear option of marking the review as abusive.


By specifying the user's current locale we can sort the potential application reviews according to a heuristic that we’re still working on. Generally we want to prefer useful reviews in the user's locale and hide ones that have been marked as abusive, and we also want to indicate the user's self-review so they can remove it later if required. We also want to prioritize reviews for the current application version compared to really old versions of these applications.

Comments welcome!

Efficient Multiplexing for Spark RDDs

In this post I'm going to propose a new abstract operation on Spark RDDs -- multiplexing -- that makes some categories of operations on RDDs both easier to program and in many cases much faster.

My main working example will be the operation of splitting a collection of data elements into N randomly-selected subsamples. This operation is quite common in machine learning, for the purpose of dividing data into a training and testing set, or the related task of creating folds for cross-validation.

Consider the current standard RDD method for accomplishing this task, randomSplit(). This method takes a collection of N weights, and returns N output RDDs, each of which contains a randomly-sampled subset of the input, proportional to the corresponding weight. The randomSplit() method generates the jth output by running a random number generator (RNG) for each input data element and accepting all elements which are in the corresponding jth (normalized) weight range. As a diagram, the process looks like this at each RDD partition:

Figure 1

The observation I want to draw attention to is that to produce the N output RDDs, it has to run a random sampling over every element in the input for each output. So if you are splitting into 10 outputs (e.g. for a 10-fold cross-validation), you are re-sampling your input 10 times, the only difference being that each output is created using a different acceptance range for the RNG output.

To see what this looks like in code, consider a simplified version of random splitting that just takes an integer n and always produces (n) equally-weighted outputs:

```scala
def splitSample[T :ClassTag](rdd: RDD[T], n: Int): Seq[RDD[T]] = {
  Vector.tabulate(n) { j =>
    rdd.mapPartitions { data =>
      data.filter { unused => scala.util.Random.nextInt(n) == j }
    }
  }
}
```

(Note that for this method to operate correctly, the RNG seed must be set to the same value each time, or the data will not be correctly partitioned.)

While this approach to random splitting works fine, resampling the same data N times is somewhat wasteful. However, it is possible to re-organize the computation so that the input data is sampled only once. The idea is to run the RNG once per data element, and save the element into a randomly-chosen collection. To make this work in the RDD compute model, all N output collections reside in a single row of an intermediate RDD -- a "manifold" RDD. Each output RDD then takes its data from the corresponding collection in the manifold RDD, as in this diagram:

Figure 2

If you abstract the diagram above into a generalized operation, you end up with methods that might look like the following:

```scala
def muxPartitions[U :ClassTag](n: Int, f: (Int, Iterator[T]) => Seq[U],
    persist: StorageLevel): Seq[RDD[U]] = {
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  Vector.tabulate(n) { j =>
    mux.mapPartitions { itr => Iterator.single(itr.next()(j)) }
  }
}

def flatMuxPartitions[U :ClassTag](n: Int, f: (Int, Iterator[T]) => Seq[TraversableOnce[U]],
    persist: StorageLevel): Seq[RDD[U]] = {
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  Vector.tabulate(n) { j =>
    mux.mapPartitions { itr => itr.next()(j).toIterator }
  }
}
```

Here, the operation of sampling is generalized to any user-supplied function that maps RDD partition data into a sequence of objects that are computed in a single pass, and then multiplexed to the final user-visible outputs. Note that these functions take a StorageLevel argument that can be used to control the caching level of the internal "manifold" RDD. This typically defaults to MEMORY_ONLY, so that the computation can be saved and re-used for efficiency.

An efficient split-sampling method based on multiplexing, as described above, might be written using flatMuxPartitions as follows:

```scala
def splitSampleMux[T :ClassTag](rdd: RDD[T], n: Int,
    persist: StorageLevel, seed: Long): Seq[RDD[T]] =
  rdd.flatMuxPartitions(n, (id: Int, data: Iterator[T]) => {
    scala.util.Random.setSeed(id.toLong * seed)
    val samples = Vector.fill(n) { scala.collection.mutable.ArrayBuffer.empty[T] }
    data.foreach { e => samples(scala.util.Random.nextInt(n)) += e }
    samples
  }, persist)
```

To test whether multiplexed RDDs actually improve compute efficiency, I collected run-time data at various split values of n (from 1 to 10), for both the non-multiplexing logic (equivalent to the standard randomSplit) and the multiplexed version:

Figure 3

As the timing data above show, the computation required to run a non-multiplexed version grows linearly with n, just as predicted. The multiplexed version, by computing the (n) outputs in a single pass, takes a nearly constant amount of time regardless of how many samples the input is split into.

There are other potential applications for multiplexed RDDs. Consider the following tuple-based versions of multiplexing:

```scala
def muxPartitions[U1 :ClassTag, U2 :ClassTag](f: (Int, Iterator[T]) => (U1, U2),
    persist: StorageLevel): (RDD[U1], RDD[U2]) = {
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  val mux1 = mux.mapPartitions(itr => Iterator.single(itr.next._1))
  val mux2 = mux.mapPartitions(itr => Iterator.single(itr.next._2))
  (mux1, mux2)
}

def flatMuxPartitions[U1 :ClassTag, U2 :ClassTag](
    f: (Int, Iterator[T]) => (TraversableOnce[U1], TraversableOnce[U2]),
    persist: StorageLevel): (RDD[U1], RDD[U2]) = {
  val mux = self.mapPartitionsWithIndex { case (id, itr) =>
    Iterator.single(f(id, itr))
  }.persist(persist)
  val mux1 = mux.mapPartitions(itr => itr.next._1.toIterator)
  val mux2 = mux.mapPartitions(itr => itr.next._2.toIterator)
  (mux1, mux2)
}
```

Suppose you wanted to run an input-validation filter on some data, sending the data that pass validation into one RDD, and data that failed into a second RDD, paired with information about the error that occurred. Data validation is a potentially expensive operation. With multiplexing, you can easily write the filter to operate in a single efficient pass to obtain both the valid stream and the stream of error-data:

```scala
def validate[T :ClassTag](rdd: RDD[T], validator: T => Boolean): (RDD[T], RDD[(T, Exception)]) = {
  rdd.flatMuxPartitions((id: Int, data: Iterator[T]) => {
    val valid = scala.collection.mutable.ArrayBuffer.empty[T]
    val bad = scala.collection.mutable.ArrayBuffer.empty[(T, Exception)]
    data.foreach { e =>
      try {
        if (!validator(e)) throw new Exception("returned false")
        valid += e
      } catch {
        case err: Exception => bad += ((e, err))
      }
    }
    (valid, bad)
  })
}
```
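Setting the RDD machinery aside, the routing logic inside that closure can be sketched in plain Python (illustration only; the names are mine, not the silex API):

```python
def validate(data, validator):
    """Single pass over the data: items that pass the validator go to `valid`,
    failures are paired with the exception describing what went wrong."""
    valid, bad = [], []
    for e in data:
        try:
            if not validator(e):
                raise ValueError("returned false")
            valid.append(e)
        except Exception as err:
            bad.append((e, err))
    return valid, bad
```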

RDD multiplexing is currently a PR against the silex project. The code I used to run the timing experiments above is saved for posterity here.

Happy multiplexing!

Go(lang) meets Fedora

Is it this?

Goban (Editor’s note: don’t get it? ^^ See [1])

No, it is…

Gopher image reproduced from work created and shared by Google and Renee French on the golang.org page, used according to terms described in the Creative Commons Attribution 3.0 license.

No discriminatory tariffs for data services in India

Finally, we have won. The Telecom Regulatory Authority of India issued a press release a short time ago stating that no one can charge different prices for different services on the Internet. The fight was on an epic scale: one side spent more than 100 million on advertisements and lobbying. The community fought back in the form of #SaveTheInternet, and it worked.

In case you are wondering what this is about, you can start by reading the first post from Mahesh Murthy, and the reply to Facebook's response. Trust me, it makes for amazing reading. There is also this excellent report from Lauren Smiley, as well as a few other amazing replies from TRAI to Facebook on the same subject.

HFOSS: Reviewing “What is Open Source?”, Steve Weber
What is Open Source? - Steve Weber


This blog post is part of an assignment for my Humanitarian Free and Open Source Software Development course at the Rochester Institute of Technology. For this assignment, we are tasked with reading Chapter 3 of Steve Weber’s “The Success of Open Source“. The summary of the reading is found below.


  • Steve Weber, professor at the University of California, Berkeley
  • The Success of Open Source
  • August 22nd, 2005

The Gist

You’re in a social setting. Someone says “Hey, did you ever read X?” You quickly respond “Oh heck yeah! X was {awesome,terrible}!” The person next to you in the circle says “Oh snap, I didn’t read X. What was it about…?” You have exactly 3 lines, MAX, to prove you are not a hipster—and “The Gist” is those 3 lines.

Chapter 3, “What is Open Source?”, essentially aims to define what open source development is as a “thing”, describe the process by which it works, the problems it tries to solve, and how it solves them. Weber lightly touches on many core concepts of open source, such as licensing, who is a contributor and what makes someone a contributor in open source, and how this new method of collaboration has positives and negatives, something that reflects human nature.

The Good

The three best things I took out of this excerpt were:

  • The eight general principles of what people do in the open source process (starting on pg. 73)
    • I thought this was a good analysis and breakdown of how things tend to work in open source, and is something I think I will end up referring to even in the future.
  • Analyzing Linus Torvalds’ role as the “Benevolent Dictator for Life” in the Linux project and how that is reflective of some open source communities
  • How exactly open source licensing tries to build a positive and open social structure beneficial to the user (pg. 85)
    • Licensing is something I’m passionate about in particular with open source, and the author’s words resounded with my own thoughts about licensing. It’s leaving me wanting to develop my own thoughts and opinions on open source licensing further.

The Bad

My least favorite things from this excerpt were:

  • BSD-style licenses are prohibitive to real collaboration (pg. 63)
    • I disagree with this, especially as it seems to have a particular favoring towards the GPL. While I may also represent the opposite bias, I don’t think these licenses necessarily make a project “not vitally collaborative on a very large scale” as the author states.
  • Principle #8 of the open source process: “Talk a lot” (pg. 81-82)
    • I agree, talking happens a lot in open source… on hot-button, controversial topics. Usually this happens on the development lists more than anywhere else. But what about the other areas? For larger projects, there are more areas that matter, such as marketing, translations / globalization, on-boarding new contributors, and more. There are not enough people talking in these groups. Strictly from a code perspective, topics are many and discussion is plentiful. But in other areas of a project? Unless you have people being paid full-time to work on them, volunteers are few and far between.
    • I also partly disagree about the vehemence of open source discussion. I do not think anyone can deny that lists like the Linux Kernel Mailing List are not the most friendly of places. But there are projects that have clearly defined codes of conduct and expectations for behavior. In my experience in Fedora, the community was beyond welcoming when I began contributing, and I never experienced any of the harshness or close-mindedness that is sometimes associated with open-source development. The example the author used may have been the norm then, but over the years I think others have seen that the harshness is not sustainable and closes the door on new, valuable insight from potential contributors, and they try to reflect this in their own projects.
  • Lastly, I hope dzho will let me pass on the third “bad” thing for the list, as the above two topics were the only things that distinctly stuck out to me.

The Questions

Three questions I had after reading the chapter were:

  • Eleven years have passed since the book was published. What does the author think of the open source scene in 2016?
  • In particular, are open source communities largely as harsh as he originally described, or does he feel that a new era of inclusiveness in open source projects has begun or is beginning?
  • What made you believe that BSD-style licenses are contradictory to real collaboration in open source?

Your Review

Imagine you are on Yelp, Amazon, eBay, Netflix, or any other online community that has customer reviews. This is the message that you want to leave behind, to represent yourself, and inform (or warn) others. You should add a quick rating system of your choosing (X/Y stars, X thumbs up) as part of the review.

★ ★ ★ ★ ☆

Overall, I rate this article four out of five stars. It does a fairly effective job of analyzing open source in the twenty-first century. While some small sections of the book may have dated in the eleven years since its original publication, most of the content is still very much relevant and very much important. I would share this chapter with a friend or another student who was seriously considering getting involved with open source. That said, I might explicitly add a few extra comments of my own on the small subset of the chapter that I disagreed with.

The post HFOSS: Reviewing “What is Open Source?”, Steve Weber appeared first on Justin W. Flory's Blog.

Getting started with Shotwell

Shotwell is a simple yet powerful program that comes installed with most flavors of Fedora, such as Fedora Workstation and the Cinnamon desktop spin. It’s also available for install on any other desktop or spin. You can use it as either a photo viewer and organizer, or an editor.

Viewing and organizing photos with Shotwell

Shotwell works as a simple viewer for various formats, including PNG, JPG, and TIFF. It also organizes photos, either by copying files into its library or by showing and managing them in place. This gives the user wide flexibility in where to import photos from. You can use it to manage different devices like cameras, phones, and more. Users of Apple computers may find it similar in functionality to iPhoto.

Photos can be organized into events (usually sorted by date) or by custom albums created by the user.

Shotwell, a viewer and organizer

Viewing and organizing photos

Shotwell as an editor

Shotwell also makes a simple and useful editor, the kind of program that “just works.” You can crop, resize, rotate, and straighten images. You can also automatically enhance and change some elements of your photos like hue and contrast.

Editing photos in Shotwell

Plenty of editing choices and options

If you resize an image and want to re-save it under a new format, it’s easy! Make your adjustments and then use the “Save as” menu option. If you need to adjust the scaling constraints, that’s also possible. You can have Shotwell do it for you automatically or you can manually enter the values you wish to use. Need to correct red-eye? There’s also a tool to make any corrections to some extra shiny eyes at the bottom of the editor mode.

Get Shotwell today

Shotwell is a useful tool for keeping photos organized and making minor edits and corrections to pictures. It’s also a well-designed, intuitive, and lightweight photo-managing program. If you are using Fedora 23 Workstation or some other desktop spins, you might already have it installed. If you don’t see it anywhere, you can install it from the Software application on Workstation.

If you are using another desktop, you can easily install it from the command line. Open a terminal and enter the following command.

$ sudo dnf install shotwell

Confirm the download and you will have Shotwell on your desktop!

In a second part, I will comment on alternative photo viewing and editing programs available in Fedora, such as Gwenview, Gthumb, and GNOME Photos.

Image courtesy Giuseppe Milo – originally posted to Flickr as Newgrange – Co. Meath, Ireland – Travel photography

I spent last week at Devconf in the Czech Republic. I didn't have time to write anything new and compelling, but I did give a talk about why everything seems to be on fire.


I explore what's going on right now, why things look like they're on fire, and how we can start to fix this. Our problem isn't technology, it's people. We're good at technology problems; we're bad at people problems.

Give the talk a listen. Let me know what you think, I hope to peddle this message as far and wide as possible.

Join the conversation, hit me up on twitter, I'm @joshbressers
Dealing with Duplicate SSL certs from FreeIPA

I reinstalled https://ipa.younglogic.net. My browser started complaining when I tried to visit it: the serial number of the TLS certificate is a duplicate. If I am seeing this, anyone else who looked at the site in the past is going to see it too, so I don't want to just hack my browser setup to ignore it. Here's how I fixed it:

FreeIPA uses Certmonger to request and monitor certificates. The Certmonger daemon runs on the server that owns the certificate, performs the tricky request format generation, and then waits for an answer. So, in order to update the IPA server, I am going to tell Certmonger to request a renewal of the HTTPS TLS certificate.

The tool to talk to certmonger is called getcert. First, find the certificate. We know it is going to be stored in the Apache HTTPD config directory:

sudo getcert list
Number of certificates and requests being tracked: 8.
Request ID '20160201142947':
	stuck: no
	key pair storage: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='auditSigningCert cert-pki-ca',token='NSS Certificate DB',pin set
	certificate: type=NSSDB,location='/etc/pki/pki-tomcat/alias',nickname='auditSigningCert cert-pki-ca',token='NSS Certificate DB'
	CA: dogtag-ipa-ca-renew-agent
	issuer: CN=Certificate Authority,O=YOUNGLOGIC.NET
	subject: CN=CA Audit,O=YOUNGLOGIC.NET
	expires: 2018-01-21 14:29:08 UTC
	key usage: digitalSignature,nonRepudiation
	pre-save command: /usr/lib64/ipa/certmonger/stop_pkicad
	post-save command: /usr/lib64/ipa/certmonger/renew_ca_cert "auditSigningCert cert-pki-ca"
	track: yes
	auto-renew: yes
Request ID '20160201143116':
	stuck: no
	key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
	certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
	issuer: CN=Certificate Authority,O=YOUNGLOGIC.NET
	subject: CN=ipa.younglogic.net,O=YOUNGLOGIC.NET
	expires: 2018-02-01 14:31:15 UTC
	key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
	eku: id-kp-serverAuth,id-kp-clientAuth
	pre-save command: 
	post-save command: /usr/lib64/ipa/certmonger/restart_httpd
	track: yes
	auto-renew: yes

There are many in there, but the one we care about is the last one, with the Request ID of 20160201143116. It is in the NSS database stored in /etc/httpd/alias. To request a new certificate, use the command:

sudo ipa-getcert resubmit -i 20160201143116

While this is an ipa-specific command, it is essentially telling certmonger to renew the certificate. After we run it, I can look at the list of certificates again and see that the “expires” value has been updated:

Request ID '20160201143116':
	stuck: no
	key pair storage: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB',pinfile='/etc/httpd/alias/pwdfile.txt'
	certificate: type=NSSDB,location='/etc/httpd/alias',nickname='Server-Cert',token='NSS Certificate DB'
	issuer: CN=Certificate Authority,O=YOUNGLOGIC.NET
	subject: CN=ipa.younglogic.net,O=YOUNGLOGIC.NET
	expires: 2018-02-07 02:29:42 UTC
	principal name: HTTP/ipa.younglogic.net@YOUNGLOGIC.NET
	key usage: digitalSignature,nonRepudiation,keyEncipherment,dataEncipherment
	eku: id-kp-serverAuth,id-kp-clientAuth
	pre-save command: 
	post-save command: /usr/lib64/ipa/certmonger/restart_httpd

Now when I refresh my browser window, Firefox no longer complains about the repeated serial number. Now it complains that “the site administrator has incorrectly configured the Security for this site” because I am using a CA cert that it does not know about. But now I can move on and re-install the CA cert.

Video: Fedora 23 LXC - Debian SID and CentOS 7 XFCE containers via X2Go

Being a LONG-TIME OpenVZ user, I've been avoiding LXC somewhat, mainly because it wasn't quite done yet. I thought I'd give it a try on Fedora 23 to see how well it works... and the answer is, surprisingly... fairly well. I made two screencasts (without sound). I just used the lxc-{whatever} tools rather than virt-manager. Both containers use the default network config (DHCP handed out via DNSMasq provided by libvirtd), which is NAT'ed private addresses... and were automatically configured and just worked.

Here's a list of all of the container OS Templates they offer on x86:

centos 6 amd64 default 20160205_02:16
centos 6 i386 default 20160205_02:16
centos 7 amd64 default 20160205_02:16
debian jessie amd64 default 20160204_22:42
debian jessie i386 default 20160204_22:42
debian sid amd64 default 20160207_11:58
debian sid i386 default 20160204_22:42
debian squeeze amd64 default 20160204_22:42
debian squeeze i386 default 20160204_22:42
debian wheezy amd64 default 20160204_22:42
debian wheezy i386 default 20160204_22:42
fedora 21 amd64 default 20160205_01:27
fedora 21 i386 default 20160205_01:27
fedora 22 amd64 default 20160205_01:27
fedora 22 i386 default 20160205_01:27
gentoo current amd64 default 20160205_14:12
gentoo current i386 default 20160205_14:12
opensuse 12.3 amd64 default 20160205_00:53
opensuse 12.3 i386 default 20160205_00:53
oracle 6.5 amd64 default 20160205_11:40
oracle 6.5 i386 default 20160205_11:40
plamo 5.x amd64 default 20160207_11:59
plamo 5.x i386 default 20160207_13:13
ubuntu precise amd64 default 20160205_03:49
ubuntu precise i386 default 20160205_03:49
ubuntu trusty amd64 default 20160205_03:49
ubuntu trusty i386 default 20160205_03:49
ubuntu trusty ppc64el default 20160201_03:49
ubuntu vivid amd64 default 20160205_03:49
ubuntu vivid i386 default 20160205_03:49
ubuntu wily amd64 default 20160205_03:49
ubuntu wily i386 default 20160205_03:49
ubuntu xenial amd64 default 20160205_03:49
ubuntu xenial i386 default 20160205_03:49

The first one shows the basics of LXC installation on Fedora 23 (per their wiki page on the subject) as well as creating a Debian SID container, getting it going, installing a lot of software on it including XFCE and most common desktop software... and accessing it via X2Go... and configuring XFCE the way I like it. This one was made on my home laptop and my network is a bit slow so I cut out a few long portions where packages were downloading and installing but everything else is there... yes including quite a bit of waiting for stuff to happen.

<video controls="controls" height="454" poster="/files/vp9/lxc-on-fedora-23-debian-sid-GUI-container.png" preload="none" src="/files/vp9/lxc-on-fedora-23-debian-sid-GUI-container.webm" width="720"></video>
lxc-on-fedora-23-debian-sid-GUI-container.webm (25 MB, ~41.5 minutes)

The second video is very similar to the first but it is a remote ssh session with my work machine (where the network is way faster) and shows making a CentOS 7 container, installing XFCE and the same common desktop software, and then connecting to it via X2Go using an ssh proxy, and configuring XFCE how I like it. It was done in a single, un-edited take and includes a bit of waiting as stuff downloads and installs... so you get the complete thing from start to finish.

<video controls="controls" height="436" poster="/files/vp9/lxc-on-fedora-23-centos-7-GUI-container.png" preload="none" src="/files/vp9/lxc-on-fedora-23-centos-7-GUI-container.webm" width="720"></video>
lxc-on-fedora-23-centos-7-GUI-container.webm (22.7 MB, ~31 minutes)

I recorded the screencasts with vokoscreen at 25 frames per second @ slightly larger than 720p resolution... and then converted them to webm (vp9) with ffmpeg @ 200kbit video. They compressed down amazingly well. I recommend playback in full-screen, as the quality is great. Enjoy!


February 07, 2016

DevConf 2016 is over
DevConf 2016 is over. I was mining social networks to get something interesting onto the screens in the corridors. DevConf was really big, and it became a trending topic. Unfortunately, this caused the hijacking of the #devconfcz hashtag. I am really sorry for that. I have collected all the pictures from social networks into this post.
New release: usbguard-0.4

I’m not dead yet. And the project is still alive too. It’s been a while since the last release, so it’s time to do another. The biggest improvements were made to the rule language by introducing the rule conditions and to the CLI by introducing a new command, usbguard, for interacting with a running USBGuard daemon instance and for generating initial policies.

Here’s an example of what you can do with the new rule conditions feature:

allow with-interface one-of { 03:00:01 03:01:01 } if !rule-applied

This one-liner in the policy ensures that a USB keyboard will be authorized only once. If somebody connects another USB keyboard, it won't be allowed. Of course, if you disconnect yours, then it won't be authorized either when connected again. Another, somewhat similar example is this:

allow with-interface one-of { 03:00:01 03:01:01 } if !allowed-matches(with-interface one-of { 03:00:01 03:01:01 })

That one allows a USB keyboard to connect only if no other keyboard is currently connected. You can narrow down the match to a specific type, serial number, or whatever else the rule language supports, including other conditions.
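As a sketch of how conditions combine with device attributes (the device ID and serial number below are hypothetical; consult usbguard-rules.conf(5) for the exact grammar), a rule could be pinned to one particular keyboard:

```
allow 046d:c31c serial "0001" with-interface one-of { 03:00:01 03:01:01 } if !rule-applied
```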

Another feature that improves the usability of USBGuard is the new command-line interface, which allows you, among other things, to generate initial policies for your system. To quickly generate a policy based on all the connected USB devices, run:

# usbguard generate-policy > rules.conf
# vi rules.conf
(review/modify the generated rule set)
# cp rules.conf /etc/usbguard/rules.conf
# chmod 0600 /etc/usbguard/rules.conf
# systemctl restart usbguard

There are some options to tweak the resulting policy. See the usbguard(1) manual page for further details.

And last but not least, thanks to Philipp Deppenwiese, USBGuard is now packaged for the Gentoo Linux distribution.

Major changes

  • The daemon is now capable of dropping process capabilities and uses a seccomp based syscall whitelist. Options to enable these features were added to the usbguard-daemon command.
  • Devices connected at the start of the daemon are now recognized and the DevicePresent signal is sent for each of them.
  • New configuration options for setting the implicit policy target and how to handle the present devices are now available.
  • String values read from the device are now properly escaped and length limits on these values are enforced.
  • The library API was extended with the Device and DeviceManager classes.
  • Implemented the usbguard CLI, see usbguard(1) for available commands.
  • Initial authorization policies can be now easily generated using the usbguard generate-policy command.
  • Extended the rule language with rule conditions. See usbguard-rules.conf(5) for details.
  • Moved logging code into the shared library. You can use static methods of the Logger class to configure logging behaviour.
  • Removed the bundled libsodium and libqb libraries.
  • Fixed several bugs.
  • Resolved issues: #46, #45, #41, #40, #37, #32, #31, #28, #25, #24, #21, #16, #13, #9, #4

WARNING: Backwards incompatible changes

  • The device hashing procedure was altered and generates different hash values. If you are using the hash attribute in your rules, you’ll have to update the values.
  • The bundled libsodium and libqb were removed. You’ll have to compile and install them separately if your distribution doesn’t provide them as packages.


If you are using Fedora or the USBGuard Copr repository, run:

$ sudo dnf update usbguard


Tarballs can be downloaded here:

February 06, 2016

Look over the fence – StartUp Weekend Phnom Penh

Linux and Free Software do not play the same role in South East Asia as they do in Europe or North America. To change that, at least a bit, I came here. Asian culture definitely plays a role, and this was often discussed. But it plays less of a vital role than we think and, as the linked article shows, we will not find an easy solution for the cultural differences. From my perspective it is less necessary that we adapt; most Asians I have met are willing to accept the differences and can live with them.
The people in South East Asia have other interests, and that is the problem; Google has more success here with its Google Developer Community. There are only a few successful FOSS events, and most of them are one-timers, whereas the BarCamp scene is huge: BarCamp Yangon draws 14,000 visitors, Bangkok 7,000, Phnom Penh and HCMC 5,000 each. If you go to them you will meet a lot of tech-interested people, but you will realize that the interest in startup topics is much higher. I am not the only one who has realized that; Pravin had the same experience on his visit to BarCamp Yangon.

So it was time for me to dig deeper into the startup scene and see whether we have common areas and could interconnect to mutual profit. So I participated in StartUp Weekend Phnom Penh 2016 as a mentor. What is StartUp Weekend? We would call it a hackfest: people come together to work on specific problems. At the start, some people pitch (we would call it a lightning talk) a problem they want to solve, and then people can vote for it and join that team. They then work on the problem with the goal of building a business out of it, and at the end they give a final pitch before a jury.

At this StartUp Weekend, two of the ideas were directly open-source related, but both teams did not understand some principles of the open source and free software world and will therefore be less successful. But there were other interesting ideas of a technical nature. One was the idea of uniting different smart cards into one; because of the limited storage on a magnetic card it later became a smartphone app, solving the problem only partly. Another started as an app from the beginning. In all there were 11 teams, with ideas like a student card, an idea about used books, a medical consultation app for people living in the provinces, and several more.

So there are overlaps with the free software community; in particular, there are special editions of these events for developers. We should engage in those to win new users and contributors. We have to go some other ways in Asia, that is for sure. All in all, it was an interesting experience to look over the fence.

Keystone Implied roles with CURL

Keystone now has Implied Roles. What does this mean? Let's say we define the Admin role to imply the Member role. Now, if you assign someone Admin on a project, they are automatically assigned the Member role on that project implicitly.
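Conceptually, the token's role list is the transitive closure of the directly assigned roles over the implication graph; here is a small Python sketch of that idea (my own illustration, not Keystone's implementation):

```python
def effective_roles(assigned, implies):
    """Expand directly assigned roles with every role they imply,
    following chains of implications (a transitive closure)."""
    effective = set()
    stack = list(assigned)
    while stack:
        role = stack.pop()
        if role not in effective:
            effective.add(role)
            stack.extend(implies.get(role, []))
    return effective
```

Chains of implications expand the same way: if a implies b and b implies c, assigning a yields all three.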

Let’s test it out:

Since we don’t yet have client or CLI support, we’ll have to make do with curl and jq for now.

This uses the same approach as the Keystone V3 Examples.

. ~/adminrc

export TOKEN=`curl -si -d @token-request.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | awk '/X-Subject-Token/ {print $2}'`

export ADMIN_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=admin | jq --raw-output '.roles[] | {id}[]'`

export MEMBER_ID=`curl -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles?name=_member_ | jq --raw-output '.roles[] | {id}[]'`

curl -X PUT -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/roles/$ADMIN_ID/implies/$MEMBER_ID

curl  -H"X-Auth-Token:$TOKEN" -H "Content-type: application/json" $OS_AUTH_URL/role_inferences 

Now, create a new user and assign them only the admin role.

openstack user create Phred
openstack user show Phred
| Field     | Value                            |
| domain_id | default                          |
| enabled   | True                             |
| id        | 117c6f0055a446b19f869313e4cbfb5f |
| name      | Phred                            |
$ openstack  user set --password-prompt Phred
User Password:
Repeat User Password:
$ openstack project list
| ID                               | Name  |
| fdd0b0dcf45e46398b3f9b22d2ec1ab7 | admin |
openstack role add --user 117c6f0055a446b19f869313e4cbfb5f --project fdd0b0dcf45e46398b3f9b22d2ec1ab7 e3b08f3ac45a49b4af77dcabcd640a66

Copy token-request.json and modify the values for the new user.

 curl  -d @token-request-phred.json -H "Content-type: application/json" $OS_AUTH_URL/auth/tokens | jq '.token | {roles}'
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  1643  100  1098  100   545  14742   7317 --:--:-- --:--:-- --:--:-- 14837
{
  "roles": [
    {
      "id": "9fe2ff9ee4384b1894a90878d3e92bab",
      "name": "_member_"
    },
    {
      "id": "e3b08f3ac45a49b4af77dcabcd640a66",
      "name": "admin"
    }
  ]
}

February 05, 2016

Giving up democracy to get it back

Do services like Facebook and Twitter really help worthwhile participation in democracy, or are they the most sinister and efficient mechanism ever invented to control people while giving the illusion that they empower us?

Over the last few years, groups on the left and right of the political spectrum have spoken more and more loudly about the problems in the European Union. Some advocate breaking up the EU, while behind the scenes milking it for every handout they can get. Others seek to reform it from within.

Yanis Varoufakis on motorbike

Most recently, former Greek finance minister Yanis Varoufakis has announced plans to found a movement (not a political party) that claims to "democratise" the EU by 2025. Ironically, one of his first steps has been to create a web site directing supporters to Facebook and Twitter. A groundbreaking effort to put citizens back in charge? Or further entangling activism in the false hope of platforms that are run for profit by their Silicon Valley overlords? A Greek tragedy indeed, in the classical sense.

Varoufakis rails against authoritarian establishment figures who don't put the citizens' interests first. Ironically, big data and the cloud are a far bigger threat than Brussels. The privacy and independence of each citizen is fundamental to a healthy democracy. Companies like Facebook are obliged - by law and by contract - to service the needs of their shareholders and advertisers paying to study and influence the poor user. If "Facebook privacy" settings were actually credible, who would want to buy their shares any more?

Facebook is more akin to an activism placebo: people sitting in their armchairs clicking to "Like" whales or trees are having hardly any impact at all. Maintaining democracy requires a sufficient number of people to be actively involved, whether raising funds for worthwhile causes, scrutinizing the work of our public institutions, or even writing blogs like this. Keeping them busy on Facebook and Twitter renders them impotent in the real world (but please feel free to alert your friends with a tweet).

Big data is one of the areas that requires the greatest scrutiny. Many of the professionals working in the field are actually selling out their own friends and neighbours, their own families and even themselves. The general public and the policy makers who claim to represent us are oblivious or reckless about the consequences of this all-you-can-eat feeding frenzy on humanity.

Pretending to be democratic is all part of the illusion. Facebook's recent announcement to deviate from their real-name policy is about as effective as using sunscreen to treat HIV. By subjecting themselves to the laws of Facebook, activists have simply given Facebook more status and power.

Data means power. Those who are accumulating it from us, collecting billions of tiny details about our behavior, every hour of every day, are fortifying a position of great strength with which they can personalize messages to condition anybody, anywhere, to think the way they want us to. Does that sound like the route to democracy?

I would encourage Mr Varoufakis to get up to speed with Free Software and come down to Zurich next week to hear Richard Stallman explain it, the day before he launches his DiEM25 project in Berlin.

Will the DiEM25 movement invite participation from experts on big data and digital freedom and make these issues a core element of their promised manifesto? Is there any credible way they can achieve their goal of democracy by 2025 without addressing such issues head-on?

Or put that the other way around: what will be left of democracy in 2025 if big data continues to run rampant? Will it be as distant as the gods of Greek mythology?

Still not convinced? Read about Amazon secretly removing George Orwell's 1984 and Animal Farm from Kindles while people were reading them, Apple filtering the availability of apps with a pro-Life bias and Facebook using algorithms to identify homosexual users.

testing flannel

I noticed today (maybe I’ve noticed before, but forgotten) that the version of flannel in Fedora 23 is older than what’s available in CentOS. It looks like this is because no one tested the more-recent version of flannel in Fedora’s Bodhi, a pretty awesome application for testing packages.

Why not? Maybe because it isn’t always obvious how to test a package like flannel. Here’s how I tested it, and added karma to the package in Bodhi.

I use flannel when I cluster atomic hosts together with kubernetes. I typically use the release versions of centos or fedora atomic, but the fedora project also provides an ostree image built from fedora’s updates-testing repo, where packages await karma from testers.

I prepare three atomic hosts with vagrant:

[my-laptop]$ git clone https://github.com/jasonbrooks/contrib.git

[my-laptop]$ cd contrib/ansible/vagrant

[my-laptop]$ export DISTRO_TYPE=fedora-atomic

[my-laptop]$ vagrant up --no-provision --provider=libvirt

Next, I rebase the trio of hosts to the testing tree:

[my-laptop]$ for i in {kube-node-1,kube-master,kube-node-2}; do vagrant ssh $i -c "sudo rpm-ostree rebase fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host"; done

[my-laptop]$ vagrant reload

Reloading the hosts switches them to the testing image, and runs the ansible provisioning scripts that configure the kubernetes cluster. Now to ssh to one of the boxes, confirm that I’m running an image with the newer flannel, and then run a test app on the cluster to make sure that everything is in order:

[my-laptop]$ vagrant ssh kube-master

[kube-master]$ rpm -q flannel

[kube-master]$ sudo atomic host status
  TIMESTAMP (UTC)         VERSION   ID             OSNAME            REFSPEC                                                        
* 2016-02-03 22:47:33     23.63     65cc265ae1     fedora-atomic     fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host     
  2016-01-26 18:16:33     23.53     22f0b303da     fedora-atomic     fedora-atomic:fedora-atomic/f23/x86_64/docker-host
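In that status output, the `*` marks the currently booted deployment, so after the reload the testing ref should carry the asterisk. A quick sketch of pulling out the booted refspec with awk, using the sample output above as a here-string (the timestamps and IDs are just the values shown there):

```shell
# Sample `atomic host status` output copied from above; the line
# beginning with "*" is the booted deployment, and its last field
# is the refspec
status='* 2016-02-03 22:47:33  23.63  65cc265ae1  fedora-atomic  fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host
  2016-01-26 18:16:33  23.53  22f0b303da  fedora-atomic  fedora-atomic:fedora-atomic/f23/x86_64/docker-host'
echo "$status" | awk '/^\*/ {print $NF}'
# prints the testing refspec: fedora-atomic:fedora-atomic/f23/x86_64/testing/docker-host
```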

[kube-master]$ sudo atomic run projectatomic/guestbookgo-atomicapp

That last command pulls down an atomicapp container that deploys a guestbook example app from the kubernetes project. The app includes two redis slaves, a redis master, and a trio of frontend apps that talk to those backend pieces. The bits of the app are spread between my two kubelet nodes, with flannel handling the networking in-between. If this app is working, then I’m confident that flannel is working.
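One way to eyeball that spread is to count pods per node. This is a sketch over made-up pod names and placements (mimicking the NAME/STATUS/NODE columns of `kubectl get pods -o wide`), not real cluster output:

```shell
# Hypothetical pod listing (NAME STATUS NODE columns); counting pods
# per node confirms the app is spread across both kubelet nodes
pods='redis-master-1  Running  kube-node-1
redis-slave-1   Running  kube-node-2
redis-slave-2   Running  kube-node-1
frontend-1      Running  kube-node-1
frontend-2      Running  kube-node-2
frontend-3      Running  kube-node-2'
echo "$pods" | awk '{count[$3]++} END {for (n in count) print n, count[n]}' | sort
```

With the placements above, this prints a count for each of the two nodes, showing neither node is empty.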

[kube-master]$ kubectl get svc guestbook
guestbook                 3000/TCP   app=guestbook   55m

[kube-master]$ exit

[my-laptop]$ vagrant ssh kube-node-1

[kube-node-1]$ curl
# Server

The app is working and flannel appears to be doing its job, so I marched off to Bodhi to offer up my karma:

instant karma